Madness and Civilization
Madness and Civilization: A History of Insanity in the Age of Reason (1961) is an examination by Michel Foucault of the evolution of the meaning of madness in the culture, law, politics, philosophy, and medicine of Europe—from the Middle Ages until the end of the 18th century—and a critique of the idea of history and of the historical method.
Although he uses the language of phenomenology to describe the influence of social structures in the history of the Othering of insane people from society, Madness and Civilization marks Foucault's philosophical progress from phenomenology toward something like structuralism (a label Foucault himself always adamantly rejected).
Background
Philosopher Michel Foucault developed Madness and Civilization from his earlier works in the field of psychology, his personal psychological difficulties, and his professional experiences working in a mental hospital. He wrote the book between 1955 and 1959, while he held cultural-diplomatic and educational posts in Poland and Germany, and in Sweden as director of a French cultural centre at the University of Uppsala.
Summary
In Madness and Civilization, Foucault traces the cultural evolution of the concept of insanity (madness) in three phases:
the Renaissance;
the Classical Age; and
the Modern era
Middle Ages
In the Middle Ages, society distanced lepers from itself, while in the "Classical Age" the object of social segregation moved from lepers to madmen, but in a different way. The lepers of the Middle Ages were certainly considered dangerous, but they were not the object of a radical rejection, as demonstrated by the fact that leper hospitals were almost always located near the city gates, distant from, but not invisible to, the community. This presence of the leper at the margins reminded everyone of the duty of Christian charity, and therefore played a positive role in society.
Renaissance
In the Renaissance, art portrayed insane people as possessing wisdom (knowledge of the limits of the world), whilst literature portrayed the insane as people who reveal the distinction between what men are and what men pretend to be. Renaissance art and literature further depicted insane people as intellectually engaged with reasonable people, because their madness represented the mysterious forces of cosmic tragedy. Foucault contrasts the Renaissance image of the ship of fools with later conceptions of confinement. The Renaissance, rather than locking up madmen, ensured their circulation, so that the madman as a "passenger" and "passing being" became the symbol of the human condition: "Madness is the anticipation of death".
Yet Renaissance intellectualism began to develop an objective way of thinking about and describing reason and unreason, compared with the subjective descriptions of madness from the Middle Ages.
Classical Age
At the dawn of the Age of Reason in the 17th century, there occurred "the Great Confinement" of insane people in the countries of Europe. The initial management of insane people was to segregate them to the margins of society, and then to physically separate them from society by confinement, together with other anti-social people (prostitutes, vagrants, blasphemers, et al.), in new institutions such as the General Hospital of Paris. According to Foucault, the creation of the "general hospital" corresponds to Descartes's Meditations and the desire to eliminate the irrational from philosophical discourse; "classical reason", on this account, produced a "fracture" in the history of madness. Moreover, Christian European society perceived such anti-social people as being in moral error, for having freely chosen lives of prostitution, vagrancy, blasphemy, unreason, etc. To correct such moral errors, society's new institutions for confining outcast people featured way-of-life regimes of punishment and reward meant to compel the inmates to reverse their chosen ways of life.
The socio-economic forces that promoted this institutional confinement included the need for an extrajudicial social mechanism with the legal authority to physically separate socially undesirable people from mainstream society, and for controlling the wages and employment of poor people living in workhouses, whose availability lowered the wages of free workers. The conceptual distinction between the mentally insane and the mentally sane was a social construct produced by this practice of extrajudicial separation from free society into institutional confinement. In turn, institutional confinement conveniently made insane people available to medical doctors, who were then beginning to view madness as a natural object of study, and then as an illness to be cured.
Modern era
The Modern era began at the end of the 18th century, with the creation of medical institutions for confining mentally insane people under the supervision of medical doctors. Those institutions were the product of two cultural motives: (i) the new goal of curing the insane away from poor families; and (ii) the old purpose of confining socially undesirable people to protect society. Those two distinct social purposes soon were forgotten, and the medical institution became the only place for the administration of therapeutic treatments for madness. Although nominally more enlightened in its scientific and diagnostic perspective, and more compassionate in its clinical treatment of insane people, the modern medical institution remained as cruelly controlling as mediaeval treatments for madness had been.
Reception
In the critical volume Foucault (1985), the philosopher José Guilherme Merquior said that the value of Madness and Civilization as intellectual history was diminished by errors of fact and of interpretation that undermine Foucault's thesis—that social forces determine the meanings of madness and society's responses to the mental disorder of the person. Specifically problematic was Foucault's selective citation of data, which ignored contradictory historical evidence of preventive imprisonment and physical cruelty towards insane people during the periods when, Foucault said, society perceived the mad as wise people—institutional behaviors allowed by the culture of Christian Europeans, who considered madness worse than sin. Nonetheless, Merquior said that, like Norman O. Brown's Life Against Death (1959), Madness and Civilization is "a call for the liberation of the Dionysian id", and that it provided inspiration for Anti-Oedipus: Capitalism and Schizophrenia (1972), by the philosopher Gilles Deleuze and the psychoanalyst Félix Guattari.
In his 1994 essay "Phänomenologie des Krankengeistes" ('Phenomenology of the Sick Spirit'), philosopher Gary Gutting said: "[T]he reactions of professional historians to Foucault's Histoire de la folie [1961] seem, at first reading, ambivalent, not to say polarized. There are many acknowledgements of its seminal role, beginning with Robert Mandrou's early review in [the Annales d'Histoire Economique et Sociale], characterizing it as a 'beautiful book' that will be 'of central importance for our understanding of the Classical period.' Twenty years later, Michael MacDonald confirmed Mandrou's prophecy: 'Anyone who writes about the history of insanity in early modern Europe must travel in the spreading wake of Michel Foucault's famous book, Madness and Civilization.'"
Later endorsements included Jan Goldstein, who said: "For both their empirical content and their powerful theoretical perspectives, the works of Michel Foucault occupy a special and central place in the historiography of psychiatry"; and Roy Porter: "Time has proved Madness and Civilization [to be by] far the most penetrating work ever written on the history of madness." However, although Foucault was a herald of "the new cultural history", there was much criticism.
In Psychoanalysis and Male Homosexuality (1995), Kenneth Lewes said that Madness and Civilization is an example of the "critique of the institutions of psychiatry and psychoanalysis" that occurred as part of the "general upheaval of values in the 1960s". Lewes added that the history Foucault presents in Madness and Civilization is similar to, but more profound than, The Myth of Mental Illness (1961) by Thomas Szasz.
See also
Anti-psychiatry
Cogito and the History of Madness
The Archaeology of Knowledge
Notes
References
External links
Some images and paintings that appear in the book
1961 non-fiction books
Anti-psychiatry books
French-language books
French non-fiction books
Books about mental health
Plon (publisher) books
Books about social history
Works by Michel Foucault
Feminist history
Feminist history refers to the re-reading of history from a woman's perspective. It is not the same as the history of feminism, which outlines the origins and evolution of the feminist movement. It also differs from women's history, which focuses on the role of women in historical events. The goal of feminist history is to explore and illuminate the female viewpoint of history through the rediscovery of female writers, artists, philosophers, etc., in order to recover and demonstrate the significance of women's voices and choices in the past. Feminist history seeks to change the nature of history by integrating gender into all aspects of historical analysis, while also looking through a critical feminist lens. Jill Matthews states that "the purpose of that change is political: to challenge the practices of the historical discipline that have belittled and oppressed women, and to create practices that allow women an autonomy and space for self-definition".
Two particular problems which feminist history attempts to address are the exclusion of women from the historical and philosophical tradition, and the negative characterization of women or the feminine therein; however, feminist history is not solely concerned with issues of gender per se, but rather with the reinterpretation of history in a more holistic and balanced manner.
"If we take feminism to be that cast of mind that insists that the differences and inequalities between the sexes are the result of historical processes and are not blindly "natural," we can understand why feminist history has always had a dual mission—on the one hand to recover the lives, experiences, and mentalities of women from the condescension and obscurity in which they have been so unnaturally placed, and on the other to reexamine and rewrite the entire historical narrative to reveal the construction and workings of gender." —Susan Pedersen
The "disappearing woman" has been a focus of attention of academic feminist scholarship. Research into women's history and literature reveals a rich heritage of neglected culture.
Understanding feminist history
Feminist history combines the search for past female scholars with a modern feminist perspective on how history is affected by them. While many mistake it for women's history, feminist history does not solely focus on the retelling of history from a woman's perspective; rather, it is the interpretation of history with a feminist frame of mind. It is also not to be confused with the history of feminism, which recounts the history of the feminist movements. Feminist historians, instead, include "cultural and social investigations" in the job description. Feminist history came into being as women began writing accounts of their own and other women's lives. A few of these writers, such as Susan B. Anthony and Audre Lorde, documented the histories of their feminist movements.
Feminist historians collect to analyze and analyze to connect. Rather than just recording women's history, they allow a connection to be made with "public history." However, problems remain in integrating this improved history into a curriculum appropriate for students. Finally, feminist historians must now be able to understand the digital humanities involved in creating an online database of their primary sources, as well as published works by notable feminist historians. Feminist digital humanists work with feminist historians to reveal an online integration of the two histories. Harvard's Women's Studies Database contains sources, like the Gerritsen Collection, that allow scholarly papers by feminists to be written, and that publicly convey that there is more than one history and show the progress made in combining them.
Relations to women's history
Feminist historians use women's history to explore the different voices of past women. This gathering of information requires the help of experts who have dedicated their lives to this pursuit. It provides historians with primary sources that are vital to the integration of histories. Firsthand accounts, like Fiedler's And the Walls Come Tumbling Down? (A Feminist View from East Berlin), recount the daily lives of past women and document how those lives were affected by the laws of their government. Women's historians go on to interpret how the laws changed these women's lives, but feminist historians rely on this information to observe the 'disappearing woman'. Fiedler even mentioned that "[t]hese feminists were disappointed when they met ordinary eastern women who were good housewives too, while enjoying outside work." Because these feminists only knew the public history of the German Democratic Republic, they projected themselves into the imaginary.
Upon investigation of eastern women's lives, they found that though the GDR's socialist policies encouraged women in the labor force, there had been no women creating these policies. Once again, the patriarchy had created a public history from which women were cut out. The discovery of neglected cultural accounts, similar to Fiedler's, has allowed women's historians to create large databases out of them, available to feminist historians. These sources are analyzed by the historians and compared to scholarly works published during the same time period. Finding works within the same time period is not too difficult; the challenge lies in knowing how to combine what is learned from the source with what is known from the works.
Integrating histories
Feminist historians see mainly two specific histories. The first is the public, singular history, composed of political events and newspapers. The second is made up of women's history and analyzed primary sources. The integration of these two histories helps historians to look at the past with a more feminist lens, the way feminist historians do. Professor Peter G. Filene of the University of North Carolina recounted in his paper Integrating Women's History and Regular History that "[his] purpose is to help students understand the values and behavior of people who are unlike themselves. Through history we enter other lives, analyze the forces that shaped those lives, and ultimately understand patterns of culture." In fact, when Filene was asked to teach a course on the history of American women, the revelations of past women allowed him to recognize that he was not learning heroine history, or herstory, but a compensatory history. However, this thought limited his studies: he found himself thinking of women's contributions only in relation to what men had already written down. Rather than dividing history into the 'public' and the 'domestic' sphere, one should recognize that the line between the two is imaginary.
Though not all women are politicians or war generals, boys are raised in the domestic sphere, and men return to it every day in their private homes. Even President Theodore Roosevelt said, "[n]o man can be a good citizen who is not a good husband and a good father." Just as general history needs domestic history incorporated into it, men's history cannot be understood without their private experiences being known. Women's history likewise needs women's private experiences to be combined with their public ones. To successfully integrate these histories, male and female spheres must not be treated as synonyms for the private and the public. The connections found in men's and women's public and private histories need to be systematically synthesized for the integration to succeed; the idea of just two separate histories is thus the central challenge most feminist historians face.
Feminist historiography
Feminist historiography is another notable facet of feminist history. One important feminist historiography writer and researcher is Judith M. Bennett. In her book History Matters: Patriarchy and the Challenge of Feminism, Bennett writes on the importance of studying a "patriarchal equilibrium". Cheryl Glenn also writes on the importance of feminist historiography: "Writing women (or any other traditionally disenfranchised group) into the history of rhetoric, then, can be an ethically and intellectually responsible gesture that disrupts those frozen memories in order to address silences, challenge absences, and assert women's contributions to public life." This facet of feminist history inspects historical writings that are typically assumed to be canon, and reinterprets them under a feminist lens.
See also
Herstory
History of feminism
Women's history
Feminist digital humanities
References
External links
Click! The Ongoing Feminist Revolution
Independent Voices: an open access collection of alternative press newspapers
Women's History & Feminist Theory Links (archived 9 December 2006)
home page for Lilith, a feminist history journal (archived 18 December 2006)
feminist history bibliography
The Women's History Project and The Women's History Project Page, increasing public awareness of significant female figures from various countries and cultures, and of their actions and contributions to humanity.
Women's history
Historiography
Panethnicity
Panethnicity is a political neologism used to group various ethnic groups together based on their related cultural origins; geographic, linguistic, religious, or "racial" (i.e. phenotypic) similarities are often used alone or in combination to draw panethnic boundaries. The term panethnic was used extensively during mid-20th century anti-colonial/national liberation movements. In the United States, Yen Le Espiritu popularized the term and coined the nominal term panethnicity in reference to Asian Americans, a racial category composed of disparate peoples having in common only their origin in the continent of Asia.
It has since seen some use as a replacement of the term race; for example, the aforementioned Asian Americans can be described as "a panethnicity" of various unrelated peoples of Asia, which are nevertheless perceived as a distinguishable group within the larger multiracial North American society.
More recently the term has also come to be used in contexts outside multiculturalism in US society, as a general replacement for terms like ethnolinguistic group or racial group.
The concept is to be distinguished from "pan-nationalism", which similarly groups related ethnicities but in the context of either ethnic nationalism (e.g. pan-Arabism, pan-Celticism, pan-Germanism, pan-Indianism, pan-Iranism, pan-Slavism, pan-Turkism) or civic nationalism (e.g. pan-Africanism).
United States
Panethnicity has allowed Asian Americans to unite based on similar historical relations with the United States (such as, in some cases, US military presence in their native countries). The Asian American panethnic identity has evolved to become a means for immigrant groups such as Asian Americans to unite in order to gain political strength in numbers. Similarly, one can speak of a "panethnic European American category".
The term "American" has become one of the more widespread panethnic concepts.
Mainstream institutions and political policies often play a big role in the labeling of panethnic groups. They often enact policies that deal with specific groups of people, and panethnic groups are one way to group large numbers of people. Public policy might dole out resources or make deals with multiple groups, viewing them all as one large entity.
Panethnic labels are often, though not always, created and employed by outsiders of the group that is being defined panethnically. In the case of the Asian American movement of the 1960s and 1970s, the panethnic label "Asian American" was not created by outsiders; rather, it was coined by professor Yuji Ichioka and his spouse, Emma Gee, in order to consolidate Asian activists that they had seen at various political demonstrations of the time. The manner in which the two garnered support for the alliance sheds light on the expressly panethnic approach that was at the core of this new Asian American identity: they went through the roster of the Peace and Freedom Party, a majority white anti-war organization that was protesting the Vietnam War at the time, and telephoned all the individuals they could find with "Asian" surnames. Though the Asian American identity was initially not inclusive of many Asian ethnicities, new waves of Asian immigrants since the 1965 Immigration and Nationality Act have accelerated the expansion of the identity. At the time of the 2000 US census, 88% of Asian America was made up of six Asian ethnicities: Chinese, Japanese, Korean, Filipino, Indian, and Vietnamese.
Criticism
The use of "Asian American" as a panethnic racial label is often criticized, due to the term only encompassing some of the diverse peoples of Asia, and for grouping together the racially and culturally different South Asians with East Asians as the same "race". Americans of West Asian descent, such as Iranian, Israeli, Armenian, and many Arab nationalities, are notably excluded from the term despite West Asia being geographically part of Asia. As well as West Asians having racial and cultural similarities with South Asians. The common justification for grouping together South Asians and East Asians is because of Buddhism's origins in India, but the religion has "practically died out" in South Asia.
Although the panethnic term refers to Americans of East Asian, South Asian, and Southeast Asian ancestry, "Asian American" is usually treated as synonymous with people of East Asian ancestry or appearance, which has caused some to highlight the general exclusion of South Asians and Southeast Asians.
See also
References
Sources
Ethnicity
Race in the United States
Dialectic
Dialectic (Greek dialektikḗ), also known as the dialectical method, refers originally to dialogue between people holding different points of view about a subject but wishing to arrive at the truth through reasoned argumentation. Dialectic resembles debate, but the concept excludes subjective elements such as emotional appeal and rhetoric. It has its origins in ancient philosophy and continued to be developed in the Middle Ages.
Hegelianism refigured "dialectic" to no longer refer to a literal dialogue. Instead, the term takes on the specialized meaning of development by way of overcoming internal contradictions. Dialectical materialism, a theory advanced by Karl Marx and Friedrich Engels, adapted the Hegelian dialectic into a materialist theory of history. The legacy of Hegelian and Marxian dialectics has been criticized by philosophers such as Karl Popper and Mario Bunge, who considered it unscientific.
Dialectic implies a developmental process and so does not naturally fit within classical logic. Nevertheless, some twentieth-century logicians have attempted to formalize it.
History
There are a variety of meanings of dialectic or dialectics within Western philosophy.
Classical philosophy
In classical philosophy, dialectic is a form of reasoning based upon dialogue of arguments and counter-arguments, advocating propositions (theses) and counter-propositions (antitheses). The outcome of such a dialectic might be the refutation of a relevant proposition, or a synthesis, a combination of the opposing assertions, or a qualitative improvement of the dialogue.
The term "dialectic" owes much of its prestige to its role in the philosophies of Socrates and Plato, in the Greek Classical period (5th to 4th centuries BC). Aristotle said that it was the pre-Socratic philosopher Zeno of Elea who invented dialectic, of which the dialogues of Plato are examples of the Socratic dialectical method.
Socratic method
The Socratic dialogues are a particular form of dialectic known as the method of elenchus (literally, "refutation, scrutiny"), whereby a series of questions clarifies a more precise statement of a vague belief, logical consequences of that statement are explored, and a contradiction is discovered. The method is largely destructive, in that false belief is exposed, and constructive only in that this exposure may lead to further search for truth. The detection of error does not amount to a proof of the antithesis; for example, a contradiction in the consequences of a definition of piety does not provide a correct definition. The principal aim of Socratic activity may be to improve the soul of the interlocutors, by freeing them from unrecognized errors, or indeed, by teaching them the spirit of inquiry.
In common cases, Socrates uses enthymemes as the foundation of his argument.
For example, in the Euthyphro, Socrates asks Euthyphro to provide a definition of piety. Euthyphro replies that the pious is that which is loved by the gods. But, Socrates also has Euthyphro agreeing that the gods are quarrelsome and their quarrels, like human quarrels, concern objects of love or hatred. Therefore, Socrates reasons, at least one thing exists that certain gods love but other gods hate. Again, Euthyphro agrees. Socrates concludes that if Euthyphro's definition of piety is acceptable, then there must exist at least one thing that is both pious and impious (as it is both loved and hated by the gods)—which Euthyphro admits is absurd. Thus, Euthyphro is brought to a realization by this dialectical method that his definition of piety is not sufficiently meaningful.
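The logical skeleton of this elenchus can be made explicit. The following is a schematic reconstruction in first-order notation; the premise labels and predicate names are illustrative glosses, not Plato's own apparatus:

\begin{align*}
\text{(P1)} &\quad \forall x\,\bigl(\mathrm{Pious}(x) \leftrightarrow \mathrm{LovedByGods}(x)\bigr) && \text{Euthyphro's definition} \\
\text{(P2)} &\quad \exists x\,\bigl(\mathrm{LovedByGods}(x) \wedge \mathrm{HatedByGods}(x)\bigr) && \text{the gods quarrel over some things} \\
\text{(P3)} &\quad \forall x\,\bigl(\mathrm{HatedByGods}(x) \rightarrow \mathrm{Impious}(x)\bigr) && \text{implicit premise Euthyphro grants} \\
\therefore &\quad \exists x\,\bigl(\mathrm{Pious}(x) \wedge \mathrm{Impious}(x)\bigr) && \text{absurdity}
\end{align*}

The derived contradiction discharges the definition (P1) rather than establishing a rival definition, which matches the point above that the elenchus is destructive rather than constructive.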
In another example, in Plato's Gorgias, dialectic occurs between Socrates, the Sophist Gorgias, and two men, Polus and Callicles. Because Socrates' ultimate goal was to reach true knowledge, he was even willing to change his own views in order to arrive at the truth. The fundamental goal of dialectic, in this instance, was to establish a precise definition of the subject (in this case, rhetoric) and with the use of argumentation and questioning, make the subject even more precise. In the Gorgias, Socrates reaches the truth by asking a series of questions and in return, receiving short, clear answers.
Plato
In Platonism and Neoplatonism, dialectic assumed an ontological and metaphysical role in that it became the process whereby the intellect passes from sensibles to intelligibles, rising from idea to idea until it finally grasps the supreme idea, the first principle which is the origin of all. The philosopher is consequently a "dialectician". In this sense, dialectic is a process of inquiry that does away with hypotheses up to the first principle. It slowly embraces multiplicity in unity. The philosopher Simon Blackburn wrote that the dialectic in this sense is used to understand "the total process of enlightenment, whereby the philosopher is educated so as to achieve knowledge of the supreme good, the Form of the Good".
Medieval philosophy
Logic, which could be considered to include dialectic, was one of the three liberal arts taught in medieval universities as part of the trivium; the other elements were rhetoric and grammar.
Based mainly on Aristotle, the first medieval philosopher to work on dialectics was Boethius (480–524). After him, many scholastic philosophers also made use of dialectics in their works, such as Abelard, William of Sherwood, Garlandus Compotista, Walter Burley, Roger Swyneshed, William of Ockham, and Thomas Aquinas.
This dialectic (a quaestio disputata) was formed as follows:
The question to be determined ("It is asked whether...");
A provisory answer to the question ("And it seems that...");
The principal arguments in favor of the provisory answer;
An argument against the provisory answer, traditionally a single argument from authority ("On the contrary...");
The determination of the question after weighing the evidence ("I answer that...");
The replies to each of the initial objections. ("To the first, to the second etc., I answer that...")
Modern philosophy
The concept of dialectics was given new life at the start of the 19th century by Georg Wilhelm Friedrich Hegel, whose dialectical model of nature and of history made dialectics a fundamental aspect of reality, instead of regarding the contradictions into which dialectics leads as evidence of the limits of pure reason, as Immanuel Kant had argued. Hegel was influenced by Johann Gottlieb Fichte's conception of synthesis, although Hegel did not adopt Fichte's "thesis–antithesis–synthesis" language except to describe Kant's philosophy: rather, Hegel argued that such language was "a lifeless schema" imposed on various contents, whereas he saw his own dialectic as flowing out of "the inner life and self-movement" of the content itself.
In the mid-19th century, Hegelian dialectic was appropriated by Karl Marx and Friedrich Engels and retooled in what they considered to be a nonidealistic manner. It would also become a crucial part of later representations of Marxism as a philosophy of dialectical materialism. These representations often contrasted dramatically and led to vigorous debate among different Marxist groups.
Hegelian dialectic
The Hegelian dialectic describes changes in the forms of thought through their own internal contradictions into concrete forms that overcome previous oppositions.
This dialectic is sometimes presented in a threefold manner, as first stated by Heinrich Moritz Chalybäus, as comprising three dialectical stages of development: a thesis, giving rise to its reaction; an antithesis, which contradicts or negates the thesis; and the tension between the two being resolved by means of a synthesis. Hegel himself, however, opposed these terms.
By contrast, the terms abstract, negative, and concrete suggest a flaw or an incompleteness in any initial thesis. For Hegel, the concrete must always pass through the phase of the negative, that is, mediation. This is the essence of what is popularly called Hegelian dialectics.
To describe the activity of overcoming the negative, Hegel often used the term Aufhebung, variously translated into English as "sublation" or "overcoming", to conceive of the working of the dialectic. Roughly, the term indicates preserving the true portion of an idea, thing, society, and so forth, while moving beyond its limitations. What is sublated, on the one hand, is overcome, but, on the other hand, is preserved and maintained.
As in the Socratic dialectic, Hegel claimed to proceed by making implicit contradictions explicit: each stage of the process is the product of contradictions inherent or implicit in the preceding stage. On his view, the purpose of dialectics is "to study things in their own being and movement and thus to demonstrate the finitude of the partial categories of understanding".
For Hegel, even history can be reconstructed as a unified dialectic, the major stages of which chart a progression from self-alienation as servitude to self-unification and realization as the rational constitutional state of free and equal citizens.
Marxist dialectic
Marxist dialectic is a form of Hegelian dialectic which applies to the study of historical materialism. Marxist dialectic is thus a method by which one can examine social and economic behaviors. It is the foundation of the philosophy of dialectical materialism, which forms the basis of historical materialism.
In the Marxist tradition, "dialectic" refers to regular and mutual relationships, interactions, and processes in nature, society, and human thought.
A dialectical relationship is a relationship in which two phenomena or ideas mutually impact each other, leading to development and negation. Development refers to the change and motion of phenomena and ideas from less advanced to more advanced or from less complete to more complete. Dialectical negation refers to a stage of development in which a contradiction between two previous subjects gives rise to a new subject. In the Marxist view, dialectical negation is never an endpoint, but instead creates new conditions for further development and negation.
Karl Marx and Friedrich Engels, writing several decades after Hegel's death, proposed that Hegel's dialectic is too abstract. Against this, Marx presented his own dialectical method, which he claimed to be the "direct opposite" of Hegel's.
Marxist dialectics is exemplified in Marx's Das Kapital.
Class struggle is the primary contradiction to be resolved by Marxist dialectics, because of its central role in the social and political lives of a society. Nonetheless, Marx and Marxists also developed the concept of class struggle to comprehend the dialectical contradictions between mental and manual labor and between town and country. Hence, philosophic contradiction is central to the development of dialectics: the progress from quantity to quality; the acceleration of gradual social change; the negation of the initial development of the status quo; the negation of that negation; and the high-level recurrence of features of the original status quo.
Friedrich Engels further proposed that nature itself is dialectical, and that this is "a very simple process, which is taking place everywhere and every day". His dialectical "law of the transformation of quantity into quality and vice versa" corresponds, according to Christian Fuchs, to the concept of phase transition and anticipated the concept of emergence "a hundred years ahead of his time".
For Vladimir Lenin, the primary feature of Marx's "dialectical materialism" (Lenin's term) is its application of materialist philosophy to history and social sciences. Lenin's main contribution to the philosophy of dialectical materialism is his theory of reflection, which presents human consciousness as a dynamic reflection of the objective material world that fully shapes its contents and structure.
Later, Stalin's works on the subject established a rigid and formalistic division of Marxist–Leninist theory into dialectical materialism and historical materialism. While the first was supposed to be the key method and theory of the philosophy of nature, the second was the Soviet version of the philosophy of history.
Soviet systems theory pioneer Alexander Bogdanov viewed Hegelian and materialist dialectic as progressive, albeit inexact and diffuse, attempts at achieving what he called tektology, or a universal science of organization.
Dialectical naturalism
Dialectical naturalism is a term coined by American philosopher Murray Bookchin to describe the philosophical underpinnings of the political program of social ecology. Dialectical naturalism explores the complex interrelationship between social problems, and the direct consequences they have on the ecological impact of human society. Bookchin offered dialectical naturalism as a contrast to what he saw as the "empyrean, basically antinaturalistic dialectical idealism" of Hegel, and "the wooden, often scientistic dialectical materialism of orthodox Marxists".
Theological dialectics
Neo-orthodoxy, in Europe also known as theology of crisis and dialectical theology, is an approach to theology in Protestantism that was developed in the aftermath of the First World War (1914–1918). It is characterized as a reaction against doctrines of 19th-century liberal theology and a more positive reevaluation of the teachings of the Reformation, much of which had been in decline (especially in western Europe) since the late 18th century. It is primarily associated with two Swiss professors and pastors, Karl Barth (1886–1968) and Emil Brunner (1899–1966), even though Barth himself expressed his unease in the use of the term.
In dialectical theology the difference and opposition between God and human beings is stressed in such a way that all human attempts at overcoming this opposition through moral, religious or philosophical idealism must be characterized as 'sin'. In the death of Christ humanity is negated and overcome, but this judgment also points forwards to the resurrection in which humanity is reestablished in Christ. For Barth this meant that only through God's 'no' to everything human can his 'yes' be perceived. Applied to traditional themes of Protestant theology, such as double predestination, this means that election and reprobation cannot be viewed as a quantitative limitation of God's action. Rather it must be seen as its "qualitative definition". As Christ bore the rejection as well as the election of God for all humanity, every person is subject to both aspects of God's double predestination.
Dialectic prominently figured in Bernard Lonergan's philosophy, in his books Insight and Method in Theology. Michael Shute wrote about Lonergan's use of dialectic in The Origins of Lonergan's Notion of the Dialectic of History. For Lonergan, dialectic is both individual and operative in community; simply described, it is a dynamic process that results in something new.
Dialectic is one of the eight functional specialties Lonergan envisaged for theology to bring this discipline into the modern world. Lonergan believed that the lack of an agreed method among scholars had inhibited substantive agreement from being reached and progress from being made compared to the natural sciences. Karl Rahner, S.J., however, criticized Lonergan's theological method in a short article entitled "Some Critical Thoughts on 'Functional Specialties in Theology'" where he stated: "Lonergan's theological methodology seems to me to be so generic that it really fits every science, and hence is not the methodology of theology as such, but only a very general methodology of science."
Criticisms
Friedrich Nietzsche viewed dialectic as a method that imposes artificial boundaries and suppresses the richness and diversity of reality. He rejected the notion that truth can be fully grasped through dialectical reasoning and offered a critique of dialectic, challenging its traditional framework and emphasizing the limitations of its approach to understanding reality. He expressed skepticism towards its methodology and implications in his work Twilight of the Idols: "I mistrust all systematizers and I avoid them. The will to a system is a lack of integrity". In the same book, Nietzsche criticized Socrates' dialectics because he believed it prioritized reason over instinct, resulting in the suppression of individual passions and the imposition of an artificial morality.
Karl Popper attacked the dialectic repeatedly. In 1937, he wrote and delivered a paper entitled "What Is Dialectic?" in which he criticized the dialectics of Hegel, Marx, and Engels for their willingness "to put up with contradictions". He argued that accepting contradiction as a valid form of logic would lead to the principle of explosion and thus trivialism. Popper concluded the essay with these words: "The whole development of dialectic should be a warning against the dangers inherent in philosophical system-building. It should remind us that philosophy should not be made a basis for any sort of scientific system and that philosophers should be much more modest in their claims. One task which they can fulfill quite usefully is the study of the critical methods of science". Seventy years later, Nicholas Rescher responded that "Popper's critique touches only a hyperbolic version of dialectic", and he quipped: "Ironically, there is something decidedly dialectical about Popper's critique of dialectics." Around the same time as Popper's critique was published, philosopher Sidney Hook discussed the "sense and nonsense in dialectic" and rejected two conceptions of dialectic as unscientific but accepted one conception as a "convenient organizing category".
The philosopher of science and physicist Mario Bunge repeatedly criticized Hegelian and Marxian dialectics, calling them "fuzzy and remote from science" and a "disastrous legacy". He concluded: "The so-called laws of dialectics, such as formulated by Engels (1940, 1954) and Lenin (1947, 1981), are false insofar as they are intelligible." Poe Yu-ze Wan, reviewing Bunge's criticisms of dialectics, found Bunge's arguments to be important and sensible, but he thought that dialectics could still serve some heuristic purposes for scientists. Wan pointed out that scientists such as the American Marxist biologists Richard Levins and Richard Lewontin (authors of The Dialectical Biologist) and the German-American evolutionary biologist Ernst Mayr, not a Marxist himself, have found agreement between dialectical principles and their own scientific outlooks, although Wan opined that Engels's "laws" of dialectics "in fact 'explain' nothing".
Even some Marxists are critical of the term "dialectics". For instance, Michael Heinrich wrote, "More often than not, the grandiose rhetoric about dialectics is reducible to the simple fact that everything is dependent upon everything else and is in a state of interaction and that it's all rather complicated—which is true in most cases, but doesn't really say anything."
Formalization
Defeasibility
Dialog games
Mathematics
Mathematician William Lawvere interpreted dialectics in the setting of categorical logic in terms of adjunctions between idempotent monads. This perspective may be useful in the context of theoretical computer science, where the duality between syntax and semantics can be interpreted as a dialectic in this sense. For example, the Curry–Howard correspondence is such an adjunction, as is, more generally, the duality between closed monoidal categories and their internal logic.
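A concrete instance of such an adjunction, offered here as a standard textbook example rather than anything drawn from Lawvere's own paper, is the tensor–hom adjunction of a closed monoidal category $\mathcal{C}$, in which tensoring with an object $B$ is left adjoint to the internal hom $[B,-]$:

\[
\mathcal{C}(A \otimes B,\; C) \;\cong\; \mathcal{C}(A,\; [B, C])
\]

Read through Curry–Howard, this natural isomorphism is currying: a proof of $(A \wedge B) \to C$ corresponds exactly to a proof of $A \to (B \to C)$, so the same structure appears on the syntactic (proof-theoretic) side and on the semantic (categorical) side of the duality.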
See also
Conversation
Dialogue
A philosophical journal
Various works on dialectics and logical reasoning
Dialectical behavior therapy
Dialectical research
Dialogic
Discourse
Doublethink
False dilemma
Reflective equilibrium
Relational dialectics
Tarka sastra
Unity of opposites
Universal dialectic
References
External links
v:Dialectic algorithm – An algorithm based on the principles of classical dialectics
Studies in the Hegelian Dialectic by J. M. E. McTaggart (1896) at marxists.org
Rhetoric
Philosophical methodology
Concepts in ancient Greek metaphysics
Ancient Greek logic | 0.766531 | 0.999204 | 0.765921 |
Architectural style | An architectural style is a classification of buildings (and nonbuilding structures) based on a set of characteristics and features, including overall appearance, arrangement of the components, method of construction, building materials used, form, size, structural design, and regional character.
Architectural styles are frequently associated with a historical epoch (Renaissance style), geographical location (Italian Villa style), or an earlier architectural style (Neo-Gothic style), and are influenced by the corresponding broader artistic style and the "general human condition". Heinrich Wölfflin even declared an analogy between a building and a costume: an "architectural style reflects the attitude and the movement of people in the period concerned".
Construction in the 21st century uses a multitude of styles, sometimes lumped together as "contemporary architecture" on the basis of a common trait: extreme reliance on computer-aided architectural design (cf. Parametricism).
Folk architecture (also "vernacular architecture") is not a style, but an application of local customs to small-scale construction without clear identity of the builder.
Styles in the history of architecture
The concept of architectural style is studied in architectural history as one of the approaches ("style and period") used to organize the history of architecture (Leach lists five other approaches: "biography, geography and culture, type, technique, theme and analogy"). Style provides an additional relationship between otherwise disparate buildings, thus serving as a "protection against chaos".
The concept of style was foreign to architects until the 18th century. Prior to the era of Enlightenment, architectural form was mostly considered timeless, either a divine revelation or an absolute truth derived from the laws of nature, and a great architect was one who understood this "language". The new interpretation of history declared each historical period to be a stage in the growth of humanity (cf. Johann Gottfried Herder's Volksgeist, which much later developed into Zeitgeist). This approach allowed the architecture of each age to be classified as an equally valid approach, a "style" (the use of the word in this sense became established by the mid-18th century).
Style has been the subject of extensive debate since at least the 19th century. Many architects argue that the notion of "style" cannot adequately describe contemporary architecture, that it is obsolete and ridden with historicism. In their opinion, by concentrating on the appearance of the building, style classification misses the ideas, hidden from view, that architects put into the form. Studying the history of architecture without reliance on styles usually relies on a "canon" of important architects and buildings, and the lesser objects in this approach do not deserve attention: "A bicycle shed is a building; Lincoln Cathedral is a piece of architecture" (Nikolaus Pevsner, 1943). Nonetheless, the traditional and popular approach to architectural history is through a chronology of styles, with changes reflecting the evolution of materials, economics, fashions, and beliefs.
Works of architecture are unlikely to be preserved for their aesthetic value alone; with practical re-purposing, the intent of the original architect, and sometimes the architect's very identity, can be forgotten, and the building's style becomes "an indispensable historical tool".
Evolution of style
Styles emerge from the history of a society. At any time several styles may be fashionable, and when a style changes it usually does so gradually, as architects learn and adapt to new ideas. The new style is sometimes only a rebellion against an existing style, such as postmodern architecture (meaning "after modernism"), which in 21st century has found its own language and split into a number of styles which have acquired other names.
Architectural styles often spread to other places, so that the style at its source continues to develop in new ways while other countries follow with their own twist. For instance, Renaissance ideas emerged in Italy around 1425 and spread to all of Europe over the next 200 years, with the French, German, English, and Spanish Renaissances showing recognisably the same style, but with unique characteristics. An architectural style may also spread through colonialism, either by foreign colonies learning from their home country, or by settlers moving to a new land. One example is the Spanish missions in California, brought by Spanish priests in the late 18th century and built in a unique style.
After an architectural style has gone out of fashion, revivals and re-interpretations may occur. For instance, classicism has been revived many times and found new life as neoclassicism. Each time it is revived, it is different. The Spanish mission style was revived 100 years later as the Mission Revival, and that soon evolved into the Spanish Colonial Revival.
History of the concept of architectural style
Early writing on the subjects of architectural history, since the works of Vitruvius in the 1st century B.C., treated architecture as a patrimony that was passed on to the next generation of architects by their forefathers. Giorgio Vasari in the 16th century shifted the narrative to biographies of the great artists in his "Lives of the Most Excellent Painters, Sculptors, and Architects".
Constructing schemes of the period styles of historic art and architecture was a major concern of 19th-century scholars in the new and initially mostly German-speaking field of art history. Important writers on the broad theory of style included Carl Friedrich von Rumohr, Gottfried Semper, and Alois Riegl (in his Stilfragen of 1893); Heinrich Wölfflin and Paul Frankl continued the debate into the 20th century. Paul Jacobsthal and Josef Strzygowski are among the art historians who followed Riegl in proposing grand schemes tracing the transmission of elements of styles across great ranges in time and space. This type of art history is also known as formalism, or the study of forms or shapes in art. Wölfflin declared the goal of formalism to be an "art history without names", in which an architect's work has a place in history independent of its author. The subject of study was no longer the ideas that Borromini borrowed from Maderno, who in turn learned from Michelangelo; instead, the questions now concerned the continuity and changes observed as architecture transitioned from Renaissance to Baroque.
Semper, Wölfflin, and Frankl, and later Ackerman, had backgrounds in the history of architecture, and like many other terms for period styles, "Romanesque" and "Gothic" were initially coined to describe architectural styles, where major changes between styles can be clearer and easier to define, not least because style in architecture is easier to replicate by following a set of rules than style in figurative art such as painting. Terms that originated to describe architectural periods were often subsequently applied to other areas of the visual arts, and then more widely still to music, literature and the general culture. In architecture stylistic change often follows, and is made possible by, the discovery of new techniques or materials, from the Gothic rib vault to modern metal and reinforced concrete construction. A major area of debate in both art history and archaeology has been the extent to which stylistic change in other fields like painting or pottery is also a response to new technical possibilities, or has its own impetus to develop (the Kunstwollen of Riegl), or changes in response to social and economic factors affecting patronage and the conditions of the artist, as current thinking tends to emphasize, using less rigid versions of Marxist art history.
Although style was well established as a central component of art-historical analysis, seeing it as the overriding factor in art history had fallen out of fashion by World War II, as other ways of looking at art were developing and a reaction against the emphasis on style was setting in; for Svetlana Alpers, "the normal invocation of style in art history is a depressing affair indeed". According to James Elkins, "In the later 20th century criticisms of style were aimed at further reducing the Hegelian elements of the concept while retaining it in a form that could be more easily controlled".
Practical issues
In the middle of the 19th century, multiple aesthetic and social factors forced architects to design new buildings using a selection of styles patterned after historical ones (working "in every style or none"), and style definition became a practical matter. The choice of an appropriate style was the subject of elaborate discussion; for example, the Cambridge Camden Society argued that churches in the new British colonies should be built in the Norman style, so that local architects and builders could go through the paces of repeating the architectural history of England.
See also
Historicism (architecture)
History of architecture
List of architectural styles
Revivalism (architecture)
Notes
References
"Alpers in Lang": Alpers, Svetlana, "Style is What You Make It", in The Concept of Style, ed. Berel Lang, (Ithaca: Cornell University Press, 1987), 137–162, google books.
Elkins, James, "Style" in Grove Art Online, Oxford Art Online, Oxford University Press, accessed March 6, 2013, subscriber link
Elsner, Jas, "Style" in Critical Terms for Art History, Nelson, Robert S. and Shiff, Richard, 2nd Edn. 2010, University of Chicago Press, ISBN 9780226571690, google books
Gombrich, E. "Style" (1968), orig. International Encyclopedia of the Social Sciences, ed. D. L. Sills, xv (New York, 1968), reprinted in Preziosi, D. (ed.) The Art of Art History: A Critical Anthology (see below), whose page numbers are used.
"Kubler in Lang": Kubler, George, Towards a Reductive Theory of Style, in Lang
Lang, Berel (ed.), The Concept of Style, 1987, Ithaca: Cornell University Press, ISBN 9780801494390, google books; includes essays by Alpers and Kubler
Preziosi, D. (ed.) The Art of Art History: A Critical Anthology, Oxford: Oxford University Press, 1998.
Architectural design
Architectural history
Individual
An individual is one that exists as a distinct entity. Individuality (or self-hood) is the state or quality of living as an individual; particularly (in the case of humans) as a person unique from other people and possessing one's own needs or goals, rights and responsibilities. The concept of an individual features in many fields, including biology, law, and philosophy. Every individual contributes significantly to the growth of a civilization. Society is a multifaceted concept shaped and influenced by a wide range of things, including human behaviors, attitudes, and ideas; the culture, morals, and beliefs of others, as well as the general direction and trajectory of a society, can all be influenced and shaped by an individual's activities.
Etymology
From the 15th century and earlier (and also today within the fields of statistics and metaphysics) individual meant "indivisible", typically describing any numerically singular thing, but sometimes meaning "a person". From the 17th century on, an individual has indicated separateness, as in individualism.
Biology
In biology, the question of the individual is related to the definition of an organism, which is an important question in biology and the philosophy of biology, despite there having been little work devoted explicitly to this question. An individual organism is not the only kind of individual that is considered as a "unit of selection". Genes, genomes, or groups may function as individual units.
Asexual reproduction occurs in some colonial organisms so that the individuals are genetically identical. Such a colony is called a genet, and an individual in such a population is referred to as a ramet. The colony, rather than the individual, functions as a unit of selection. In other colonial organisms, individuals may be closely related to one another but may differ as a result of sexual reproduction.
Law
Although individuality and individualism are commonly considered to mature with age/time and experience/wealth, a sane adult human being is usually considered by the state as an "individual person" in law, even if the person denies individual culpability ("I followed instructions").
An individual person is accountable for their actions/decisions/instructions, subject to prosecution in both national and international law, from the time that they have reached the age of majority, often though not always more or less coinciding with the granting of voting rights, responsibility for paying tax, military duties, and the individual right to bear arms (protected only under certain constitutions).
Philosophy
Buddhism
In Buddhism, the concept of the individual lies in anatman, or "no-self." According to anatman, the individual is really a series of interconnected processes that, working together, give the appearance of being a single, separated whole. In this way, anatman, together with anicca, resembles a kind of bundle theory. Instead of an atomic, indivisible self distinct from reality, the individual in Buddhism is understood as an interrelated part of an ever-changing, impermanent universe (see Interdependence, Nondualism, Reciprocity).
Empiricism
Empiricists such as Ibn Tufail in early 12th century Islamic Spain and John Locke in late 17th century England viewed the individual as a tabula rasa ("blank slate"), shaped from birth by experience and education. This ties into the idea of the liberty and rights of the individual, society as a social contract between rational individuals, and the beginnings of individualism as a doctrine.
Hegel
Georg Wilhelm Friedrich Hegel regarded history as the gradual evolution of the Mind as it tests its own concepts against the external world. Each time the mind applies its concepts to the world, the concept is revealed to be only partly true, within a certain context; thus the mind continually revises these incomplete concepts so as to reflect a fuller reality (commonly known as the process of thesis, antithesis, and synthesis). The individual comes to rise above their own particular viewpoint, and grasps that they are a part of a greater whole insofar as they are bound to family, a social context, and/or a political order.
Existentialism
With the rise of existentialism, Søren Kierkegaard rejected Hegel's notion of the individual as subordinated to the forces of history. Instead, he elevated the individual's subjectivity and capacity to choose their own fate. Later existentialists built upon this notion. Friedrich Nietzsche, for example, examines the individual's need to define their own self and circumstances in his concept of the will to power and the heroic ideal of the Übermensch. The individual is also central to Sartre's philosophy, which emphasizes individual authenticity, responsibility, and free will. In both Sartre and Nietzsche (and in Nikolai Berdyaev), the individual is called upon to create their own values, rather than rely on external, socially imposed codes of morality.
Objectivism
Ayn Rand's Objectivism regards every human as an independent, sovereign entity that possesses an inalienable right to their own life, a right derived from their nature as a rational being. On this view, a civilized society, or any form of association, cooperation or peaceful coexistence among humans, can be achieved only on the basis of the recognition of individual rights, and a group, as such, has no rights other than the individual rights of its members. For Rand, the principle of individual rights is the only moral base of all groups or associations: since only an individual man or woman can possess rights, the expression "individual rights" is a redundancy (one she considered necessary for clarification in "today's intellectual chaos"), while the expression "collective rights" is a contradiction in terms. In Objectivism, individual rights are not subject to a public vote; a majority has no right to vote away the rights of a minority, and the political function of rights is precisely to protect minorities from oppression by majorities (the smallest minority on earth being the individual).
See also
Self
Individualism
Personhood
Further reading
Gracia, Jorge J. E. (1988) Individuality: An Essay on the Foundations of Metaphysics. State University of New York Press.
Klein, Anne Carolyn (1995) Meeting the Great Bliss Queen: Buddhists, Feminists, and the Art of the Self.
Seeing Like a State
Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed is a book by James C. Scott critical of a system of beliefs he calls high modernism: an overconfidence in governments' ability to design and operate society in accordance with purported scientific laws.
The book argues that states seek to force "legibility" on their subjects by homogenizing them and creating standards that simplify pre-existing, natural, diverse social arrangements. Examples include the introduction of family names, censuses, uniform languages, and standard units of measurement. While such innovations aim to facilitate state control and economies of scale, Scott argues that the eradication of local differences and the silencing of local expertise can have adverse effects.
The book was first published in March 1998, with a paperback version appearing in February 1999.
Summary
Scott shows how central governments attempt to force legibility on their subjects and, in doing so, fail to see complex, valuable forms of local social order and knowledge. A main theme of the book, illustrated by his historical examples, is that states push toward "legibility" so that they can view their subjects from the top down, from the tower or the seat of government, in a simplified modernist model; this model is flawed and often ends poorly for the subjects it is meant to govern.
The book uses examples like the introduction of permanent last names in Great Britain, cadastral surveys in France, and standard units of measure across Europe to argue that a reconfiguration of social order is necessary for state scrutiny, and requires the simplification of pre-existing, natural arrangements. While, in earlier times, a field could be measured in the number of cows it could sustain or the types of plants it could grow, after centralization its size is measured in hectares. This allows governors who have little to no local knowledge to immediately understand the outline of the area, but simultaneously blinds the state to the complex interactions which happen within nature and society. In agriculture and forestry, for example, it led to monoculture, the sole focus on cultivating a single crop or tree at the cost of all others. While monoculture is easy to measure, manage, and understand, it is also less resilient to ecological crises than polyculture is.
In the case of last names, Scott cites a Welsh man who appeared in court and identified himself with a long string of patronyms: "John ap Thomas ap William", etc. In his local village, this naming system carried a lot of information, because people could identify him as the son of Thomas and grandson of William, and thus distinguish him from the other Johns, the other children of Thomas, and the other grandchildren of William. Yet it was of little use to the central government, which did not know Thomas or William. The court demanded that John take a permanent last name (in this case, the name of his village). This helped the central government keep track of its subjects, at the cost of a more nuanced yet fuzzy and less legible understanding of local conditions.
Schemes that successfully improve human lives, Scott argues, must take local conditions into account, and the high-modernist ideologies of the 20th century prevented this. He highlights collective farms in the Soviet Union, the building of Brasilia, and forced villagization in 1970s Tanzania as examples of failed schemes that were led by top-down bureaucratic efforts and in which officials ignored or silenced local expertise.
Scott is careful to emphasize that he is not necessarily anti-state. At times the central role played by the state is necessary, for programs such as disaster response or vaccination. But the flattening of knowledge that goes hand in hand with state centralization can have disastrous consequences when officials treat centralized knowledge as the only legitimate information worth considering, ignoring more specialized but less clearly defined indigenous and local expertise.
Scott explores the concept of "metis", practical knowledge gained through experience and shaped by individual contexts. He contrasts it with "epistemic" knowledge, which is formalized, standardized, and associated with scientific methods and institutional education. Metis, by contrast, is adaptable and diverse: it emerges from the accumulated experiences of individuals within specific contexts, producing a rich tapestry of localized knowledge systems, and this flexibility allows it to evolve with changing circumstances and makes it highly applicable in practical domains. Scott also discusses the challenges metis faces in contemporary society, particularly under industrialization and state control. Attempts to standardize knowledge and impose universal ideologies, he argues, undermine the diverse nature of metis, marginalizing localized knowledge systems in favor of centralized forms of knowledge production; authoritarian efforts to impose rigid knowledge frameworks likewise overlook the nuanced, context-dependent character of metis, seeking instead to homogenize and control knowledge for political or economic purposes. Scott therefore advocates preserving and acknowledging metis alongside epistemic knowledge, emphasizing the relevance of experience-derived practical knowledge in addressing complex challenges and promoting resilience in the face of change.
Scott also examines the limitations of high-modernist urban planning and social engineering, contending that these approaches often produce unsustainable outcomes and diminish human autonomy and abilities. He contrasts the rigid, centralized designs of high modernism with the adaptable, diverse nature of institutions shaped by practical wisdom, or metis, and criticizes the monocultural, one-dimensional nature of high-modernist projects for failing to account for the complexity and dynamism of real-life systems. Examples from agriculture, urban planning, and economics illustrate how rigid, top-down approaches can lead to environmental degradation, social dislocation, and a loss of human agency. He emphasizes the importance of diversity, flexibility, and adaptability in human institutions, arguing that these qualities enhance resilience and effectiveness, and he highlights the role of informal, bottom-up practices in complementing, and sometimes subverting, formal systems. Institutions shaped by the knowledge and experience of their participants, rather than imposed from above, are in his view better equipped to navigate uncertainty, respond to change, and foster the development of individuals with a wide range of skills and capabilities.
Reception
Book reviews
Stanford University political scientist David D. Laitin described it as "a magisterial book," but found flaws in its methodology: the book "is a product of undisciplined history. For one, Scott's evidence is selective and eclectic, with only minimal attempts to weigh disconfirming evidence... It is all too easy to select confirming evidence if the author can choose from the entire historical record and use material from all countries of the world."
John N. Gray, author of False Dawn: The Delusions of Global Capitalism, reviewed the book favorably for the New York Times, concluding: "Today's faith in the free market echoes the faith of earlier generations in high modernist schemes that failed at great human cost. Seeing Like a State does not tell us what it is in late modern societies that predisposes them, against all the evidence of history, to put their trust in such utopias. Sadly, no one knows enough to explain that."
Economist James Bradford DeLong wrote a detailed online review of the book. DeLong acknowledged Scott's adept examination of the pitfalls of centrally planned social-engineering projects, which aligns with the Austrian tradition's critique of central planning. Scott's book, according to DeLong, effectively demonstrates the limitations and failures of attempts to impose high modernist principles from the top down. However, DeLong also suggested that Scott may fail to fully acknowledge his intellectual roots, particularly within the Austrian tradition. DeLong argued that while Scott effectively critiques high modernism, he may avoid explicitly aligning his work with the Austrian perspective due to subconscious fears of being associated with certain political ideologies. DeLong's interpretation of the book was critiqued by Henry Farrell on the Crooked Timber blog, and there was a follow-up exchange including further discussion of the book.
Economist Deepak Lal reviewed the book for the Summer 2000 issue of The Independent Review, concluding: "Although I am in sympathy with Scott’s diagnosis of the development disasters he recounts, I conclude that he has not burrowed deep enough to discover a systematic cause of these failures. (In my view, that cause lies in the continuing attraction of various forms of 'enterprises' in what at heart remains Western Christendom.) Nor is he right in so blithely dismissing the relevance of classical liberalism in finding remedies for the ills he eloquently describes."
Political scientist Ulf Zimmermann reviewed the book for H-Net Online in December 1998, concluding: "It is important to keep in mind, as Scott likewise notes, that many of these projects replaced even worse social orders and at least occasionally introduced somewhat more egalitarian principles, never mind improving public health and such. And, in the end, many of the worst were sufficiently resisted in their absurdity, as he had shown so well in his Weapons of the Weak, and as best demonstrated by the utter collapse of the Soviet system. 'Metis' alone is not sufficient; we need to find a way to link it felicitously with—to stick with Scott's Aristotelian vocabulary—phronesis and praxis, or, in more ordinary terms, to produce theories more profoundly grounded in actual practice so that the state may see better in implementing policies."
Michael Adas, professor of history at Rutgers University, reviewed the book for the Summer 2000 issue of the Journal of Social History.
Russell Hardin, a professor of politics at New York University and a scholar of collective action, reviewed the book for The Good Society in 2001, disagreeing somewhat with Scott's diagnosis. Hardin concluded: "The failure of collectivization was therefore a failure of incentives, not a failure to rely on local knowledge."
Discussions
The September 2010 issue of Cato Unbound was devoted to discussing the themes of the book. Scott wrote the lead essay. Other participants were Donald Boudreaux, Timothy B. Lee, and J. Bradford DeLong. A number of people, including Henry Farrell and Tyler Cowen, weighed in on the discussion on their own blogs.
See also
Panopticism
Culture of Asia
The culture of Asia encompasses the collective and diverse customs and traditions of art, architecture, music, literature, lifestyle, philosophy, food, politics and religion that have been practiced and maintained by the numerous ethnic groups of the continent of Asia since prehistory. Identifying a single culture of Asia, or universal elements among the colossal diversity that has emanated from multiple cultural spheres and three of the four ancient river valley civilizations, is complicated. However, the continent is commonly divided into six geographic sub-regions that are characterized by perceivable commonalities, such as culture, religion, language and relative ethnic homogeneity. These regions are Central Asia, East Asia, North Asia, South Asia, Southeast Asia and West Asia.
As the largest and most populous continent, rich in resources, Asia is home to several of the world's oldest civilizations, which produced the majority of the great religious systems, the oldest known recorded myths, and codices on ethics and morality.
However, Asia's enormous size separates the various civilizations by great distances and hostile environments, such as deserts and mountain ranges. Yet by challenging and overcoming these distances, trade and commerce gradually developed a truly universal, Pan-Asian character. Inter-regional trade was the driving and cohesive force, by which cultural elements and ideas spread to the various sub-regions, via the vast road network and the many sea routes.
History
Multiple cultural regions
Asia's various modern cultural and religious spheres correspond roughly with the principal centers of civilization.
West Asia (or Southwest Asia, as Ian Morris puts it; sometimes referred to as the Middle East) has its cultural roots in the pioneering civilizations of the Fertile Crescent and Mesopotamia, spawning the Persian, Arab and Ottoman empires, as well as the Abrahamic religions of Judaism and, later, Islam. According to Morris, in his book Why the West Rules – For Now, these original civilizations of the Hilly Flanks are so far, by archaeological evidence, the oldest, with the first evidence of farming around 9000 BC. The Hilly Flanks are also the birthplace of his definition of "the West", which groups the Middle East with Europe; by this definition, Asia would be the origin of Western culture, though not everybody agrees with him.
South Asia, India and the Indosphere emanate from the Indus Valley civilisation.
The East Asian cultural sphere developed from the Yellow River civilization.
Southeast Asia's migration waves of more varied ethnic groups are relatively recent. Commercial interaction with South Asia eventually led to the adoption of culture from India and China (including Hinduism, Buddhism, Confucianism and Daoism). The region later absorbed influences from Islam as well, and Indonesia is now home to the world's largest Muslim population.
North Asia's (otherwise known as Siberia) harsh climate and unfavorable soil proved unsuited to permanently supporting large urban settlements and permitted only a pastoral and nomadic population spread over large areas. Nonetheless, North Asian religious and spiritual traditions eventually diffused into more comprehensive systems such as Tibetan Buddhism, which developed its own unique characteristics (e.g. Mongolian Buddhism). For these reasons it is becoming more common to treat North Asia together with the rest of the East Asian cultures.
Central Asia has also absorbed influences from both West Asia and East Asia (including Persia and Mongolia), making it another melting pot of cultures.
The cultural spheres are not mutually disjoint and can even overlap, representing the innate diversity and syncretism of human cultures and historical influences.
East Asia
The term East Asian cultural sphere defines the common cultural sphere of China, Japan, North Korea, South Korea in East Asia and Vietnam in Southeast Asia. Ethnic and linguistic similarities, shared artistic traditions, written language and moral values suggest that most East Asian people are descendants of the Yellow River civilization that emerged in the flood plains of northern China around 10,000 years B.P. People within this sphere are sometimes referred to as East Eurasian, and the major language families of this region (including Sino-Tibetan, Austroasiatic, Altaic, Austronesian and Kra-Dai) are thought to have originated from regions in China (see East Asian cultural sphere).
Historically, China occupied the prominent, central role in East Asia for most of recorded history, as it "deeply influenced the culture of the peripheral countries and also drew them into a 'China-centered' [...] international order", one that was interrupted only in the 20th century. Nations within its orbit, from Central Asia to Southeast Asia, paid tribute within the Chinese tributary system (see also List of tributaries of China). The imperial Chinese tributary system was based on the Confucian religious and philosophical idea of submission to celestial harmony, and was recognized by nations beyond, in Southeast Asia in particular. Ceremonies were presided over by the Emperor of China as the Son of Heaven and curator of the Mandate of Heaven. In elaborate ceremonies, both the tributary state and the various Chinese dynasties agreed to mutually favorable economic co-operation and beneficial security policies.
Some defining East Asian cultural characteristics are the Chinese language and the traditional writing system of Hanzi, as well as shared religious and ethical ideas represented by the Three Teachings: Buddhism, Taoism and Confucianism. The Chinese script is one of the oldest continuously used writing systems in the world, and has been a major unifying force and medium for conveying Chinese culture in East Asia. Historically used throughout the region, it is still in use by Chinese diaspora communities around the world, as well as in Japan, Korea, Vietnam, and pockets of Southeast Asia. Classical Chinese was the literary language of elites and bureaucrats.
However, as Chinese writing concepts were passed on to Korea, Japan and Vietnam, these nations developed their own characteristic writing systems to complement Hanzi. Vietnam invented its own Chữ Nôm glyphs, Japan invented kana, and Korea invented its own alphabet, Hangul. To this day Vietnam mostly writes in Chữ Quốc ngữ (a modified Latin alphabet), but there is also a resurgence of Hán-Nôm (a type of writing that combines both Chữ Hán and Chữ Nôm). Sino-cognates compose a large portion of the vocabulary of these languages (see Sino-Vietnamese vocabulary, Sino-Korean vocabulary, Sino-Japanese vocabulary). In the 20th century, China also re-borrowed terms coined in Japan to represent Western concepts, known as wasei-kango.
Apart from the unifying influence of Confucianism, Taoism, Chinese characters and numerous other Chinese cultural influences, East Asian national customs, architecture, literature, cuisines, traditional music, performing arts and crafts have also developed from many independent and local concepts; they have grown and diversified, and many rank among the most refined expressions of aesthetic, artistic and philosophical ideas in the world. Notable among others are Japanese gardens and landscape planning, Heian literature, Vietnamese water puppetry and the artifacts of the Đông Sơn culture. Modern research has also focused on the several nations' pivotal roles in the collective body of East Asian Buddhism, and on the Korean influence on Japanese culture as well as the Japanese influence on Korean culture.
Southeast Asia
Southeast Asia divides into Mainland Southeast Asia, which encompasses Vietnam, Laos, Cambodia, Thailand, Myanmar and West Malaysia, and Maritime Southeast Asia, which includes Indonesia, East Malaysia, Singapore, the Philippines, East Timor, Brunei, the Cocos (Keeling) Islands, and Christmas Island. At the crossroads of the Indian and East Asian maritime trade routes since around 500 B.C., the region has been greatly influenced by the cultures of India and China. Much of India's influence came in the era of the Chola dynasty, which spread Tamil and Hindu culture across present-day Southeast Asian countries and even expanded into and established Hindu kingdoms in the region. The term Indianised Kingdoms designates numerous Southeast Asian political units that had, to varying degrees, adopted most aspects of India's statecraft, administration, art, epigraphy, writing and architecture. The religions Hinduism, Buddhism and Islam gradually diffused into local cosmology. Nonetheless, the Southeast Asian nations have adapted to these cultural stimuli very diversely and evolved their own sophisticated expressions in lifestyle, the visual arts and, most notably, in architectural accomplishments, such as Angkor Wat in Cambodia and Borobudur in Indonesia.
Buddhist culture has a lasting and significant impact in mainland Southeast Asia (Myanmar, Thailand, Laos, Cambodia and Vietnam); most Buddhists in Indochina practice Theravada Buddhism. Vietnam is also much influenced by Confucianism and the culture of China, and Myanmar has additionally been exposed to Indian cultural influences. Before the 14th century, Hinduism and Buddhism were the dominant religions of Southeast Asia; thereafter, Islam became dominant in Indonesia, Malaysia and Brunei. Southeast Asia has also had much Western influence due to the lasting legacy of colonialism. One example is the Philippines, which has been heavily influenced by the United States and Spain, with Christianity (Catholicism) as the dominant religion. East Timor demonstrates Portuguese influence through colonialism and is likewise a predominantly Christian nation.
A common feature found around the region is the stilt house, elevated on stilts so that water can easily pass below in case of a flood. Another shared feature is rice paddy agriculture, which originated in Southeast Asia thousands of years ago. Dance drama is also a very important feature of the culture, using movements of the hands and feet perfected over thousands of years. Furthermore, the arts and literature of Southeast Asia are very distinctive, as some have been influenced by Indian (Hindu), Chinese, Buddhist, and Islamic literature.
South Asia
Evidence of Neolithic culture has been found throughout the modern states of Afghanistan, Bangladesh, Bhutan, the Maldives, Nepal, India, Pakistan and Sri Lanka, which represent South Asia (also known as the Indian subcontinent). Beginning around 3,300 B.C., in modern-day northeastern Afghanistan, Pakistan and northwestern India, a sophisticated Bronze Age cultural tradition emerged that after only a few centuries fully flourished in urban centers. Due to the high quality of its arts, crafts, metallurgy and buildings, and its accomplishments in urban planning, governance, trade and technology, it has been classified as one of the principal cradles of civilization. Referred to as the Indus Valley civilisation or Harappan civilisation, it thrived for almost 2,000 years until the onset of the Vedic period (c. 1500 – c. 600 B.C.). The great significance of the Vedic texts (which do not mention cities or urban life) for South Asian culture, and their impact on family, societal organisation, religion, morals, literature and so on, has never been contested. The Indus Valley civilisation, on the other hand, only came to light through 20th-century archaeology. Scholars, who employ several periodization models, argue over whether South Asian tradition is consciously committed to the Harappan culture.
Declining climatic conditions (aridification) and population displacement (the Indo-Aryan migration) are regarded as having caused the fatal disruption of the Harappan culture, which was superseded by the rural Vedic culture.
Following the Indo-Aryan settlement in the Indo-Gangetic Plain and the establishment of the characteristic social groups (Brahmanas, Kshatriyas, Vaishyas and Shudras) in the caste system, based on the Jāti model in the Varna order, the tribal entities variously consolidated into oligarchic chiefdoms or kingdoms (the 16 Mahajanapadas), beginning in the sixth century B.C. Late Vedic political developments resulted in urbanization, strict social hierarchy, and commercial and military rivalries among the settlers, who had spread over the entire sub-continent. The large body of Vedic texts and literature, supported by the archaeological sequence, allows researchers to reconstruct a rather accurate and detailed image of Vedic culture and political organisation. The Vedas constitute the oldest work of Sanskrit literature and form the basis of religious, ethical and philosophical ideas in South Asia. They are widely, but not exclusively, regarded as the basis and scriptural authority on worship, rituals, ceremonies, sacrifices, meditation, philosophy and spiritual knowledge for later Hindu and Buddhist cosmology. Commentaries and discussions also focus on the development of valid political ideas and concepts of societal progress and ethical conformity.
Hinduism, Buddhism, Jainism and Sikhism are major religions of South Asia. After a long and complex history of cosmological and religious development, adoption and decline, including the Hindu synthesis and the late but thorough introduction of Islam, about 80% of modern-day Indians and Nepalis identify as Hindus. In Sri Lanka and Bhutan most people adhere to various forms of Buddhism. Islam is the predominant religion in Afghanistan, the Maldives (99%), Pakistan (96%) and Bangladesh (90%).
Afghanistan and Pakistan are situated at the western periphery of South Asia, where the cultural character has been shaped by both the Indosphere and Persia. Pakistan is split, with its two western regions of Baluchistan and Khyber Pakhtunkhwa sharing a greater Iranic heritage due to the native Pashtuns and Baloch people of those regions, while its two eastern regions of Punjab and Sindh share cultural links to Northwest India. Bangladesh and the Indian state of West Bengal share a common heritage and culture based on the Bengali language. The culture of India is diverse and a complex mixture of many influences, with a cultural and linguistic divide between North and South India. Nepal is culturally linked to both India and Tibet, and the varied ethnic groups of the country share many of the festivals and cultural traditions celebrated in North and East India and Tibet; Nepali, the dominant language of Nepal, uses the Devanagari alphabet, which is also used to write many North Indian languages. Bhutan is culturally linked to Tibet and India: Tibetan Buddhism is the dominant religion in Bhutan, and the Tibetan alphabet is used to write Dzongkha, the dominant language of Bhutan. Sri Lanka is culturally tied to both India and Southeast Asia: Sinhala, the dominant language of the country, is written in the Sinhala script, which is derived from the Kadamba-Pallava alphabet, and certain cultural traditions and aspects of its cuisine show South Indian influences, while cultural festivals, other aspects of its cuisine and Theravada Buddhism, the dominant religion in Sri Lanka, show a Southeast Asian affinity.
Indo-Aryan languages are spoken in Pakistan, Bangladesh, Nepal and most of North, West and East India, as well as Sinhala in Sri Lanka. Dravidian languages, namely Telugu, Tamil, Kannada and Malayalam, are spoken across South India and, by the Tamil community, in Sri Lanka. Tibeto-Burman languages are spoken in Nepal, Bhutan, and North and North East India. Iranic languages are spoken in Baluchistan and Khyber Pakhtunkhwa in Pakistan, and the main languages of Afghanistan are Pashto and Dari.
Central Asia
Central Asia, between the Caspian Sea and East Asia, envelops five former Soviet Socialist Republics: Kazakhstan, Kyrgyzstan, Tajikistan, Uzbekistan and Turkmenistan; Afghanistan is sometimes included as well. Its strategic and historic position around the east–west axis and the major trading routes such as the Silk Road has made it a theatre of steady exchange of ideas and of east–west conflicts such as the Battle of Talas. The region was conquered and dominated by a variety of cultures, such as the Chinese, Greeks, Mongols, Persians, Tatars, Russians, and Sarmatians. While some Central Asian areas have been inhabited by nomadic peoples, numerous urban centers have also developed with a distinct local character.
The region was dominated by Russians in the Soviet era, and Russian influence has remained strong even after the Soviet Union's dissolution in 1991.
North Asia
For the most part, North Asia (more widely known as Siberia) is considered to consist solely of the Asian part of Russia. The geographic region of Siberia was the historical land of the Tatars in the Khanate of Sibir, but Russian expansion essentially undermined this, and today the region is under Russian rule. Other ethnic groups that inhabit Siberia include the Buryats, Evenks, and Yakuts. Roughly 40 million people live in North Asia, and the majority now consists of ethnic Russians, although many East Asians also inhabit the region and were historically the majority before Russia's eastward expansion.
West Asia
West Asia must be distinguished from the Middle East, a more recent Eurocentric term that also includes parts of Northern Africa. West Asia consists of Turkey, Syria, Georgia, Armenia, Azerbaijan, Iraq, Iran, Lebanon, Jordan, Israel, Palestine, Saudi Arabia, Kuwait, Bahrain, Qatar, the United Arab Emirates, Oman and Yemen. Cyprus is frequently considered part of the region, but it has ethnic and cultural ties to Europe as well. The Israelite/Jewish civilization of the Fertile Crescent would have a profound impact on the rest of West Asia, giving birth to the three Abrahamic faiths. In addition, the Jewish origins of Christianity, along with the many cultural contributions of both Jews and Arabs in Europe, meant that West Asian culture left a lasting impact on Western civilization as well. Other indigenous West Asian religions include Zoroastrianism, Yazidism, Alevism, the Druze faith and the Baháʼí Faith.
Today, almost 93% of West Asia's inhabitants are Muslim, and the region is characterized by political Islam, with the exception of Israel, a Jewish state. At its north-western end, Armenia and Georgia have an unmistakable Christian tradition, while Lebanon shares a large Christian and a large Muslim community. Ethnically, the region is dominated by Arab, Persian, Kurdish, Azerbaijani, and Turkish people; smaller indigenous groups among them are the Jews, Assyrians, Druze, Samaritans, Yazidis and Mandeans. Many Middle Eastern countries encompass huge deserts where nomadic people live to this day. In great contrast, modern cities like Abu Dhabi, Dubai, Amman, Riyadh, Tel Aviv, Doha and Muscat have developed on the coastal lands of the Mediterranean Sea and the Persian Gulf and at the periphery of the Arabian Desert.
West Asian cuisine is immensely rich and diverse. The region's literature is also immensely rich, with Arabic, Jewish, Persian, and Turkish literature dominating.
Architecture
Asia is home to countless grandiose and iconic historic constructions, usually religious structures, castles and fortifications or palaces. However, after several millennia, many of the greatest buildings have been destroyed or dismantled such as the Ziggurats of Mesopotamia, most of the Great Wall of China, Greek and Hellenistic temples or the royal cities of Persia.
China
Chinese architecture has taken shape in East Asia over many centuries as the structural principles have remained largely unchanged, the main changes being only the decorative details. An important feature in Chinese architecture is its emphasis on articulation and bilateral symmetry, which signifies balance. Bilateral symmetry and the articulation of buildings are found everywhere in China, from palace complexes to humble farmhouses. Since the Tang dynasty, Chinese architecture has had a major influence on the architectural styles of Korea, Vietnam, and Japan.
India
Indian architecture is the vast tapestry of production of the Indian subcontinent, encompassing a multitude of expressions over space and time, transformed by forces of history considered unique to the subcontinent, which sometimes destroyed but most of the time absorbed. The result is an evolving range of architectural production that nonetheless retains a certain amount of continuity across history. Some architectural styles were brought by the Mughals to Northern India, while Dravidian architecture in Southern India flourished under the Cholas, Vijayanagara, the Satavahanas and many other South Indian kingdoms, until the Mughal occupation and, later, British rule in India.
Korea
Korean architecture refers to an architectural style that developed over centuries in Korea. Just like in the case of other Korean arts, architecture tends to be naturalistic, favors simplicity, avoids the extremes and is economical with shapes.
Indonesia
Indonesian architecture reflects the diversity of cultural, historical and geographic influences that have shaped Indonesia as a whole. It ranges from native vernacular architecture and Hindu-Buddhist temples to colonial and modern architecture.
Indonesian vernacular architecture is called rumah adat. These houses hold social significance in society and demonstrate local ingenuity in their relation to the environment and spatial organisation; notable examples include the Rumah Gadang, Tongkonan, Balinese houses and the Javanese Joglo. A Hindu-Buddhist temple monument is called a candi, the best examples being Borobudur, a massive stone mandala-stupa, and Prambanan, a Hindu temple dedicated to the Trimurti gods. By the 16th century, the Portuguese, followed by the Dutch, colonized the Indonesian archipelago, bringing European building techniques and developing colonial architecture.
Japan
Japanese architecture is distinctive in that it reflects a deep "understanding of the natural world as a source of spiritual insight and an instructive mirror of human emotion". Attention is given to aesthetics and the surroundings, natural materials are preferred, and artifice is generally avoided. Impressive wooden castles and temples, some of them more than a thousand years old, stand embedded in the natural contours of the local topography. Notable examples include the Hōryū Temple complex (6th century), Himeji Castle (14th century), Hikone Castle (17th century) and Osaka Castle.
Vietnam
The architecture of any country is a marker of its culture, history and tradition: the materials used, the shapes, lines, curves and colours all come together in works that are unique and beautiful. Vietnamese architecture is no different. From vernacular stilt houses to extravagant palaces and concrete towers, the country's buildings are an ode to its rich past and its promising future.
Traditional houses in Vietnam were characterized by wooden structures topped by steep roofs. The roofs would be covered with fish-scale tiles and curve outwards, while beams and rafters held up the main building. In some places, stilt houses were built and the houses usually had an odd number of rooms. However, the coming of various dynasties shaped cultural landmarks in the country in different ways. Palaces, pagodas and citadels flourished in Vietnam for over 500 years.
The Lý dynasty of the 11th century, for example, was deeply influenced by Buddhism and incorporated intricate reliefs and motifs into its architecture. In 1031, a staggering 950 pagodas were constructed by the reigning monarch Lý Thái Tông. During this period, rounded statues, door-steps, decorated roofs and bannisters were common features of Vietnamese architecture. The Imperial Citadel of Thăng Long, now a UNESCO World Heritage Site located in present-day Hanoi, was the political centre of the region for 13 consecutive centuries, and this magnificent structure is a fine example of Vietnamese architecture from the medieval era.
The Trần dynasty, which gained a foothold in the 13th century, brought its own set of beliefs and customs that made their mark on Vietnam's architectural history. Buildings became more complex and demarcated, and gardens became a part of temples and places of worship. Tower-temples also emerged at this time; the Phổ Minh Tower in Nam Định province and the Bình Sơn Tower in Vĩnh Phúc province are relics from the Trần dynasty.
Malay Peninsula
Various cultural influences, notably Chinese, Indian and European, played a major role in forming Malay architecture. Until recent times, wood was the principal material used for all Malay traditional buildings; however, numerous stone structures have also been discovered, particularly religious complexes from the time of Srivijaya and the ancient isthmian Malay kingdoms.
West Asia
The ancient architecture of the region of the Tigris–Euphrates river system dates back to the 10th millennium BC and led to the development of urban planning, the courtyard house, and ziggurats. The basic and dominant building material was the mudbrick, which is still in use in the region for the construction of residential structures. Kiln-burnt bricks were coated with a vitreous enamel for purposes of decoration, and bitumen functioned as cement. Palaces and temples were constructed on terraces, with rooms usually grouped round quadrangles, large doorways, and roofs resting on richly ornamented columns.
Art
Middle Eastern dance has various styles and has spread to the West in the form known as bellydancing. In the Punjab region of India and Pakistan, the bhangra dance is very popular. Bhangra is a celebration of the harvest in which people sing and dance to the beat of a drum.
In Southeast Asia, dance is an integral part of the culture; the styles of dance vary from region to region and island to island. Traditional styles of dance have evolved in Thailand and Burma. The Philippines has its own styles of dance, such as the Cariñosa and the Tinikling; during the Spanish occupation of the Philippines, practitioners of Filipino martial arts hid fighting movements in their dances to keep the art alive even though it was banned by the occupiers.
Martial arts
Martial arts figure prominently in many Asian cultures, and the first known traces of martial arts date to the Xia dynasty of ancient China, over 4,000 years ago. Some of the best known styles of martial arts in the world were developed in East Asia, such as karate from Okinawa, judo from Japan, taekwondo from Korea and the various styles of Chinese martial arts known collectively as kung fu. Ancient India was home to many martial arts mentioned in the Vedas, such as Khadgavidya, Dhanurvidya, Gadayuddha, and Malla-yuddha; these martial arts and their communities flourished after the Vedic period. Many other styles originated in Southeast Asia: Southeast Asian boxing from Indochina, Vovinam from Vietnam, Arnis from the Philippines, and Pencak Silat from Indonesia. In addition, popular styles of wrestling have originated in Turkey and Mongolia.
Development of Asian martial arts continues today as newer styles are created. Modern hybrid martial arts systems such as Jeet Kune Do and Krav Maga often incorporate techniques from traditional East Asian martial arts. Asian martial arts are highly popular in the Western world and many have become international sports. Karate alone has 50 million practitioners worldwide.
Cinema
Cinema is prominent in South Asia, where Bollywood (representing Hindi, the most widely spoken language in the region) and the South Indian film industries vie for dominance. Pakistan's Lollywood is also growing, while historically Bengali cinema was highly acclaimed by international film circles, with the movies of Satyajit Ray still praised today.
China's cinema has grown in recent decades, with the country also influencing the content of Hollywood productions by virtue of its large market. Hong Kong cinema was historically very influential, with kung fu films a major cultural export of the city for decades.
Japanese and Korean productions have become very popular recently; Japanese anime and manga have supplemented each other and become a part of world culture, while Korean films, dramas, and music (K-pop) have grown with much support from the Korean government. The 2019 Korean film Parasite was the first Asian film to win an Academy Award.
Languages
Asia is a continent with great linguistic diversity, and is home to various language families and many language isolates. In fact, Asia contains almost every major language family except the Bantu languages. A majority of Asian countries have more than one language that is natively spoken. For instance, according to Ethnologue over 600 languages are spoken in Indonesia while over 100 are spoken in the Philippines. The census of India of 2001 recorded 30 languages which were spoken by more than a million native speakers and 122 which were spoken by more than 10,000 people, with hundreds of other, smaller languages. Korea, on the other hand, is home to only one language.
The main languages found in Asia, along with examples of each, are:
Afro-Asiatic: Arabic, Aramaic, Hebrew
Altaic languages (not accepted to be a family but a convenient grouping): Japonic: Japanese, Ryukyuan; Korean; Mongolic: Buryat, Mongolian; Turkic: Azeri, Kazakh, Kyrgyz, Turkish, Turkmen; Tungusic: Manchu
Austroasiatic: Khasi, Khmer, Mon, Santali, Vietnamese, Wa
Austronesian: Bicolano, Cham, Ilocano, Javanese, Kapampangan, Kedayan, Malay (Indonesian & Malaysian), Minangkabau, Pangasinan, Sundanese, Tagalog, Tetum, Visayan (Cebuano, Hiligaynon, Waray)
Sino-Tibetan: Sinitic (Chinese): Hakka, Hokkien (Taiwanese), Mandarin, Wu (Shanghainese), Yue (Cantonese); Tibeto-Burman: Burmese, Dzongkha, Lepcha, Meitei, Nepal Bhasa, Tibetan, Tshangla
Miao–Yao (Hmong–Mien): Hmong, Iu Mien
Tai–Kadai: Bouyei, Isan, Kam, Lao, Shan, Thai, Zhuang
Circassian: Kabardian
Nakh–Dagestanian: Chechen, Ingush
Dravidian: Kannada, Malayalam, Tamil, Telugu, and less widely spoken languages such as Tulu and Kodagu
Georgian
Indo-European: Armenian, Assamese, Bengali, Bhojpuri, Dhivehi, Gujarati, Hindustani (Hindi, Urdu), Kashmiri, Kurdish, Maithili, Marathi, Nepali, Odia, Pashto, Persian (Tajik and Dari), Punjabi, Sanskrit, Sindhi, Sinhala, Russian, Greek; as well as Romance-based creoles: Chavacano, Macanese
Uralic: Khanty, Mari, Nenets, Permics
Literature
Arabic
Arabic literature is the writing, both prose and poetry, produced by writers in the Arabic language. One of the most famous literary works of West Asia is the One Thousand and One Nights (Arabian Nights).
Chinese
In Tang and Song dynasty China, famous poets such as Li Bai authored works of great importance. They wrote shī (Classical Chinese: 詩) poems, which have lines with equal numbers of characters, as well as cí (詞) poems with mixed line varieties.
Hebrew and Diaspora Jewish
Jewish literature consists of ancient, medieval, and modern writings by Jews, both in their original homeland and in the diaspora. A sizable amount of it is written in the Hebrew language, although there have been cases of literature written in Hebrew by non-Jews. Without doubt, the most important such work is the Hebrew Bible (Tanakh). Many other ancient works of Hebrew literature survive, including religious and philosophical works, historical records, and works of fiction.
Indian
The famous poet and playwright Kālidāsa wrote two epics, Raghuvamsham (Dynasty of Raghu) and Kumarasambhavam (Birth of Kumar Kartikeya), which were written in Classical Sanskrit rather than Epic Sanskrit. Other examples of works written in Classical Sanskrit include Pānini's Ashtadhyayi, which standardized the grammar and phonetics of Classical Sanskrit. The Laws of Manu is an important text in Hinduism. Kālidāsa is often considered the greatest playwright in Sanskrit literature and one of its greatest poets; his play Recognition of Shakuntala and his poem Meghadūta are among the most famous works of Sanskrit literature, and he occupies much the same position in Sanskrit literature that Shakespeare occupies in English literature. Some other famous plays were Mricchakatika by Shudraka, Svapna Vasavadattam by Bhasa, and Ratnavali by Sri Harsha. Later poetic works include Geeta Govinda by Jayadeva. Some other famous works are Chanakya's Arthashastra and Vatsyayana's Kamasutra.
Japanese
In the early eleventh century, the court lady Murasaki Shikibu wrote The Tale of Genji, considered the masterpiece of Japanese literature and an early example of a work of fiction in the form of a novel.
Early-Modern Japanese literature (17th–19th centuries) developed comparable innovations such as haiku, a form of Japanese poetry that evolved from the ancient hokku (Japanese language: 発句) mode. Haiku consists of three lines: the first and third lines each have five morae (the rough phonological equivalent of syllables), while the second has seven. Original haiku masters included such figures as Edo period poet Matsuo Bashō (松尾芭蕉); others influenced by Bashō include Kobayashi Issa and Masaoka Shiki.
Korean
Korean literature begins in the Three Kingdoms period and continues through the Goryeo and Joseon dynasties to the modern day. Examples of Korean poetic forms include sijo and gasa, with Jeong Cheol and Yun Seon-do considered to be the supreme Korean poets. Examples of renowned Korean prose masterpieces include the Memoirs of Lady Hyegyeong, The Cloud Dream of the Nine and the Chunhyangjeon.
Pakistani
Pakistani literature has a rich history and draws influences from Persian, Muslim and Indian literary traditions. The country has produced a large number of famed poets, especially in the national Urdu language. The famous Muhammad Iqbal, regarded as the national poet, was often called "The Poet of the East" (Shair-e-Mashriq).
Urdu poetry is widely famous around the world, and mushairas (poetry recitals) are frequently held. In daily life, Pakistanis wear the traditional and Islamic dress known as the shalwar qameez, and Pakistani women mostly prefer the veil when going out, wearing the traditional burqa or abaya.
Vietnamese
The earliest surviving literature by Vietnamese writers is written in Chữ Hán, or Văn ngôn (Classical Chinese). Almost all of the official documents in Vietnamese history were written in Chữ Hán, as were the first poems. Not only is Chữ Hán foreign to modern Vietnamese speakers; these works are mostly unintelligible even when directly transliterated from Chinese into the modern Quốc ngữ script, due to their Chinese syntax and vocabulary. As a result, these works must be translated into colloquial Vietnamese in order to be understood by the general public. They include official proclamations by Vietnamese emperors, royal histories, and declarations of independence from China, as well as Vietnamese poetry. In chronological order, notable works include:
Thiên đô chiếu (遷都詔) 1010, Edict on transfer the capital of Đại Cồ Việt from Hoa Lư (modern Ninh Bình) to Đại La (modern Hanoi).
Nam quốc sơn hà (南國山河) 1077, Mountains and rivers of the Southern country, poem by General Lý Thường Kiệt.
Đại Việt sử ký (大越史記) Annals of Đại Việt by Lê Văn Hưu, 1272.
Dụ chư tì tướng hịch văn 諭諸裨將檄文, Proclamation to the Officers, General Trần Hưng Đạo, 1284.
An Nam chí lược (安南志略) Abbreviated Records of Annam, anon. 1335
Gia huấn ca (家訓歌 The Family Training Ode), a 976-line Confucian morality poem attributed to Nguyễn Trãi 1420s
Lĩnh Nam chích quái (嶺南摭怪) "The wonderful tales of Lĩnh Nam" 14th century, edited Vũ Quỳnh (1452–1516)
Đại Việt sử lược (大越史略) Abbreviated History of Đại Việt, anon. 1377
Việt điện u linh tập (越甸幽靈集), Spirits of the Departed in the Viet Realm, Lý Tế Xuyên 1400
Bình Ngô đại cáo (平吳大誥), Great Proclamation upon the Pacification of the Wu Forces, Nguyễn Trãi 1428
Đại Việt sử ký toàn thư (大越史記全書) Complete Annals of Đại Việt, Ngô Sĩ Liên 1479.
Truyền kỳ mạn lục (傳奇漫錄) Collection of Strange Tales, partly by Nguyễn Dữ, 16th century
Hoàng Lê nhất thống chí (皇黎一統志) Unification Records of the Le Emperor, anonymous historical novel ending with Gia Long
Chinh phụ ngâm (征婦吟) "Lament of the soldier's wife", the original Chữ Hán version by Đặng Trần Côn d.1745
Đại Việt thông sử (大越通史) history by Lê Quý Đôn 1749
Vân đài loại ngữ (芸臺類語) encyclopedia Lê Quý Đôn 1773
Phủ biên tạp lục (撫邊雜錄) Frontier Chronicles Lê Quý Đôn 1776
Việt Nam vong quốc sử (越南亡國史), by Phan Bội Châu in Japan in 1905
Chữ Nôm, a locally invented script based on Chữ Hán, was developed for writing the spoken Vietnamese language from the 13th century onwards. For the most part, texts in chữ Nôm can be directly transliterated into the modern chữ Quốc ngữ and be readily understood by modern Vietnamese speakers. However, since chữ Nôm was never standardized, there are ambiguities as to which words are meant when a writer used certain characters, resulting in many variations when transliterating works in chữ Nôm into chữ Quốc ngữ. Some highly regarded works in Vietnamese literature were written in chữ Nôm, including Nguyễn Du's Truyện Kiều, Đoàn Thị Điểm's chữ Nôm translation of the poem Chinh Phụ Ngâm Khúc (征婦吟曲, Lament of a Warrior Wife) from the Classical Chinese poem composed by her friend Đặng Trần Côn (famous in its own right), and poems by the renowned poet Hồ Xuân Hương.
Other notable works include
Chinh phụ ngâm (征婦吟) "Lament of the soldier's wife", translations from Chữ Hán into vernacular chữ Nôm by several translators including Phan Huy Ích and Đoàn Thị Điểm
Cung oán ngâm khúc (宮怨吟曲) "Lament of the Concubine" by Nguyễn Gia Thiều d.1798
Truyện Kiều (傳翹) "Tale of Kiều" epic poem by the poet Nguyễn Du
Hạnh Thục ca (行蜀歌) "Song of Exile to Thục" Nguyễn Thị Bích, 1885
Lục súc tranh công (六畜爭功) "The Quarrel of the Six Beasts"
Lục Vân Tiên (蓼雲仙傳) epic poem by the blind poet Nguyễn Đình Chiểu d.1888
Nhị độ mai (貳度梅) "The Plum Tree Blossoms Twice"
Phạm Công – Cúc Hoa (范公菊花) Tale of Phạm Công and Cúc Hoa
Phạm Tải – Ngọc Hoa (范子玉花) Tale of the orphan Phạm Tải and princess Ngọc Hoa
Phan Trần (潘陳) The clan of Phan and the clan of Trần
Quốc âm thi tập (國音詩集) "National pronunciation poetry collection" attributed to Nguyễn Trãi after retirement
Thạch Sanh tân truyện (石生新傳) anon. 18th century
Tống Trân and Cúc Hoa (宋珍菊花) Tale of Tống Trân and his wife Cúc Hoa
Trinh thử (貞鼠) "The Virgin Mouse"
Hoa tiên (花箋) The Flowered Letter
Modern Vietnamese literature
While created in the seventeenth century, chữ Quốc ngữ was not widely used outside of missionary circles until the early 20th century, when the French colonial government mandated its use in French Indochina. During the early years of the twentieth century, many periodicals in chữ Quốc ngữ flourished, and their popularity helped popularize the script. While some leaders resisted the popularity of chữ Quốc ngữ as an imposition from the French, others embraced it as a convenient tool to boost literacy. After declaring independence from France in 1945, the Empire of Vietnam's provisional government adopted a policy of increasing literacy with chữ Quốc ngữ. These efforts were hugely successful, and the literacy rate rose rapidly.
In those early years there were many variations in orthography, and there was no consensus on how to write certain words. After some conferences, the issues were mostly settled, though some linger to this day. By the mid-20th century, all Vietnamese works of literature were written in chữ Quốc ngữ, while works written in earlier scripts were transliterated into chữ Quốc ngữ for accessibility to modern Vietnamese speakers. The use of the earlier scripts is now limited to historical references.
Works in modern Vietnamese include
Việt Nam sử lược (越南史略) by Trần Trọng Kim 1921
Số đỏ by Vũ Trọng Phụng 1936
Modern literature
The polymath Rabindranath Tagore, a Bengali poet, dramatist, and writer from India, became in 1913 the first Asian Nobel laureate. He won his Nobel Prize in Literature for the notable impact his prose works and poetic thought had on English, French, and other national literatures of Europe and the Americas. He also wrote Jana Gana Mana, the national anthem of India, as well as Amar Sonar Bangla, the national anthem of Bangladesh; moreover, a translation of another of his songs, "Namo Namo Matha", is the national anthem of Sri Lanka, a song taken up by his student Ananda Samarakoon and translated into Tamil by M. Nallathamby. Other Asian writers have won Nobel Prizes in Literature, including Yasunari Kawabata (Japan, 1968) and Kenzaburō Ōe (Japan, 1994). Kawabata wrote novels and short stories distinguished by their elegant and spartan diction, such as the novels Snow Country and The Master of Go.
Family
Families have great importance in Asian cultures. They teach their children that the family is their protection and the major source of their identity, and they expect loyalty from their children. Parents define the law, and the children are expected to obey them; this is called filial piety, the respect for one's parents and elders, a concept that originated in China as 孝 (xiao) with Confucius's teachings. Children are expected to exercise self-control, making it hard for them to express emotions, and they are also expected to show respect through their gestures and the way they speak. Children are expected to look after their parents when they grow older: sons are expected to stay home, while daughters go to live with their husband's family. In Chinese culture, children are sometimes expected to care for their elders (赡养), and in various diaspora communities one may find Chinese children living even with their grandparents.
The practice of matrilocality in Korea started in the Goguryeo period, continued through the Goryeo period and ended in the early Joseon period. The Korean saying that when a man gets married, he is "entering jangga" (the house of his father-in-law), stems from the Goguryeo period.
Philosophy
Asian philosophical traditions originated in India and China and have been classified as Eastern philosophy, covering a large spectrum of philosophical thought and writing, including that popular within India and China. Indian philosophies include Jain, Hindu and Buddhist philosophies. These include elements of non-material pursuits, whereas another school of thought from India, Cārvāka, propounded by Charvaka around 2,500 years ago, preached the enjoyment of the material world. Middle Eastern philosophy includes Islamic philosophy as well as Jewish and Iranian philosophy.
During the 20th century, in the two most populous countries of Asia, two dramatically different political philosophies took shape. Mahatma Gandhi gave a new meaning to Ahimsa, and redefined the concepts of nonviolence and nonresistance. During the same period, Mao Zedong's communist philosophy was crystallized.
Religions
Asia is the birthplace of many religions such as Buddhism, Christianity, Confucianism, Druze, Hinduism, Islam, Jainism, Judaism, Mandaeism, Shintoism, Sikhism, Taoism, Yazdânism, and Zoroastrianism. All major religious traditions are practiced in the region and new forms are constantly emerging. The largest religions in Asia are Islam and Hinduism, both with approximately 1.1 billion adherents. In 2010, the Pew Research Center found five of the ten most religiously diverse regions in the world to be in Asia.
Hinduism, Buddhism, Jainism and Sikhism originated in India, a country of South Asia. In East Asia, particularly in China and Japan, Confucianism, Taoism, Zen Buddhism and Shinto took shape. Other religions of Asia include the Baháʼí Faith, Shamanism practiced in Siberia, and Animism practiced in the eastern parts of the Indian subcontinent.
Over 60% of the global Muslim population is in Asia. About 25% of Muslims live in the South Asian region, mainly in Pakistan, India, Bangladesh and the Maldives. If Afghanistan is counted, this number is even higher. The world's largest single Muslim community (within the bounds of one nation) is in Indonesia. There are also significant Muslim populations in the Philippines, Brunei, Malaysia, China, Russia, Central Asia and West Asia.
Christianity is a widespread religion in Asia, with more than 286 million adherents according to the Pew Research Center in 2010, and nearly 364 million according to the Britannica Book of the Year 2014. In the Philippines and East Timor, Roman Catholicism is the predominant religion; it was introduced by the Spaniards and the Portuguese, respectively. In Russia, Georgia, and Armenia, Orthodox Christianity is the predominant religion. Eastern Christian denominations are dominant in Asia, having adherents in portions of the Middle East (the Levant, Anatolia, and Fars) and South Asia. Eastern churches include the Assyrian Church of the East, the Syriac Orthodox Church, the Maronite Church, the Syriac Catholic Church, the Chaldean Catholic Church, and the Syro-Malabar Catholic Church, among others. Significant Christian communities are also found in Central Asia, South Asia, Southeast Asia, and East Asia. Judaism is the major religion of Israel.
Many religions founded in Asia retain the majority of their contemporary adherents there.
Festivals and celebrations
Asia has a variety of festivals and celebrations. In China, Chinese New Year, Dragon Boat Festival, and Mid-Autumn Moon Festival are traditional holidays, while National Day is a holiday of the People's Republic of China.
In Japan, Japanese New Year, National Foundation Day, Children's Day, O-bon, The Emperor's Birthday and Christmas are popular. According to Japanese syncretism, most Japanese celebrate Buddhism's O-bon in midsummer, Shinto's Shichi-Go-San in November, and Christmas and Hatsumoude in winter together.
In India, Republic Day and Independence Day are important national festivals celebrated by people irrespective of faith. Major Hindu festivals of India include Diwali, Dussehra or Daserra, Holi, Makar Sankranti, Pongal, Mahashivratri, Ugadi, Navratri, Ramanavami, Baisakhi, Onam, Rathayatra, Ganesh Chaturthi and Krishna Janmaashtami. Islamic festivals such as Eid ul-Fitr and Eid ul-Adha, Sikh festivals such as Vaisakhi, and Christian festivals such as Christmas, are also celebrated in India.
The Philippines has been tagged the "Fiesta Country" because of its year-round celebrations nationwide. There is a very strong Spanish influence in its festivals, making the Philippines distinctively "Western" while retaining its native Asian characteristics. Fiesta is the term used for a festival, and most fiestas are celebrated in honor of a patron saint; nearly every city and municipality has one. Prime examples include the Sinulog of Cebu and the Dinagyang of Iloilo. Other famous Philippine festivals include the MassKara Festival of Bacolod and the Panagbenga Festival of Baguio.
In Indonesia, Independence Day and the birthday of Pancasila are important holidays. This Muslim-majority country also observes Islamic celebrations and festivals, such as Eid ul-Fitr, Eid ul-Adha, Mawlid, Islamic New Year, Ashura, Tabuik, and Tasyrik day.
Sports
Due to the vastness of Asia, popularity of sports varies greatly across the continent.
Association football is widely popular in Asia. Boxing, badminton, and table tennis are very popular in East Asia. Baseball is popular in Japan, South Korea, and Taiwan. Cricket is especially popular in South Asia, played mainly in India, Pakistan, Bangladesh, and Sri Lanka, and more recently in Nepal and Afghanistan. Traditional sports such as kabaddi and kho-kho are also watched to a significant extent in South Asia.
Cuisine
In many parts of Asia, rice is a staple food, mostly served steamed or as a porridge known as congee. China is the world's largest producer and consumer of rice. From the Middle East to the Indian subcontinent, grain flatbreads are the staple instead.
Traditionally, it is common practice in Central, South, and West Asia to eat with the bare hands. However, Western cutlery such as spoons and forks is used increasingly and has become widely available; with its advent, eating with the bare hands may now be viewed as rude in some public places in these countries. In Indonesia and the Philippines, people usually use Western cutlery such as the spoon, fork, and knife. In China, Japan, Taiwan, Korea, and Vietnam, people usually eat traditional food with chopsticks, whose shape differs from country to country. For example, Chinese chopsticks are long and square; Vietnamese chopsticks are long, thick at one end and gradually thinner at the other, and made of wood or bamboo; Japanese chopsticks are rounded, short, and pointed, designed to pick apart bony fish easily; Taiwanese chopsticks are made of materials such as bamboo, wood, and metal; Korean chopsticks are short, flat, and made of metal. It is said that wood is rarer than metal on the Korean Peninsula and that metal chopsticks can prevent poisoning. Fresh raw fish dishes, such as sushi and sashimi, are very popular in East Asia (especially Japan). These raw fish dishes were influenced by two major cultures: Chinese and Japanese.
In India, people often eat food with their hands, and many spices such as cardamom, cumin, and fennel seeds are used in every dish. Most spices originated within the Indian subcontinent. The durian is a common fruit in Southeast Asia; Alfred Russel Wallace attested that its delicious flavor was worth the entire cost of his trip there.
The cuisine of Indonesia possesses a rich and diverse collection of dishes and recipes, with flourishing regional cooking traditions ranging from Minang and Sundanese to Balinese. Most Indonesians consume steamed rice with flavorful meat, fish, and vegetables in one serving, as in nasi Padang and nasi campur. Other notable examples include rendang, satay, soto, and nasi goreng.
In Filipino banquets, many unique dishes have arisen from the country's long years of colonization and interaction with neighboring cultures and nations, blending Latin, Malay, Chinese, and American influences into the local cooking of its people.
Cultural spheres
The culture of Asia is divided into several overlapping cultural spheres, including:
Sinosphere
Indosphere
Persosphere
Arabsphere
Culture by people
Afghan people
Arab people
Armenian people
Aryan
Assyrian people
Azerbaijani people
Baloch people
Bangladeshi people
Bengali people
Betawi people
Bihari people
Buginese people
Burmese people
Cambodian people
Chinese people
Dravidian people
Filipino people
Georgian people
Hadhrami people
Haryanvi people
Hmong people
Hong Kong people
Igorot people
Indian people
Indonesian people
Iranian people
Israeli people
Japanese people
Jat people
Jewish people
Korean people
Kurdish people
Lao people
Macanese people
Malay people
Malaysian people
Marathi people
Miao people
Minangkabau people
Mongolian people
Moro people
Pakistani people
Pashtun people
Peranakan people
Punjabi people
Rajasthani people
Rohingya people
Romani people
Russian people
Sindhi people
Taiwanese people
Tajik people
Thai people
Tibetan people
Turkic peoples
Vietnamese people
See also
Culture of Africa
Culture of Europe
Culture of North America
Culture of Oceania
Culture of South America
Notes
John Lindley (1889), Treasury of Botany, vol. 1, p. 435. Longmans, Green, & Co; new and revised edition.
Further reading
The Mandate of Heaven and The Great Ming Code
External links
Yin Yu Tang: A Chinese Home showcases Chinese culture through a detailed examination of a family residence located in the Anhui province of East China.
Fukuoka Asian Culture Prize was established to honor the outstanding work of individuals or groups/organizations to preserve and create unique and diverse cultures of Asia.
Asian cultural art and antiques showcases the cultural ornaments used by tribes in Southeast Asia in ancient times.
Irredentism
Irredentism is one state's desire to annex the territory of another state. This desire can be motivated by ethnic reasons, because the population of the territory is ethnically similar or identical to the population of the parent state. Historical reasons may also be responsible, i.e., that the territory previously formed part of the parent state. However, difficulties in applying the concept to concrete cases have given rise to academic debates about its precise definition. Disagreements concern whether either or both ethnic and historical reasons have to be present and whether non-state actors can also engage in irredentism. A further dispute is whether attempts to absorb a full neighboring state are also included. There are various types of irredentism. For typical forms of irredentism, the parent state already exists before the territorial conflict with a neighboring state arises. However, there are also forms of irredentism in which the parent state is newly created by uniting an ethnic group spread across several countries. Another distinction concerns whether the country to which the disputed territory currently belongs is a regular state, a former colony, or a collapsed state.
A central research topic concerning irredentism is the question of how it is to be explained or what causes it. Many explanations hold that ethnic homogeneity within a state makes irredentism more likely. Discrimination against the ethnic group in the neighboring territory is another contributing factor. A closely related explanation argues that national identities based primarily on ethnicity, culture, and history increase irredentist tendencies. Another approach is to explain irredentism as an attempt to increase power and wealth. In this regard, it is argued that irredentist claims are more likely if the neighboring territory is relatively rich. Many explanations also focus on the regime type and hold that democracies are less likely to engage in irredentism while anocracies are particularly open to it.
Irredentism has been an influential force in world politics since the mid-nineteenth century. It has been responsible for many armed conflicts, even though international law is hostile to it and irredentist movements often fail to achieve their goals. The term was originally coined from the Italian phrase Italia irredenta and referred to an Italian movement after 1878 claiming parts of Switzerland and the Austro-Hungarian Empire. Often-discussed cases of irredentism include Nazi Germany's annexation of the Sudetenland in 1938, Somalia's invasion of Ethiopia in 1977, and Argentina's invasion of the Falkland Islands in 1982. Further examples are attempts to establish a Greater Serbia following the breakup of Yugoslavia in the early 1990s and Russia's annexation of Crimea in 2014. Irredentism is closely related to revanchism and secession. Revanchism is an attempt to annex territory belonging to another state. It is motivated by the goal of taking revenge for a previous grievance, in contrast to the goal of irredentism of building an ethnically unified nation-state. In the case of secession, a territory breaks away and forms an independent state instead of merging with another state.
Definition and etymology
The term irredentism was coined from the Italian phrase Italia irredenta (unredeemed Italy). This phrase originally referred to territory in Austria-Hungary that was mostly or partly inhabited by ethnic Italians. In particular, it applies to Trentino and Trieste, but also Gorizia, Istria, Fiume, and Dalmatia during the 19th and early 20th centuries. Irredentist projects often use the term "Greater" to label the desired outcome of their expansion, as in "Greater Serbia" or "Greater Russia".
Irredentism is often understood as the claim that territories belonging to one state should be incorporated into another state because their population is ethnically similar or because it previously belonged to the other state. Many definitions of irredentism have been proposed to give a more precise formulation. Despite a wide overlap concerning its general features, there is no consensus about its exact characterization. The disagreements matter for evaluating whether irredentism was the cause of a given war, which is difficult in many cases; different definitions often lead to opposite conclusions.
There is wide consensus that irredentism is a form of territorial dispute involving the attempt to annex territories belonging to a neighboring state. However, not all such attempts constitute forms of irredentism and there is no academic consensus on precisely what other features need to be present. This concerns disagreements about who claims the territory, for what reasons they do so, and how much territory is claimed. Most scholars define irredentism as a claim made by one state on the territory of another state. In this regard, there are three essential entities to irredentism: (1) an irredentist state or parent state, (2) a neighboring host state or target state, and (3) the disputed territory belonging to the host state, often referred to as the irredenta. According to this definition, popular movements demanding territorial change by non-state actors do not count as irredentist in the strict sense. A different definition characterizes irredentism as the attempt of an ethnic minority to break away and join their "real" motherland even though this minority is a non-state actor.
The reason for engaging in territorial conflict is another issue, with some scholars stating that irredentism is primarily motivated by ethnicity. In this view, the population in the neighboring territory is ethnically similar and the intention is to retrieve the area to unite the people. This definition implies, for example, that the majority of the border disputes in the history of Latin America were not forms of irredentism. Usually, irredentism is defined in terms of the motivation of the irredentist state, even if the territory is annexed against the will of the local population. Other theorists focus more on the historical claim that the disputed territory used to be part of the state's ancestral homeland. This is close to the literal meaning of the original Italian expression as "unredeemed land". In this view, the ethnicity of the people inhabiting this territory is not important. However, it is also possible to combine both characterizations, i.e. that the motivation is either ethnic or historical or both. Some scholars, like Benyamin Neuberger, include geographical reasons in their definitions.
A further disagreement concerns the amount of area that is to be annexed. Usually, irredentism is restricted to the attempt to incorporate some parts of another state. In this regard, irredentism challenges established borders with the neighboring state but does not challenge the existence of the neighboring state in general. However, some definitions of irredentism also include attempts to absorb the whole neighboring state and not just a part of it. In this sense, claims by both South Korea and North Korea to incorporate the whole of the Korean Peninsula would be considered a form of irredentism.
A popular view combining many of the elements listed above holds that irredentism is based on incongruence between the borders of a state and the boundaries of the corresponding nation. State borders are usually clearly delimited, both physically and on maps. National boundaries, on the other hand, are less tangible since they correspond to a group's perception of its historic, cultural, and ethnic boundaries. Irredentism may manifest if state borders do not correspond to national boundaries. The objective of irredentism is to enlarge a state to establish a congruence between its borders and the boundaries of the corresponding nation.
Types
Various types of irredentism have been proposed. However, not everyone agrees that all the types listed here constitute forms of irredentism and it often depends on what definition is used. According to political theorists Naomi Chazan and Donald L. Horowitz, there are two types of irredentism. The typical case involves one state that intends to annex territories belonging to a neighboring state. Nazi Germany’s claim on the Sudetenland of Czechoslovakia is an example of this form of irredentism.
For the second type, there is no pre-existing parent state. Instead, a cohesive group existing as a minority in multiple countries intends to unify to form a new parent state. The intended creation of a Kurdistan state uniting the Kurds living in Turkey, Syria, Iraq, and Iran is an example of the second type. If such a project is successful for only one segment, the result is secession and not irredentism. This happened, for example, during the breakup of Yugoslavia when Yugoslavian Slovenes formed the new state of Slovenia while the Austrian Slovenes did not join them and remained part of Austria. Not all theorists accept that the second type constitutes a form of irredentism. In this regard, it is often argued that it is too similar to secession to maintain a distinction between the two. For example, political scholar Benyamin Neuberger holds that a pre-existing parent state is necessary for irredentism.
Political scientist Thomas Ambrosio restricts his definition to cases involving a pre-existing parent state and distinguishes three types of irredentism: (1) between two states, (2) between a state and a former colony, and (3) between a state and a collapsed state. The typical case is between two states. A textbook example of this is Somalia's invasion of Ethiopia. In the second case of decolonization, the territory to be annexed is a former colony of another state and not a regular part of it. An example is the Indonesian invasion and occupation of the former Portuguese colony of East Timor. In the case of state collapse, one state disintegrates and a neighboring state absorbs some of its former territories. This was the case for the irredentist movements by Croatia and Serbia during the breakup of Yugoslavia.
Explanations
Explanations of irredentism try to determine what causes irredentism, how it unfolds, and how it can be peacefully resolved. Various hypotheses have been proposed but there is still very little consensus on how irredentism is to be explained despite its prevalence and its long history of provoking armed conflicts. Some of these proposals can be combined but others conflict with each other and the available evidence may not be sufficient to decide between them. An active research topic in this regard concerns the reasons for irredentism. Many countries have ethnic kin outside their borders. But only a few are willing to engage in violent conflicts to annex foreign territory in an attempt to unite their kin. Research on the causes of irredentism tries to explain why some countries pursue irredentism but others do not. Relevant factors often discussed include ethnicity, nationalism, economic considerations, the desire to increase power, and the type of regime.
Ethnicity and nationalism
A common explanation of irredentism focuses on ethnic arguments. It is based on the observation that irredentist claims are primarily advanced by states with a homogeneous ethnic population. This is explained by the idea that, if a state is composed of several ethnic groups, then annexing a territory inhabited primarily by one of those groups would shift the power balance in favor of this group. For this reason, other groups in the state are likely to internally reject the irredentist claims. This inhibiting factor is not present for homogeneous states. A similar argument is also offered for the enclave to be annexed: an ethnically heterogeneous enclave is less likely to desire to be absorbed by another state for ethnic reasons since this would only benefit one ethnic group. These considerations explain, for example, why irredentism is not very common in Africa since most African states are ethnically heterogeneous. Relevant factors for the ethnic motivation for irredentism are how large the dominant ethnic group is relative to other groups and how large it is in absolute terms. It also matters whether the ethnic group is relatively dispersed or located in a small core area and whether it is politically disadvantaged.
Explanations focusing on nationalism are closely related to ethnicity-based explanations. Nationalism can be defined as the claim that the boundaries of a state should match those of the nation. According to constructivist accounts, for example, the dominant national identity is one of the central factors behind irredentism. In this view, identities based on ethnicity, culture, and history can easily invite tendencies to enlarge national borders. They may justify the goal of integrating ethnically and culturally similar territories. Civic national identities focusing more on a political nature, on the other hand, are more closely tied to pre-existing national boundaries.
Structural accounts use a slightly different approach and focus on the relationship between nationalism and the regional context. They focus on the tension between state sovereignty and national self-determination. State sovereignty is the principle of international law holding that each state has sovereignty over its own territory. It means that states are not allowed to interfere with essentially domestic affairs of other states. National self-determination, on the other hand, concerns the right of people to determine their own international political status. According to the structural explanation, emphasis on national self-determination may legitimize irredentist claims while the principle of state sovereignty defends the status quo of the existing sovereign borders. This position is supported by the observation that irredentist conflicts are much more common during times of international upheavals.
Another factor commonly cited as a force fueling irredentism is discrimination against the main ethnic group in the enclave. Irredentist states often try to legitimize their aggression against neighbors by presenting them as humanitarian interventions aimed at protecting their discriminated ethnic kin. This justification was used, for example, in Armenia's engagement in the Nagorno-Karabakh conflict, in Serbia's involvement in the Croatian War of Independence, and in Russia's annexation of Crimea. Some political theorists, like David S. Siroky and Christopher W. Hale, hold that there is little empirical evidence for arguments based on ethnic homogeneity and discrimination. In this view, they are mainly used as a pretext to hide other goals, such as material gain.
Another relevant factor is the outlook of the population inhabiting the territory to be annexed. The desire of the irredentist state to annex a foreign territory and the desire of that territory to be annexed do not always overlap. In some cases, a minority group does not want to be annexed, as was the case for the Crimean Tatars in Russia's annexation of Crimea. In other cases, a minority group would want to be annexed but the intended parent state is not interested.
Power and economy
Various accounts stress the role of power and economic benefits as reasons for irredentism. Realist explanations focus on the power balance between the irredentist state and the target state: the more this power balance shifts in favor of the irredentist state, the more likely violent conflicts become. A key factor in this regard is also the reaction of the international community, i.e. whether irredentist claims are tolerated or rejected. Irredentism can be used as a tool or pretext to increase the parent state's power. Rational choice theories study how irredentism is caused by decision-making processes of certain groups within a state. In this view, irredentism is a tool used by elites to secure their political interests. They do so by appealing to popular nationalist sentiments. This can be used, for example, to gain public support against political rivals or to divert attention away from domestic problems.
Other explanations focus on economic factors. For example, larger states enjoy advantages that come with having an increased market and decreased per capita cost of defense. However, there are also disadvantages to having a bigger state, such as the challenges that come with accommodating a wider range of citizens' preferences. Based on these lines of thought, it has been argued that states are more likely to advocate irredentist claims if the enclave is a relatively rich territory.
Regime type
An additional relevant factor is the regime type of both the irredentist state and the neighboring state. In this regard, it is often argued that democratic states are less likely to engage in irredentism. One reason cited is that democracies often are more inclusive of other ethnic groups. Another is that democracies are in general less likely to engage in violent conflicts. This is closely related to democratic peace theory, which claims that democracies try to avoid armed conflicts with other democracies. This is also supported by the observation that most irredentist conflicts are started by authoritarian regimes. However, irredentism constitutes a paradox for democratic systems. The reason is that democratic ideals pertaining to the ethnic group can often be used to justify its claim, which may be interpreted as the expression of a popular will toward unification. But there are also cases of irredentism made primarily by a government that is not broadly supported by the population.
According to Siroky and Hale, anocratic regimes are most likely to engage in irredentist conflicts and to become their victim. This is based on the idea that they share some democratic ideals favoring irredentism but often lack institutional stability and accountability. This makes it more likely for the elites to consolidate their power using ethno-nationalist appeals to the masses.
Importance, reactions, and consequences
Irredentism is a widespread phenomenon and has been an influential force in world politics since the mid-nineteenth century. It has been responsible for countless conflicts. There are still many unresolved irredentist disputes today that constitute discords between nations. In this regard, irredentism is a potential source of conflict in many places and often escalates into military confrontations between states. For example, international relations theorist Markus Kornprobst argues that "no other issue over which states fight is as war-prone as irredentism". Political scholar Rachel Walker points out that "there is scarcely a country in the world that is not involved in some sort of irredentist quarrel ... although few would admit to this". Political theorists Stephen M. Saideman and R. William Ayres argue that many of the most important conflicts of the 1990s were caused by irredentism, such as the wars for a Greater Serbia and a Greater Croatia. Irredentism carries great potential for future conflicts since many states have kin groups in adjacent countries. It has been argued that it poses a significant danger to human security and the international order. For these reasons, irredentism has been a central topic in the field of international relations.
For the most part, international law is hostile to irredentism. For example, the United Nations Charter calls for respect for established territorial borders and defends state sovereignty. Similar outlooks are taken by the Organization of African Unity, the Organization of American States, and the Helsinki Final Act. Since irredentist claims are based on conflicting sovereignty assertions, it is often difficult to find a working compromise. Peaceful resolutions of irredentist conflicts often result in mutual recognition of de facto borders rather than territorial change. International relations theorists Martin Griffiths et al. argue that the threat of rising irredentism may be reduced by focusing on political pluralism and respect for minority rights.
Irredentist movements, peaceful or violent, are rarely successful. Despite aiming to help ethnic minorities, irredentism often has the opposite effect and ends up worsening their living conditions. On the one hand, the state still in control of those territories may decide to further discriminate against them as an attempt to decrease the threat to its national security. On the other hand, the irredentist state may merely claim to care about the ethnic minorities but, in truth, use such claims only as a pretext to increase its territory or to destabilize an opponent.
Often-discussed historical examples
The emergence of irredentism is tied to the rise of modern nationalism and the idea of a nation-state, which are often linked to the French Revolution. However, some political scholars, like Griffiths et al., argue that phenomena similar to irredentism existed even before. For example, part of the justification for the crusades was to liberate fellow Christians from Muslim rule and to redeem the Holy Land. Nonetheless, most theorists see irredentism as a more recent phenomenon. The term was coined in the 19th century and is linked to border disputes between modern states.
Nazi Germany's annexation of the Sudetenland in 1938 is an often-cited example of irredentism. At the time, the Sudetenland formed part of Czechoslovakia but had a majority German population. Adolf Hitler justified the annexation based on his allegation that Sudeten Germans were being mistreated by the Czechoslovak government. The Sudetenland was yielded to Germany following the Munich Agreement in an attempt to prevent the outbreak of a major war.
Somalia's invasion of Ethiopia in 1977 is frequently discussed as a case of African irredentism. The goal of this attack was to unite the significant Somali population living in the Ogaden region with their kin by annexing this area to create a Greater Somalia. The invasion escalated into a war of attrition that lasted about eight months. Somalia was close to reaching its goal but failed in the end, mainly due to an intervention by socialist countries.
Argentina's invasion of the Falkland Islands in 1982 is cited as an example of irredentism in South America: the Argentine military government sought to exploit national sentiment over the islands to deflect attention from domestic concerns. Earlier, President Juan Perón had exploited the issue to reduce British influence in Argentina, instituting educational reforms that taught that the islands were Argentine and creating a strong nationalist sentiment over the issue. The war ended with a British victory after about two months, even though many analysts had considered the Argentine military position unassailable. Although defeated, Argentina did not officially declare the cessation of hostilities until 1989, and successive Argentine governments have continued to claim the islands. The islands are now self-governing, with the UK responsible for defence and foreign relations; referenda in 1986 and 2013 showed a preference for British sovereignty among the population. Both the UK and Spain claimed sovereignty in the 18th century, and Argentina claims the islands as a colonial legacy from its independence in 1816.
The breakup of Yugoslavia in the early 1990s resulted in various irredentist projects. They include Slobodan Milošević's attempts to establish a Greater Serbia by absorbing some regions of neighboring states that were part of former Yugoslavia. A simultaneous similar project aimed at the establishment of a Greater Croatia.
Russia's annexation of Crimea in 2014 is a more recent example of irredentism. Beginning in the 15th century CE, the Crimean Peninsula was ruled by a Tatar khanate. However, in 1783 the Russian Empire broke a previous treaty and annexed Crimea. In 1954, when both Russia and Ukraine were part of the Soviet Union, Crimea was transferred from Russia to Ukraine. Sixty years later, Russia alleged that the Ukrainian government did not uphold the rights of ethnic Russians inhabiting Crimea, using this as a justification for the annexation in March 2014. However, it has been claimed that this was only a pretext to increase its territory and power. Ultimately, Russia invaded the mainland territory of Ukraine in February 2022, thereby escalating the war that continues to the present day.
Other frequently discussed cases of irredentism include disputes between Pakistan and India over Jammu and Kashmir as well as China's claims on Taiwan.
Related concepts
Ethnicity
Ethnicity plays a central role in irredentism since most irredentist states justify their expansionist agenda based on shared ethnicity. In this regard, the goal of unifying parts of an ethnic group in a common nation-state is used as a justification for annexing foreign territories and going to war if the neighboring state resists. Ethnicity is a grouping of people according to a set of shared attributes and similarities. It divides people into groups based on attributes like physical features, customs, tradition, historical background, language, culture, religion, and values. Not all these factors are equally relevant for every ethnic group. For some groups, one factor may predominate, as in ethno-linguistic, ethno-racial, and ethno-religious identities. In most cases, ethnic identities are based on a set of common features.
A central aspect of many ethnic identities is that all members share a common homeland or place of origin. This place of origin does not have to correspond to the area where the majority of the ethnic group currently lives in case they migrated from their homeland. Another feature is a common language or dialect. In many cases, religion also forms a vital aspect of ethnicity. Shared culture is another significant factor. It is a wide term and can include characteristic social institutions, diet, dress, and other practices. It is often difficult to draw clear boundaries between people based on their ethnicity. For this reason, some definitions focus less on actual objective features and stress instead that what unites an ethnic group is a subjective belief that such common features exist. In this view, the common belief matters more than the extent to which those shared features actually exist. Examples of large ethnic groups are the Han Chinese, the Arabs, the Bengalis, the Punjabis, and the Turks.
Some theorists, like sociologist John Milton Yinger, use terms like ethnic group or ethnicity as near-synonyms for nation. Nations are usually based on ethnicity but what sets them apart from ethnicity is their political form as a state or a state-like entity. The physical and visible aspects of ethnicity, such as skin color and facial features, are often referred to as race, which may thus be understood as a subset of ethnicity. However, some theorists, like sociologist Pierre van den Berghe, contrast the two by restricting ethnicity to cultural traits and race to physical traits.
Ethnic solidarity can provide a sense of belonging as well as physical and mental security. It can help people identify with a common purpose. However, ethnicity has also been the source of many conflicts. It has been responsible for various forms of mass violence, including ethnic cleansing and genocide. The perpetrators usually form part of the ruling majority and target ethnic minority groups. Not all ethnicity-based conflicts involve mass violence, however; many take the form of ethnic discrimination.
Nationalism and nation-state
Irredentism is often seen as a product of modern nationalism, i.e. the claim that a nation should have its own sovereign state. In this regard, irredentism emerged with and depends on the modern idea of nation-states. The start of modern nationalism is often associated with the French Revolution in 1789. This spawned various nationalist revolutions in Europe around the mid-nineteenth century. They often resulted in a replacement of dynastic imperial governments. A central aspect of nationalism is that it sees states as entities with clearly delimited borders that should correspond to national boundaries. Irredentism reflects the importance people ascribe to these borders and how exactly they are drawn. One difficulty in this regard is that the exact boundaries are often difficult to justify and are therefore challenged in favor of alternatives. Irredentism manifests some of the most aggressive aspects of modern nationalism. It can be seen as a side effect of nationalism paired with the importance it ascribes to borders and the difficulties in agreeing on them.
Secession
Irredentism is closely related to secession. Secession can be defined as "an attempt by an ethnic group claiming a homeland to withdraw with its territory from the authority of a larger state of which it is a part." Irredentism, by contrast, is initiated by members of an ethnic group in one state to incorporate territories across their border housing ethnically kindred people. Secession happens when a part of an existing state breaks away to form an independent entity. This was the case, for example, in the United States, when many of the slaveholding southern states decided to secede from the Union to form the Confederate States of America in 1861.
In the case of irredentism, the break-away area does not become independent but merges into another entity. Irredentism is often seen as a government decision, unlike secession. Both movements are influential phenomena in contemporary politics but, as Horowitz argues, secession movements are much more frequent in postcolonial states. However, he also holds that secession movements are less likely to succeed since they usually have very few military resources compared to irredentist states. For this reason, they normally need prolonged external assistance, often from another state. Such state policies are subject to change, however. For example, the Indian government supported the Sri Lankan Tamil secessionists up to 1987 but then reached an agreement with the Sri Lankan government and helped suppress the movement.
Horowitz holds that it is important to distinguish secessionist and irredentist movements since they differ significantly concerning their motivation, context, and goals. Despite these differences, irredentism and secessionism remain closely related. In some cases, the two tendencies may exist side by side. It is also possible that the advocates of one movement change their outlook and promote the other. Whether a movement favors irredentism or secessionism is determined, among other things, by the prospects of forming an independent state in contrast to joining another state. A further factor is whether the irredentist state is likely to espouse a similar ideology to the one found in the territory intending to break away. The anticipated reaction of the international community is an additional factor, i.e. whether it would embrace, tolerate, or reject the detachment or the absorption by another state.
Revanchism
Irredentism and revanchism are two closely related phenomena because both of them involve the attempt to annex territory which belongs to another state. They differ concerning the motivation fuelling this attempt. Irredentism has a positive goal of building a "greater" state that fulfills the ideals of a nation-state. It aims to unify people claimed to belong together because of their shared national identity based on ethnic, cultural, and historical aspects.
For revanchism, on the other hand, the goal is more negative because it focuses on taking revenge for some form of grievance or injustice suffered earlier. In this regard, it is motivated by resentment and aims to reverse territorial losses due to a previous defeat. In an attempt to contrast irredentism with revanchism, political scientist Anna M. Wittmann argues that Germany's annexation of the Sudetenland in 1938 constitutes a form of irredentism because of its emphasis on a shared language and ethnicity. But she characterizes Germany's invasion of Poland the following year as a form of revanchism due to its justification as a revenge intended to reverse previous territorial losses. The term "revanchism" comes from the French revanche, meaning revenge. It was originally used in the aftermath of the Franco-Prussian War for nationalists intending to reclaim the lost territory of Alsace-Lorraine. Saddam Hussein justified the Iraqi invasion of Kuwait in 1990 by claiming that Kuwait had always been an integral part of Iraq and only became an independent nation due to the interference of the British Empire.
See also
International relations theory
Pan-nationalism
Territorial disputes
Medieval cuisine
Medieval cuisine includes foods, eating habits, and cooking methods of various European cultures during the Middle Ages, which lasted from the 5th to the 15th century. During this period, diets and cooking changed less than they did in the early modern period that followed, when those changes helped lay the foundations for modern European cuisines.
Cereals remained the most important staple during the Early Middle Ages, as rice was introduced to Europe late and the potato was first used only in the 16th century, reaching the wider population much later. Barley, oats, and rye were eaten by the poor, while wheat was generally more expensive. These were consumed as bread, porridge, gruel, and pasta by people of all classes. Cheese, fruits, and vegetables were important supplements for the lower orders, while meat was more expensive and generally more prestigious. Game, a form of meat acquired from hunting, was common only on the nobility's tables. The most prevalent butcher's meats were pork, chicken, and other poultry. Beef, which required greater investment in land, was less common. A wide variety of freshwater and saltwater fish was also eaten, with cod and herring being mainstays among the northern populations.
Slow and inefficient transport made long-distance trade of many foods very expensive (perishability made other foods untransportable). Because of this, the nobility's food was more prone to foreign influence than the cuisine of the poor; it was dependent on exotic spices and expensive imports. As each level of society attempted to imitate the one above it, innovations from international trade and foreign wars from the 12th century onward gradually disseminated through the upper middle class of medieval cities. Aside from economic unavailability of luxuries such as spices, decrees outlawed consumption of certain foods among certain social classes, and sumptuary laws limited conspicuous consumption among the nouveau riche. Social norms also dictated that the food of the working class be less refined, since it was believed there was a natural resemblance between one's way of life and one's food; hard manual labor required coarser, cheaper food.
A type of refined cooking that developed in the Late Middle Ages set the standard among the nobility all over Europe. Common seasonings in the highly spiced sweet-sour repertory typical of upper-class medieval food included verjuice, wine, and vinegar in combination with spices such as black pepper, saffron, and ginger. These, along with the widespread use of honey or sugar, gave many dishes a sweet-sour flavor. Almonds were very popular as a thickener in soups, stews, and sauces, particularly as almond milk.
Dietary norms
The cuisines of the cultures of the Mediterranean Basin since antiquity had been based on cereals, particularly various types of wheat. Porridge, gruel, and later bread became the basic staple foods that made up the majority of calorie intake for most of the population. From the 8th to the 11th centuries, the proportion of various cereals in the diet rose from about a third to three-quarters. Dependence on wheat remained significant throughout the medieval era, and spread northward with the rise of Christianity. In colder climates, however, it was usually unaffordable for the majority population, and was associated with the higher classes. The centrality of bread in religious rituals such as the Eucharist meant that it enjoyed an especially high prestige among foodstuffs. Only olive oil and wine had a comparable value, but both remained quite exclusive outside the warmer grape- and olive-growing regions. The symbolic role of bread as both sustenance and substance is illustrated in a sermon given by Saint Augustine:
The Church
The Catholic and Orthodox Churches, and their calendars, had great influence on eating habits; consumption of meat was forbidden for a full third of the year for most Christians. All animal products, including eggs and dairy products (during the strictest fasting periods also fish), were generally prohibited during Lent and other fasts. Additionally, it was customary for all citizens to fast before taking the Eucharist. These fasts were occasionally for a full day and required total abstinence.
Both the Eastern and the Western churches ordained that feast should alternate with fast. In most of Europe, Fridays were fast days, and fasting was observed on various other days and periods, including Lent and Advent. Meat, and animal products such as milk, cheese, butter, and eggs, were not allowed, and at times also fish. The fast was intended to mortify the body and invigorate the soul, and also to remind the faster of Christ's sacrifice for humanity. The intention was not to portray certain foods as unclean, but rather to teach a spiritual lesson in self-restraint through abstention. During particularly severe fast days, the number of daily meals was also reduced to one. Even if most people respected these restrictions and usually made penance when they violated them, there were also numerous ways of circumventing them, a conflict of ideals and practice summarized by writer Bridget Ann Henisch:
While animal products were to be avoided during times of penance, pragmatic compromises often prevailed. The definition of "fish" was often extended to marine and semi-aquatic animals such as whales, barnacle geese, puffins, and even beavers. The choice of ingredients may have been limited, but that did not mean that meals were smaller. Neither were there any restrictions against (moderate) drinking or eating sweets. Banquets held on fish days could be splendid, and were popular occasions for serving illusion food that imitated meat, cheese, and eggs in various ingenious ways; fish could be moulded to look like venison and fake eggs could be made by stuffing empty egg shells with fish roe and almond milk and cooking them in coals. While Byzantine church officials took a hard-line approach, and discouraged any culinary refinement for the clergy, their Western counterparts were far more lenient. There was also no lack of grumbling about the rigours of fasting among the laity. During Lent, kings and schoolboys, commoners and nobility, all complained about being deprived of meat for the long, hard weeks of solemn contemplation of their sins. At Lent, owners of livestock were even warned to keep an eye out for hungry dogs frustrated by a "hard siege by Lent and fish bones".
The trend from the 13th century onward was toward a more legalistic interpretation of fasting. Nobles were careful not to eat meat on fast days, but still dined in style; fish replaced meat, often as imitation hams and bacon; almond milk replaced animal milk as an expensive non-dairy alternative; faux eggs made from almond milk were cooked in blown-out eggshells, flavoured and coloured with exclusive spices. In some cases, the lavishness of noble tables was outdone by Benedictine monasteries, which served as many as sixteen courses during certain feast days. Exceptions from fasting were frequently made for very broadly defined groups. Thomas Aquinas (about 1225–1274) believed dispensation should be provided for children, the old, pilgrims, workers and beggars, but not the poor as long as they had some sort of shelter. There are many accounts of members of monastic orders who flouted fasting restrictions through clever interpretations of the Bible. Since the sick were exempt from fasting, there often evolved the notion that fasting restrictions only applied to the main dining area, and many Benedictine monks would simply eat their fast-day meals in what was then called the misericord rather than in the refectory. Newly appointed Catholic monastery officials sought to address the problem of fast evasion not merely with moral condemnation, but by making sure that well-prepared non-meat dishes were available on fast days.
Class constraints
Medieval society was highly stratified. In a time when famine was commonplace and social hierarchies were often brutally enforced, food was an important marker of social status in a way that has no equivalent today in most developed countries. According to the ideological norm, society consisted of the three estates of the realm: commoners, that is, the working classes—by far the largest group; the clergy, and the nobility. The relationship between the classes was strictly hierarchical, with the nobility and clergy claiming worldly and spiritual overlordship over commoners. Within the nobility and clergy there were also a number of ranks ranging from kings and popes to dukes, bishops and their subordinates, such as squires and priests. One was expected to remain in one's social class and to respect the authority of the ruling classes. Political power was displayed not just by rule, but also by displaying wealth. Refined nobles dined on fresh game seasoned with exotic spices, and displayed refined table manners. Rough laborers could make do with coarse barley bread, salt pork and beans and were not expected to display etiquette. Even dietary recommendations were different: the diet of the upper classes was considered to be as much a requirement of their refined physical constitution as a sign of economic reality. The digestive system of a lord was considered to be more refined than that of lower-class subordinates and therefore required finer foods.
In the late Middle Ages, the increasing wealth of middle-class merchants and traders meant that commoners began emulating the aristocracy. This threatened to break down some of the symbolic barriers between the nobility and the lower classes. The response came in two forms: literature warning of the dangers of adopting a diet inappropriate for one's class, and sumptuary laws that put a cap on the lavishness of commoners' banquets. Animal parts were even assigned to different social classes.
Dietetics
Medical science of the Middle Ages had a considerable influence on what was considered healthy and nutritious among the upper classes. One's lifestyle—including diet, exercise, appropriate social behavior, and approved medical remedies—was the way to good health, and all types of food were assigned certain properties that affected a person's health. All foodstuffs were also classified on scales ranging from hot to cold and moist to dry, according to the four bodily humours theory proposed by Galen that dominated Western medical science from late Antiquity and throughout the Middle Ages.
Medieval scholars considered human digestion to be a process similar to cooking. The processing of food in the stomach was seen as a continuation of the preparation initiated by the cook. In order for the food to be properly "cooked" and for the nutrients to be properly absorbed, it was important that the stomach be filled in an appropriate manner. Easily digestible foods would be consumed first, followed by gradually heavier dishes. If this regimen were not respected it was believed that heavy foods would sink to the bottom of the stomach, thus blocking the digestion duct, so that food would digest very slowly and cause putrefaction of the body and draw bad humours into the stomach. It was also of vital importance that food of differing properties not be mixed.
Before a meal, the stomach would preferably be "opened" with an apéritif (from Latin aperire, 'to open'), preferably of a hot and dry nature: confections made from honey- or sugar-coated spices like ginger, caraway, and seeds of anise, fennel, or cumin, as well as wine and sweetened fortified milk drinks. Once the stomach had been opened, it should then be "closed" at the end of the meal with the help of a digestive, most commonly a dragée, which during the Middle Ages consisted of lumps of spiced sugar, or hypocras, a wine flavoured with fragrant spices, along with aged cheese. A meal would ideally begin with easily digestible fruit, such as apples. It would then be followed by vegetables such as cabbage, lettuce, purslane, herbs, moist fruits, and light meats, such as chicken or goat kid, with pottages and broths. After that came the "heavy" meats, such as pork and beef, as well as vegetables and nuts, including pears and chestnuts, both considered difficult to digest. It was popular, and recommended by medical expertise, to finish the meal with aged cheese and various digestives.
The most ideal food was that which most closely matched the humour of human beings, i.e. moderately warm and moist. Food should preferably also be finely chopped, ground, pounded and strained to achieve a true mixture of all the ingredients. White wine was believed to be cooler than red and the same distinction was applied to red and white vinegar. Milk was moderately warm and moist, but the milk of different animals was often believed to differ. Egg yolks were considered to be warm and moist while the whites were cold and moist. Skilled cooks were expected to conform to the regimen of humoral medicine. Even if this limited the combinations of food they could prepare, there was still ample room for artistic variation by the chef.
Calorie structure
The calorie content and structure of medieval diet varied over time, from region to region, and between classes. However, for most people, the diet tended to be high-carbohydrate, with most of the budget spent on, and the majority of calories provided by, cereals and alcohol (such as beer). Even though meat was highly valued by all, lower classes often could not afford it, nor were they allowed by the church to consume it every day. In England in the 13th century, meat contributed a negligible portion of calories to a typical harvest worker's diet; however, its share increased after the Black Death and, by the 15th century, it provided about 20% of the total. Even among the lay nobility of medieval England, grain provided 65–70% of calories in the early 14th century, though a generous provision of meat and fish was included, and their consumption of meat increased in the aftermath of the Black Death as well. In one early-15th-century English aristocratic household for which detailed records are available (that of the Earl of Warwick), gentle members of the household received a staggering of assorted meats in a typical meat meal in the autumn and in the winter, in addition to of bread and of beer or possibly wine (and there would have been two meat meals per day, five days a week, except during Lent). In the household of Henry Stafford in 1469, gentle members received of meat per meal, and all others received , and everyone was given of bread and of alcohol. On top of these quantities, some members of these households (usually, a minority) ate breakfast, which would not include any meat, but would probably include another of beer; and uncertain quantities of bread and ale could have been consumed in between meals. The diet of the lord of the household differed somewhat from this structure, including less red meat, more high-quality wild game, fresh fish, fruit, and wine.
In monasteries, the basic structure of the diet was laid down by the Rule of Saint Benedict in the 7th century and tightened by Pope Benedict XII in 1336, but (as mentioned above) monks were adept at "working around" these rules. Wine was restricted to a modest daily allowance, but there was no corresponding limit on beer, and, at Westminster Abbey, each monk was given a daily allowance of beer. Meat of "four-footed animals" was prohibited altogether, year-round, for everyone but the very weak and the sick. This was circumvented in part by declaring that offal, and various processed foods such as bacon, were not meat. Secondly, Benedictine monasteries contained a room called the misericord, where the Rule of Saint Benedict did not apply, and where a large number of monks ate. Each monk would be regularly sent either to the misericord or to the refectory. When Pope Benedict XII ruled that at least half of all monks should be required to eat in the refectory on any given day, monks responded by excluding the sick and those invited to the abbot's table from the reckoning. Overall, a monk at Westminster Abbey in the late 15th century would have been allowed a daily ration of bread; five eggs per day, except on Fridays and in Lent; a ration of meat on four days per week (excluding Wednesday, Friday, and Saturday), except in Advent and Lent; and a ration of fish on three days per week and every day during Advent and Lent.
The overall calorie intake is subject to some debate. One typical set of estimates holds that an adult peasant male needed more calories per day than an adult female, and both lower and higher estimates have been proposed. Those engaged in particularly heavy physical labor, as well as sailors and soldiers, may have consumed considerably more per day. Intakes of aristocrats may have been higher still, and monks consumed more on "normal" days than when fasting. As a consequence of these excesses, obesity was common among the upper classes. Monks, especially, frequently suffered from conditions that were more common among the obese, such as arthritis.
Regional variation
The regional specialties that are a feature of early modern and contemporary cuisine were not in evidence in the sparser documentation that survives. Instead, medieval cuisine can be differentiated by the cereals and the oils that shaped dietary norms and crossed ethnic and, later, national boundaries. Geographical variation in eating was primarily the result of differences in climate, political administration, and local customs that varied across the continent. Though sweeping generalizations should be avoided, more or less distinct areas where certain foodstuffs dominated can be discerned. In the British Isles, northern France, the Low Countries, the northern German-speaking areas, Scandinavia and the Baltic, the climate was generally too harsh for the cultivation of grapes and olives. In the south, wine was the common drink for both rich and poor alike (though the commoner usually had to settle for cheap second-pressing wine) while beer was the commoner's drink in the north and wine an expensive import. Citrus fruits (though not the kinds most common today) and pomegranates were common around the Mediterranean. Dried figs and dates were available in the north, but were used rather sparingly in cooking.
Olive oil was a ubiquitous ingredient in Mediterranean cultures, but remained an expensive import in the north, where oils of poppy, walnut, hazel, and filbert were the most affordable alternatives. Butter and lard, especially after the terrible mortality during the Black Death made them less scarce, were used in considerable quantities in the northern and northwestern regions, especially in the Low Countries. Almost universal in middle- and upper-class cooking all over Europe was the almond, used above all in the ubiquitous and highly versatile almond milk, which served as a substitute in dishes that otherwise required eggs or milk, though the bitter variety of almonds came along much later.
Meals
In Europe, there were typically two meals a day: dinner at mid-day and a lighter supper in the evening. The two-meal system remained consistent throughout the late Middle Ages. Smaller intermediate meals were common, but became a matter of social status, as those who did not have to perform manual labor could go without them. Moralists frowned on breaking the overnight fast too early, and members of the church and cultivated gentry avoided it. For practical reasons, breakfast was still eaten by working men, and was tolerated for young children, women, the elderly and the sick. Because the church preached against gluttony and other weaknesses of the flesh, men tended to be ashamed of the practical weakness that breakfast represented. Lavish dinner banquets and late-night reresopers (from Occitan rèire-sopar, "late supper") with considerable alcoholic beverage consumption were considered immoral. The latter were especially associated with gambling, crude language, drunkenness, and lewd behavior. Minor meals and snacks were common (although also disliked by the church), and working men commonly received an allowance from their employers in order to buy nuncheons, small morsels to be eaten during breaks.
Etiquette
As with almost every part of life at the time, a medieval meal was generally a communal affair. The entire household, including servants, would ideally dine together. To sneak off to enjoy private company was considered a haughty and inefficient egotism in a world where people depended very much on each other. In the 13th century, English bishop Robert Grosseteste advised the Countess of Lincoln: "forbid dinners and suppers out of hall, in secret and in private rooms, for from this arises waste and no honor to the lord and lady." He also recommended watching that the servants not make off with leftovers to make merry at rere-suppers, rather than giving them as alms. Towards the end of the Middle Ages, the wealthy increasingly sought to escape this regime of stern collectivism. When possible, rich hosts retired with their consorts to private chambers where the meal could be enjoyed in greater exclusivity and privacy. Being invited to a lord's chambers was a great privilege and could be used as a way to reward friends and allies and to awe subordinates. It allowed lords to distance themselves further from the household and to enjoy more luxurious treats while serving inferior food to the rest of the household that still dined in the great hall. At major occasions and banquets, however, the host and hostess generally dined in the great hall with the other diners. Although there are descriptions of dining etiquette on special occasions, less is known about the details of day-to-day meals of the elite or about the table manners of the common people and the destitute. However, it can be assumed there were no such extravagant luxuries as multiple courses, luxurious spices or hand-washing in scented water in everyday meals.
Things were different for the wealthy. Before the meal and between courses, shallow basins and linen towels were offered to guests so they could wash their hands, as cleanliness was emphasized. Social codes made it difficult for women to uphold the ideal of immaculate neatness and delicacy while enjoying a meal, so the wife of the host often dined in private with her entourage or ate very little at such feasts. She could then join the dinner only after the potentially messy business of eating was done. Overall, fine dining was a predominantly male affair, and it was uncommon for anyone but the most honored of guests to bring his wife or her ladies-in-waiting. The hierarchical nature of society was reinforced by etiquette whereby the lower-ranked were expected to help the higher, the younger to assist the elder, and men to spare women the risk of sullying dress and reputation by having to handle food in an unwomanly fashion. Shared drinking cups were common even at lavish banquets for all but those who sat at the high table, as was the standard etiquette of breaking bread and carving meat for one's fellow diners.
Food was mostly served on plates or in stew pots, and diners would take their share from the dishes and place it on trenchers of stale bread, wood or pewter with the help of spoons or bare hands. In lower-class households it was common to eat food straight off the table. Knives were used at the table, but most people were expected to bring their own, and only highly favoured guests would be given a personal knife. A knife was usually shared with at least one other dinner guest, unless one was of very high rank or well acquainted with the host. Forks for eating were not in widespread usage in Europe until the early modern period, and early on were limited to Italy. Even there it was not until the 14th century that the fork became common among Italians of all social classes. The change in attitudes can be illustrated by the reactions to the table manners of the Byzantine princess Theodora Doukaina in the late 11th century. She was the wife of Domenico Selvo, the Doge of Venice, and caused considerable dismay among upstanding Venetians. The princess' insistence on having her food cut up by her eunuch servants and then eating the pieces with a golden fork shocked and upset the diners so much that Peter Damian, Cardinal Bishop of Ostia, was later claimed to have interpreted her refined foreign manners as pride, referring to her as "... the Venetian Doge's wife, whose body, after her excessive delicacy, entirely rotted away."
Food preparation
All types of cooking involved the direct use of fire. Kitchen stoves did not appear until the 18th century, and cooks had to know how to cook directly over an open fire. Ovens were used, but they were expensive to construct and existed only in fairly large households and bakeries. It was common for a community to have shared ownership of an oven to ensure that the bread baking essential to everyone was made communal rather than private. There were also portable ovens designed to be filled with food and then buried in hot coals, and even larger ones on wheels that were used to sell pies in the streets of medieval towns. But for most people, almost all cooking was done in simple stewpots, since this was the most efficient use of firewood and did not waste precious cooking juices, making potages and stews the most common dishes. Overall, most evidence suggests that medieval dishes had a fairly high fat content, at least when fat could be afforded. This was considered less of a problem in a time of back-breaking toil, famine, and a greater acceptance—even desirability—of plumpness; only the poor or sick, and devout ascetics, were thin.
Fruit was readily combined with meat, fish and eggs. The recipe for Tart de brymlent, a fish pie from the recipe collection The Forme of Cury, includes a mix of figs, raisins, apples, and pears with fish (salmon, cod, or haddock) and pitted damson plums under the top crust. It was considered important to make sure that the dish agreed with contemporary standards of medicine and dietetics. This meant that food had to be "tempered" according to its nature by an appropriate combination of preparation and mixing certain ingredients, condiments and spices; fish was seen as being cold and moist, and best cooked in a way that heated and dried it, such as frying or oven baking, and seasoned with hot and dry spices; beef was dry and hot and should therefore be boiled; pork was hot and moist and should therefore always be roasted. In some recipe collections, alternative ingredients were suggested with more consideration for their humoral nature than for what a modern cook would consider similarity in taste. In a recipe for quince pie, cabbage is said to work equally well, and in another turnips could be replaced by pears.
The completely edible shortcrust pie did not appear in recipes until the 15th century. Before that the pastry was primarily used as a cooking container in a technique known as huff paste. Extant recipe collections show that gastronomy in the Late Middle Ages developed significantly. New techniques, like the shortcrust pie and the clarification of jelly with egg whites, began to appear in recipes in the late 14th century, and recipes began to include detailed instructions instead of being mere memory aids to an already skilled cook.
Medieval kitchens
In most households, cooking was done on an open hearth in the middle of the main living area, to make efficient use of the heat. This was the most common arrangement, even in wealthy households, for most of the Middle Ages, where the kitchen was combined with the dining hall. Towards the Late Middle Ages a separate kitchen area began to evolve. The first step was to move the fireplaces towards the walls of the main hall, and later to build a separate building or wing that contained a dedicated kitchen area, often separated from the main building by a covered arcade. This way, the smoke, odors and bustle of the kitchen could be kept out of sight of guests, and the fire risk lessened. Few medieval kitchens survive as they were "notoriously ephemeral structures".
Many basic variations of cooking utensils available today, such as frying pans, pots, kettles, and waffle irons, already existed, although they were often too expensive for poorer households. Other tools more specific to cooking over an open fire were spits of various sizes, and material for skewering anything from delicate quails to whole oxen. There were also cranes with adjustable hooks so that pots and cauldrons could easily be swung away from the fire to keep them from burning or boiling over. Utensils were often held directly over the fire or placed into embers on tripods. To assist the cook there were also assorted knives, stirring spoons, ladles and graters. In wealthy households one of the most common tools was the mortar and sieve cloth, since many medieval recipes called for food to be finely chopped, mashed, strained and seasoned either before or after cooking. This was based on a belief among physicians that the finer the consistency of food, the more effectively the body would absorb the nourishment. It also gave skilled cooks the opportunity to elaborately shape the results. Fine-textured food was also associated with wealth; for example, finely milled flour was expensive, while the bread of commoners was typically brown and coarse. A typical procedure was farcing (from the Latin farcio 'to cram'), to skin and dress an animal, grind up the meat and mix it with spices and other ingredients and then return it into its own skin, or mold it into the shape of a completely different animal.
The kitchen staff of huge noble or royal courts occasionally numbered in the hundreds: pantlers, bakers, waferers, sauciers, larderers, butchers, carvers, page boys, milkmaids, butlers, and numerous scullions. While an average peasant household often made do with firewood collected from the surrounding woodlands, the major kitchens of great households had to cope with the logistics of providing at least two meals daily for several hundred people. Guidelines on how to prepare for a two-day banquet can be found in the cookbook Du fait de cuisine ('On cookery'), written in 1420 by Maistre Chiquart, master chef of Duke Amadeus VIII of Savoy, in part to compete with the court of Burgundy. Chiquart recommends that the chief cook should have at hand at least 1,000 cartloads of "good, dry firewood" and a large barnful of coal.
Preservation
Food preservation methods were basically the same as had been used since antiquity, and did not change much until the invention of canning in the early 19th century. The most common and simplest method was to expose foodstuffs to heat or wind to remove moisture, thereby prolonging the durability if not the flavor of almost any type of food from cereals to meats; the drying of food worked by drastically reducing the activity of various water-dependent microorganisms that cause decay. In warm climates this was mostly achieved by leaving food out in the sun, and in the cooler northern climates by exposure to strong winds (especially common for the preparation of stockfish), or in warm ovens, cellars, attics, and at times even in living quarters. Subjecting food to a number of chemical processes such as smoking, salting, brining, conserving, or fermenting also made it keep longer. Most of these methods had the advantage of shorter preparation times and of introducing new flavors. Smoking or salting meat of livestock butchered in autumn was a common household strategy to avoid having to feed more animals than necessary during the lean winter months. Butter tended to be heavily salted (5–10%) in order not to spoil. Vegetables, eggs, or fish were also often pickled in tightly packed jars, containing brine and acidic liquids (lemon juice, verjuice, or vinegar). Another method was to seal the food by cooking it in honey, sugar, or fat, in which it was then stored. Microbial modification was also encouraged, however, by a number of methods; grains and fruits were turned into alcoholic drinks thus killing any pathogens, and milk was fermented and curdled into a multitude of cheeses or buttermilk.
Professional cooking
The majority of the European population before industrialization lived in rural communities or isolated farms and households. The norm was self-sufficiency with only a small percentage of production being exported or sold in markets. Large towns were exceptions and required their surrounding hinterlands to support them with food and fuel. The dense urban population could support a wide variety of food establishments that catered to various social groups. Many of the poor city dwellers had to live in cramped conditions without access to a kitchen or even a hearth, and many did not own the equipment for basic cooking. Food from vendors was in such cases the only option. Cookshops could either sell ready-made hot food, an early form of fast food, or offer cooking services while the customers supplied some or all of the ingredients. Travelers, such as pilgrims en route to a holy site, made use of professional cooks to avoid having to carry their provisions with them. For the more affluent, there were many types of specialist that could supply various foods and condiments: cheesemongers, pie bakers, saucers, and waferers, for example. Well-off citizens who had the means to cook at home could on special occasions hire professionals when their own kitchen or staff could not handle the burden of hosting a major banquet.
Urban cookshops that catered to workers or the destitute were regarded as unsavory and disreputable places by the well-to-do, and professional cooks tended to have a bad reputation. Geoffrey Chaucer's Hodge of Ware, the London cook from the Canterbury Tales, is described as a sleazy purveyor of unpalatable food. French cardinal Jacques de Vitry's sermons from the early 13th century describe sellers of cooked meat as an outright health hazard. While the necessity of the cook's services was occasionally recognized and appreciated, they were often disparaged since they catered to the baser bodily human needs rather than spiritual betterment. The stereotypical cook in art and literature was male, hot-tempered, prone to drunkenness, and often depicted guarding his stewpot from being pilfered by both humans and animals. In the early 15th century, the English monk John Lydgate articulated the beliefs of many of his contemporaries by proclaiming that "Hoot ffir [fire] and smoke makith many an angry cook."
Cereals
The period between 500 and 1300 saw a major change in diet that affected most of Europe. More intense agriculture on ever-increasing acreage resulted in a shift from animal products, like meat and dairy, to various grains and vegetables as the staple of the majority population. Before the 14th century, bread was not as common among the lower classes, especially in the north where wheat was more difficult to grow. A bread-based diet became gradually more common during the 15th century and replaced warm intermediate meals that were porridge- or gruel-based. Leavened bread was more common in wheat-growing regions in the south, while unleavened flatbread of barley, rye, or oats remained more common in northern and highland regions, and unleavened flatbread was also common as provisions for troops.
The most common grains were rye, barley, buckwheat, millet, and oats. Rice remained a fairly expensive import for most of the Middle Ages and was grown in northern Italy only towards the end of the period. Wheat was common all over Europe and was considered to be the most nutritious of all grains, but was more prestigious and thus more expensive. The finely sifted white flour that modern Europeans are most familiar with was reserved for the bread of the upper classes. As one descended the social ladder, bread became coarser, darker, and its bran content increased. In times of grain shortages or outright famine, grains could be supplemented with cheaper and less desirable substitutes like chestnuts, dried legumes, acorns, ferns, and a wide variety of more or less nutritious vegetable matter.
Among the common constituents of a medieval meal, either as part of a banquet or as a small snack, were sops, pieces of bread with which a liquid like wine, soup, broth, or sauce could be soaked up and eaten. Another common sight at the medieval dinner table was frumenty, a thick wheat porridge often boiled in a meat broth and seasoned with spices. Porridges were also made of every type of grain and could be served as desserts or dishes for the sick, if boiled in milk (or almond milk) and sweetened with sugar. Pies filled with meats, eggs, vegetables, or fruit were common throughout Europe, as were turnovers, fritters, doughnuts, and many similar pastries. Grain, either as bread crumbs or flour, was also the most common thickener of soups and stews, alone or in combination with almond milk. By the Late Middle Ages biscuits (cookies in the U.S.) and especially wafers, eaten for dessert, had become high-prestige foods and came in many varieties.
The importance of bread as a daily staple meant that bakers played a crucial role in any medieval community. Bread consumption was high in most of Western Europe by the 14th century. Estimates of daily bread consumption per person are fairly similar across regions. Among the first town guilds to be organized were the bakers, and laws and regulations were passed to keep bread prices stable. The English Assize of Bread and Ale of 1266 listed extensive tables where the size, weight, and price of a loaf of bread were regulated in relation to grain prices. The baker's profit margin stipulated in the tables was later increased through successful lobbying from the London Baker's Company by adding the cost of everything from firewood and salt to the baker's wife, house, and dog. Since bread was such a central part of the medieval diet, swindling by those who were trusted with supplying the precious commodity to the community was considered a serious offense. Bakers who were caught tampering with weights or adulterating dough with less expensive ingredients could receive severe penalties. This gave rise to the "baker's dozen": a baker would give 13 for the price of 12, to be certain of not being known as a cheat.
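The arithmetic behind the "baker's dozen" can be made explicit. The sketch below is purely illustrative: the loaf weight and the shortfall are hypothetical figures, not values from the Assize of Bread, and serve only to show how one extra loaf protects a baker whose loaves run slightly light.

```python
# Hypothetical figures: a dozen loaves owed at 400 g each, baked slightly light.
regulated_weight = 12 * 400   # grams owed for a dozen loaves
actual_loaf = 380             # a slightly underweight loaf

dozen = 12 * actual_loaf          # 4,560 g: short of the 4,800 g owed
bakers_dozen = 13 * actual_loaf   # 4,940 g: safely over the mark

print(dozen >= regulated_weight)         # False -> risk of severe penalty
print(bakers_dozen >= regulated_weight)  # True  -> reputation protected
# The safety margin costs the baker 1/13, or about 7.7%, of output.
```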
Fruits and vegetables
Fruits were popular and could be served fresh, dried, or preserved, and were a common ingredient in many cooked dishes. Since honey and sugar were both expensive, it was common to include many types of fruit in dishes that called for sweeteners of some sort. The fruits of choice in the south were lemons, citrons, bitter oranges (the sweet type was not introduced until several hundred years later), pomegranates, quinces, and grapes. Farther north, apples, pears, plums, and wild strawberries were more common. Figs and dates were eaten all over Europe, but remained rather expensive imports in the north.
While grains were the primary constituent of most meals, vegetables such as cabbages, chard, onions, garlic, and carrots were common foodstuffs. Many of these were eaten daily by peasants and workers and were less prestigious than meat. Cookbooks, which appeared in the late Middle Ages and were intended mostly for those who could afford such luxuries, contained only a small number of recipes using vegetables as the main ingredient. The lack of recipes for many basic vegetable dishes, such as potages, has been interpreted not to mean that they were absent from the meals of the nobility, but rather that they were considered so basic that they did not require recording. Carrots were available in many variants during the Middle Ages: among them a tastier reddish-purple variety and a less prestigious green-yellow type. Various legumes, like chickpeas, fava beans, and field peas were also common and important sources of protein, especially among the lower classes. With the exception of peas, legumes were often viewed with some suspicion by the dietitians advising the upper class, partly because of their tendency to cause flatulence but also because they were associated with the coarse food of peasants. The importance of vegetables to the common people is illustrated by accounts from 16th century Germany stating that many peasants ate sauerkraut from three to four times a day.
Common and often basic ingredients in many modern European cuisines like potatoes, kidney beans, cacao, vanilla, tomatoes, chili peppers, and maize were not available to Europeans until after European contact with the Americas in 1492. Even after their wider availability in Europe, it often took considerable time (sometimes several centuries) for the new foodstuffs to be accepted by society at large.
Dairy products
Milk was an important source of animal protein for those who could not afford meat. It would mostly come from cows, but milk from goats and sheep was also common. Plain fresh milk was not consumed by adults except the poor or sick, and was usually reserved for the very young or elderly. Poor adults would sometimes drink buttermilk or whey or milk that was soured or watered down. Fresh milk was overall less common than other dairy products because of the lack of technology to keep it from spoiling. On occasion it was used in upper-class kitchens in stews, but it was difficult to keep fresh in bulk and almond milk was generally used in its place.
Cheese was far more important as a foodstuff, especially for common people, and it has been suggested that it was, during many periods, the chief supplier of animal protein among the lower classes. Many varieties of cheese eaten today, like Dutch Edam, Northern French Brie and Italian Parmesan, were available and well known in late medieval times. There were also whey cheeses, like ricotta, made from by-products of the production of harder cheeses. Cheese was used in cooking for pies and soups, the latter being common fare in German-speaking areas. Butter, another important dairy product, was in popular use in the regions of Northern Europe that specialized in cattle production in the latter half of the Middle Ages: the Low Countries and Southern Scandinavia. While most other regions used oil or lard as cooking fats, butter was the dominant cooking medium in these areas. Its production also allowed for a lucrative butter export from the 12th century onward.
Meats
While all forms of wild game were popular among those who could obtain it, most meat came from domestic animals. Domestic working animals that were no longer able to work were slaughtered, but their meat was not particularly appetizing and was therefore less valued. Beef was not as common as today because raising cattle was labor-intensive, requiring pastures and feed, and oxen and cows were much more valuable as draught animals and for producing milk. Lamb and mutton were fairly common, especially in areas with a sizeable wool industry, as was veal. Goat meat was consumed in some parts of medieval Europe. Far more common was pork, as domestic pigs required less attention and cheaper feed. Domestic pigs often ran freely even in towns and could be fed on just about any organic waste, and suckling pig was a sought-after delicacy. Just about every part of the pig was eaten, including ears, snout, tail, tongue, and womb. Intestines, bladder, and stomach could be used as casings for sausage or even illusion food such as giant eggs. Among the meats that today are rare or even considered inappropriate for human consumption are the hedgehog and porcupine, occasionally mentioned in late medieval recipe collections. Rabbits remained a rare and highly prized commodity. In England, they were deliberately introduced by the 13th century and their colonies were carefully protected. Further south, domesticated rabbits were commonly bred and raised both for their meat and fur. It is frequently and falsely claimed that they were of particular value for monasteries because newborn rabbits were allegedly declared fish (or at least not meat) by church officials, allowing them to be eaten during Lent.
A wide range of birds were eaten, including pheasants, swans, peafowl, quail, partridge, storks, cranes, pigeons, larks, finches, and just about any other wild bird that could be captured. Swans and peafowl were domesticated to some extent, but were only eaten by the social elite, and more praised for their fine appearance as stunning entertainment dishes, entremets, than for their meat. As today, ducks and geese had been domesticated but were not as popular as the chicken, the poultry equivalent of the pig. The barnacle goose was believed to reproduce not by laying eggs like other birds, but by growing in barnacles, and was hence considered acceptable food for fast and Lent. But at the Fourth Council of the Lateran (1215), Pope Innocent III explicitly prohibited the eating of barnacle geese during Lent, arguing that they lived and fed like ducks and so were of the same nature as other birds.
Meats were more expensive than plant foods and could be up to four times as expensive as bread. Fish was up to 16 times as costly, and was expensive even for coastal populations. This meant that fasts could mean an especially meager diet for those who could not afford alternatives to meat and animal products like milk and eggs. It was only after the Black Death had killed up to half of the European population that meat became more common even for poorer people. The drastic population decline in many areas resulted in a labor shortage, meaning that wages dramatically increased. It also left vast areas of farmland untended, making it available for pasture and putting more meat on the market.
Fish and seafood
Although less prestigious than other animal meats, and often seen as merely an alternative to meat on fast days, seafood was the mainstay of many coastal populations. "Fish" to the medieval person was also a general name for anything not considered a proper land-living animal, including marine mammals such as whales and porpoises. Also included were the beaver, due to its scaly tail and considerable time spent in water, and barnacle geese, due to the belief that they developed underwater in the form of barnacles. Such foods were also considered appropriate for fast days, though the rather contrived classification of barnacle geese as fish was not universally accepted. The Holy Roman Emperor Frederick II examined barnacles and noted no evidence of any bird-like embryo in them, and the secretary of Leo of Rozmital wrote a very skeptical account of his reaction to being served barnacle goose at a fish-day dinner in 1456.
Especially important was the fishing and trade in herring and cod in the Atlantic and the Baltic Sea. The herring was of unprecedented significance to the economy of much of Northern Europe, and it was one of the most common commodities traded by the Hanseatic League, a powerful north German alliance of trading guilds. Kippers made from herring caught in the North Sea could be found in markets as far away as Constantinople. While large quantities of fish were eaten fresh, a large proportion was salted, dried, and, to a lesser extent, smoked. Stockfish, cod that was split down the middle, fixed to a pole and dried, was very common, though preparation could be time-consuming, and meant beating the dried fish with a mallet before soaking it in water. A wide range of mollusks including oysters, mussels, and scallops were eaten by coastal and river-dwelling populations, and freshwater crayfish were seen as a desirable alternative to meat during fish days. Compared to meat, fish was much more expensive for inland populations, especially in Central Europe, and therefore not an option for most. Freshwater fish such as eel, pike, carp, bream, perch, lamprey, salmon, and trout were common.
Drink
While water is often drunk with a meal in modern times, in the Middle Ages concerns over purity, medical recommendations, and its low prestige value made it less favored. As such, alcoholic beverages were preferred. They were seen as more nutritious and beneficial to digestion than water, with the invaluable bonus of being less prone to putrefaction due to the alcohol content. Wine was consumed on a daily basis in most of France and all over the Western Mediterranean wherever grapes were cultivated. Further north it remained the preferred drink of the bourgeoisie and the nobility who could afford it, and far less common among peasants and workers. The drink of commoners in the northern parts of the continent was primarily beer or ale.
Juices, as well as wines, of a multitude of fruits and berries had been known at least since Roman antiquity and were still consumed in the Middle Ages: pomegranate, mulberry and blackberry wines, perry, and cider which was especially popular in the north where both apples and pears were plentiful. Medieval drinks that have survived to this day include prunellé from wild plums (modern-day slivovitz), mulberry gin and blackberry wine. Many variants of mead have been found in medieval recipes, with or without alcoholic content. However, the honey-based drink became less common as a table beverage towards the end of the period and was eventually relegated to medicinal use. Mead has often been presented as the common drink of the Slavs. This is partially true since mead bore great symbolic value at important occasions. When agreeing on treaties and other important affairs of state, mead was often presented as a ceremonial gift. It was also common at weddings and baptismal parties, though in limited quantity due to its high price. In medieval Poland, mead had a status equivalent to that of imported luxuries, such as spices and wines. Kumis, the fermented milk of mares or camels, was known in Europe, but as with mead was mostly something prescribed by physicians.
Plain milk was not consumed by adults except the poor or sick, being reserved for the very young or elderly, and then usually as buttermilk or whey. Tea and coffee, both made from plants found in the Old World, were popular in East Asia and the Muslim world during the Middle Ages. However, neither of these non-alcoholic social drinks was consumed in Europe before the late 16th and early 17th centuries.
Wine
Wine was commonly drunk and was also regarded as the most prestigious and healthy choice. According to Galen's dietetics, it was considered hot and dry; however, these qualities were moderated when wine was watered down. Unlike water or beer, which were considered cold and moist, consumption of wine in moderation (especially red wine) was, among other things, believed to aid digestion, generate good blood and brighten the mood. The quality of wine differed considerably according to vintage, the type of grape and more importantly, the number of grape pressings. The first pressing was made into the finest and most expensive wines which were reserved for the upper classes. The second and third pressings were correspondingly of lower quality and alcohol content. Common folk usually had to settle for a cheap white or rosé from a second or even third pressing, meaning that it could be consumed in quite generous amounts without leading to heavy intoxication. For the poorest (or the most pious), watered-down vinegar (similar to Ancient Roman posca) would often be the only available choice.
The aging of high-quality red wine required specialized knowledge as well as expensive storage and equipment, and resulted in an even more expensive end product. Judging from the advice given in many medieval documents on how to salvage wine that bore signs of going bad, preservation must have been a widespread problem. Even if vinegar was a common ingredient, there was only so much of it that could be used. The 14th-century cookbook Le Viandier describes several methods for salvaging spoiling wine: making sure that the wine barrels were always topped up, or adding a mixture of dried and boiled white grape seeds with the ash of dried and burnt lees of white wine; both were effective bactericides, even if the chemical processes were not understood at the time. Spiced or mulled wine was not only popular among the affluent, but was also considered especially healthy by physicians. Wine was believed to act as a kind of vaporizer and conduit of other foodstuffs to every part of the body, and the addition of fragrant and exotic spices would make it even more wholesome. Spiced wines were usually made by mixing an ordinary (red) wine with an assortment of spices such as ginger, cardamom, pepper, grains of paradise, nutmeg, cloves and sugar. These would be contained in small bags which were either steeped in wine or had liquid poured over them to produce hypocras and claré. By the 14th century, bagged spice mixes could be bought ready-made from spice merchants.
Beer
While wine was the most common table beverage in much of Europe, this was not the case in the northern regions where grapes were not cultivated. Those who could afford it drank imported wine; even for nobility in these areas, however, it was common to drink beer or ale, particularly towards the end of the Middle Ages. In England, the Low Countries, northern Germany, Poland and Scandinavia, beer was consumed on a daily basis by people of all social classes and age groups. By the mid-15th century, barley, a cereal known to be somewhat poorly suited for breadmaking but excellent for brewing, accounted for 27% of all cereal acreage in England. However, the heavy influence from Arab and Mediterranean culture on medical science (particularly due to the Reconquista and the influx of Arabic texts) meant that beer was often disfavoured. For most medieval Europeans, it was a humble brew compared with common southern drinks and cooking ingredients, such as wine, lemons and olive oil. Even comparatively exotic products like camel milk and gazelle meat generally received more positive attention in medical texts. Beer was just an acceptable alternative and was assigned various negative qualities. In 1256, the Sienese physician Aldobrandino described beer in the following way:
But from whichever it is made, whether from oats, barley or wheat, it harms the head and the stomach, it causes bad breath and ruins the teeth, it fills the stomach with bad fumes, and as a result anyone who drinks it along with wine becomes drunk quickly; but it does have the property of facilitating urination and makes one's flesh white and smooth.
The intoxicating effect of beer was believed to last longer than that of wine, but it was also admitted that it did not create the "false thirst" associated with wine. Though less prominent than in the north, beer was consumed in northern France and the Italian mainland. Perhaps as a consequence of the Norman Conquest and the travelling of nobles between France and England, one French variant described in the 14th century cookbook Le Menagier de Paris was called godale (most likely a direct borrowing from the English 'good ale') and was made from barley and spelt, but without hops. In England there were also the variants poset ale, made from hot milk and cold ale, and brakot or braggot, a spiced honey ale prepared much like hypocras.
That hops could be used for flavoring beer had been known at least since Carolingian times, but was adopted gradually due to difficulties in establishing the appropriate proportions. Before the widespread use of hops, gruit, a mix of various herbs, had been used. Gruit had the same preserving properties as hops, though its reliability depended on what herbs were in it; as such, the end result was much more variable. Another flavoring method was to increase the alcohol content, but this was more expensive and lent the beer the undesired characteristic of being a quick and heavy intoxicant. Hops may have been widely used in England in the tenth century; they were grown in Austria by 1208 and in Finland by 1249, and possibly much earlier.
Before hops became popular as an ingredient, it was difficult to preserve this beverage for any length of time, so it was mostly consumed fresh. It was unfiltered, and therefore cloudy, and likely had a lower alcohol content than the typical modern equivalent. Quantities of beer consumed by medieval residents of Europe, as recorded in contemporary literature, far exceed intakes in the modern world. For example, sailors in 16th-century England and Denmark received substantial daily rations of beer, and Polish peasants consumed comparably large quantities each day.
In the Early Middle Ages, beer was brewed primarily in monasteries, and on a smaller scale, in individual households. By the High Middle Ages, breweries in the fledgling medieval towns of northern Germany began to take over production. Though most of the breweries were small family businesses that employed at most eight to ten people, regular production allowed for investment in better equipment and increased experimentation with new recipes and brewing techniques. These operations later spread to the Netherlands in the 14th century, then to Flanders and Brabant, and reached England by the 15th century. Hopped beer became very popular in the last decades of the Late Middle Ages. In England and the Low Countries, per capita annual consumption was high, and beer was consumed with practically every meal: low alcohol-content beers for breakfast, and stronger ones later in the day. When perfected as an ingredient, hops could make beer keep for six months or more, and facilitated extensive exports. In Late Medieval England, the word beer came to mean a hopped beverage, whereas ale had to be unhopped. In turn, ale or beer was classified as "strong" or "small", the latter less intoxicating, regarded as a drink of temperate people, and suitable for consumption by children. As late as 1693, John Locke stated that the only drink he considered suitable for children of all ages was small beer, while criticizing the apparently common practice among Englishmen of the time to give their children wine and strong alcohol.
By modern standards, the brewing process was relatively inefficient, but capable of producing quite strong alcohol when that was desired. A 1998 attempt to recreate medieval English "strong ale" using recipes and techniques of the era (albeit with the use of modern yeast strains) yielded a strongly alcoholic brew with original gravity of 1.091 (corresponding to a potential alcohol content over 9%) and "pleasant, apple-like taste".
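The relation between the quoted original gravity and alcohol content can be illustrated with the standard approximation used in modern homebrewing; the conversion constant and the assumption of complete fermentation (a final gravity of 1.000) are modern conventions, not claims from the 1998 recreation itself.

```python
def potential_abv(original_gravity: float, final_gravity: float = 1.000) -> float:
    """Approximate alcohol by volume (%) from specific-gravity readings,
    using the common homebrewing rule ABV ~= (OG - FG) * 131.25."""
    return (original_gravity - final_gravity) * 131.25

og = 1.091  # original gravity reported for the recreated "strong ale"
print(f"Potential ABV at OG {og}: {potential_abv(og):.1f}%")  # ~11.9%
# A real ale finishes above 1.000, so the actual strength would be lower,
# which is consistent with the reported figure of "over 9%".
```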
Distillates
The ancient Greeks and Romans knew of the technique of distillation, but it was not practiced on a major scale in Europe until after the invention of alembics, which feature in manuscripts from the ninth century onwards. Distillation was believed by medieval scholars to produce the essence of the liquid being purified, and the term aqua vitae ('water of life') was used as a generic term for all kinds of distillates. The early use of various distillates, alcoholic or not, was varied, but it was primarily culinary or medicinal; grape syrup mixed with sugar and spices was prescribed for a variety of ailments, and rosewater was used as a perfume and cooking ingredient and for hand washing. Alcoholic distillates were also occasionally used to create dazzling, fire-breathing entremets (a type of entertainment dish after a course) by soaking a piece of cotton in spirits. It would then be placed in the mouth of the stuffed, cooked and occasionally redressed animals, and lit just before presenting the creation.
Aqua vitae in its alcoholic forms was highly praised by medieval physicians. In 1309, Arnaldus of Villanova wrote that "[i]t prolongs good health, dissipates superfluous humours, reanimates the heart and maintains youth." In the Late Middle Ages, the production of moonshine started to pick up, especially in the German-speaking regions. By the 13th century, Hausbrand (literally 'home-burnt' from gebrannter wein, brandwein 'burnt [distilled] wine') was commonplace, marking the origin of brandy. Towards the end of the Late Middle Ages, the consumption of spirits became so ingrained even among the general population that restrictions on sales and production began to appear in the late 15th century. In 1496, the city of Nuremberg issued restrictions on the selling of aquavit on Sundays and official holidays.
Herbs, spices, and condiments
Spices were among the most luxurious products available in the Middle Ages, the most common being black pepper, cinnamon (and the cheaper alternative cassia), cumin, nutmeg, ginger, and cloves. They all had to be imported from plantations in Asia and Africa, which made them extremely expensive, and gave them social cachet such that pepper, for example, was hoarded, traded and conspicuously donated in the manner of gold bullion. It has been estimated that around 1,000 tons of pepper and 1,000 tons of the other common spices were imported into Western Europe each year during the late Middle Ages. The value of these goods was the equivalent of a yearly supply of grain for 1.5 million people. While pepper was the most common spice, the most exclusive (though not the most obscure in its origin) was saffron, used as much for its vivid yellow-red color as for its flavor, for according to the humours, yellow signified hot and dry, valued qualities; turmeric provided a yellow substitute, and touches of gilding at banquets supplied both the medieval love of ostentatious show and Galenic dietary lore: at the sumptuous banquet that Cardinal Riario offered to Eleanor of Naples in June 1473, the bread was gilded. Among the spices that have now fallen into obscurity are grains of paradise, a relative of cardamom which almost entirely replaced pepper in late medieval north French cooking, long pepper, mace, spikenard, galangal, and cubeb. Sugar, unlike today, was considered to be a type of spice due to its high cost and humoral qualities. Few dishes employed just one type of spice or herb, but rather a combination of several different ones. Even when a dish was dominated by a single flavor it was usually combined with another to produce a compound taste, for example parsley and cloves or pepper and ginger.
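A rough back-of-the-envelope calculation puts the quoted import figures in perspective; the population estimate below is an assumption chosen for illustration and does not come from the source.

```python
# Quoted annual imports into Western Europe in the late Middle Ages.
pepper_tons = 1_000
other_spice_tons = 1_000

# Assumed (hypothetical) population of late-medieval Western Europe.
population = 60_000_000

grams_per_person = (pepper_tons + other_spice_tons) * 1_000_000 / population
print(f"~{grams_per_person:.0f} g of imported spice per person per year")
# ~33 g per person per year: tiny quantities, consistent with spices being
# a conspicuous luxury rather than a dietary staple.
```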
Common herbs such as sage, mustard, and parsley were grown and used in cooking all over Europe, as were caraway, mint, dill, and fennel. Many of these plants grew throughout all of Europe or were cultivated in gardens, and were a cheaper alternative to exotic spices. Mustard was particularly popular with meat dishes and was described by Hildegard of Bingen (1098–1179) as poor man's food. While locally grown herbs were less prestigious than spices, they were still used in upper-class food, but were then usually less prominent or included merely as coloring. Anise was used to flavor fish and chicken dishes, and its seeds were served as sugar-coated comfits.
Surviving medieval recipes frequently call for flavoring with a number of sour, tart liquids. Wine, verjuice (the juice of unripe grapes or fruits), vinegar, and the juices of various fruits, especially those with tart flavors, were almost universal and a hallmark of late medieval cooking. In combination with sweeteners and spices, they produced a distinctive "pungent, fruity" flavor. Equally common, and used to complement the tanginess of these ingredients, were (sweet) almonds. They were used in a variety of ways: whole, shelled or unshelled, slivered, ground and, most importantly, processed into almond milk. This last type of non-dairy milk product is probably the single most common ingredient in late medieval cooking and blended the aroma of spices and sour liquids with a mild taste and creamy texture.
Salt was ubiquitous and indispensable in medieval cooking. Salting and drying was the most common form of food preservation and meant that fish and meat in particular were often heavily salted. Many medieval recipes specifically warn against oversalting and there were recommendations for soaking certain products in water to get rid of excess salt. Salt was present during more elaborate or expensive meals. The richer the host, and the more prestigious the guest, the more elaborate would be the container in which it was served and the higher the quality and price of the salt. Wealthy guests were seated "above the salt", while others sat "below the salt"; salt cellars were made of pewter, precious metals or other fine materials, often intricately decorated. The rank of a diner also decided how finely ground and white the salt was. Salt for cooking, preservation or for use by common people was coarser; sea salt, or "bay salt", in particular, had more impurities, and was described in colors ranging from black to green. Expensive salt, on the other hand, looked like the standard commercial salt common today.
Sweets and desserts
The term "dessert" comes from the Old French desservir, 'to clear a table', literally 'to un-serve', and originated during the Middle Ages. It would typically consist of dragées and mulled wine accompanied by aged cheese, and by the Late Middle Ages could also include fresh fruit covered in honey, sugar, or syrup and boiled-down fruit pastes. Sugar, from its first appearance in Europe, was viewed as much as a drug as a sweetener; its long-lived medieval reputation as an exotic luxury encouraged its appearance in elite contexts accompanying meats and other dishes that to modern taste are more naturally savoury. There were a wide variety of fritters, crêpes with sugar, sweet custards and darioles, almond milk and eggs in a pastry shell that could also include fruit and sometimes even bone marrow or fish. German-speaking areas had a particular fondness for krapfen: fried pastries and dough with various sweet and savory fillings. Marzipan in many forms was well-known in Italy and southern France by the 1340s, and is assumed to be of Arab origin. Anglo-Norman cookbooks are full of recipes for sweet and savory custards, potages, sauces, and tarts with strawberries, cherries, apples, and plums. The English chefs also had a penchant for using flower petals such as roses, violets, and elder flowers. An early form of quiche can be found in Forme of Cury, a 14th-century recipe collection, as a Torte de Bry with a cheese and egg yolk filling.
Le Ménagier de Paris ("Parisian Household Book"), written in 1393, includes a quiche recipe made with three kinds of cheese, eggs, beet greens, spinach, fennel fronds, and parsley.
In northern France, a wide assortment of waffles and wafers was eaten with cheese and hypocras or a sweet malmsey as issue de table ('departure from the table'). The ever-present candied ginger, coriander, aniseed and other spices were referred to as épices de chambre ('parlor spices') and were taken as digestibles at the end of a meal to "close" the stomach. Like their Muslim counterparts in Spain, the Arab conquerors of Sicily introduced a wide variety of new sweets and desserts that eventually found their way to the rest of Europe. Just like Montpellier, Sicily was once famous for its comfits, nougat candy (torrone, or turrón in Spanish) and almond clusters (confetti). From the south, the Arabs also brought the art of ice cream-making that produced sorbet and several examples of sweet cakes and pastries; cassata alla Siciliana (from Arabic qas'ah, the term for the terracotta bowl with which it was shaped), made from marzipan, sponge cake with sweetened ricotta, and cannoli alla Siciliana, originally cappelli di turchi ('Turkish hats'), fried, chilled pastry tubes with a sweet cheese filling.
Historiography and sources
Research into medieval foodways was, until around 1980, a somewhat neglected field of study. Misconceptions and outright errors were quite common among historians, and are still present as part of the popular view of the Middle Ages as a backward, primitive and barbaric era. Medieval cookery was described as revolting due to the often unfamiliar combination of flavors, the perceived lack of vegetables and a liberal use of spices. The heavy use of spices has been popular as an argument to support the claim that spices were employed to disguise the flavor of spoiled meat, a conclusion without support in historical fact and contemporary sources. Fresh meat could be procured throughout the year by those who could afford it. The preservation techniques available at the time, although crude by today's standards, were perfectly adequate. The astronomical cost and high prestige of spices, and thereby the reputation of the host, would have been effectively undone if wasted on cheap and poorly handled foods.
The common method of grinding and mashing ingredients into pastes and the many potages and sauces has been used as an argument that most adults within the medieval nobility lost their teeth at an early age, and hence were forced to eat nothing but porridge, soup and ground-up meat. This has been demonstrated to be an unfounded theory by historians such as Terence Scully.
The numerous descriptions of banquets from the later Middle Ages concentrated on the pageantry of the event rather than the minutiae of the food, which was not the same for most banqueters as the choice mets served at the high table. Banquet dishes stood apart from the mainstream of cuisine, and have been described by historian Maguelonne Toussaint-Samat as "the outcome of grand banquets serving political ambition rather than gastronomy; today as yesterday".
Cookbooks
Cookbooks, or more specifically, recipe collections, compiled in the Middle Ages are among the most important historical sources for medieval cuisine. The first cookbooks began to appear towards the end of the 13th century. The Liber de Coquina, perhaps originating near Naples, and the Tractatus de modo preparandi have found a modern editor in Marianne Mulon, and a cookbook from Assisi found at Châlons-sur-Marne has been edited by Maguelonne Toussaint-Samat. Though it is assumed that they describe real dishes, food scholars do not believe they were used as cookbooks might be today, as a step-by-step guide through the cooking procedure that could be kept at hand while preparing a dish. Few in a kitchen, at those times, would have been able to read, and working texts have a low survival rate.
The recipes were often brief and did not give precise quantities. Cooking times and temperatures were seldom specified since accurate portable clocks were not available and since all cooking was done with fire. At best, cooking times could be specified as the time it took to say a certain number of prayers or how long it took to walk around a certain field. Professional cooks were taught their trade through apprenticeship and practical training, working their way up in the highly defined kitchen hierarchy. A medieval cook employed in a large household would most likely have been able to plan and produce a meal without the help of recipes or written instruction. Due to the generally good condition of surviving manuscripts it has been proposed by food historian Terence Scully that they were records of household practices intended for the wealthy and literate master of a household, such as Le Ménagier de Paris from the late 14th century. Over 70 collections of medieval recipes survive today, written in several major European languages.
The repertory of housekeeping instructions laid down by manuscripts like the Ménagier de Paris also includes many details of overseeing correct preparations in the kitchen. Towards the onset of the early modern period, in 1474, the Vatican librarian Bartolomeo Platina wrote De honesta voluptate et valetudine ("On honorable pleasure and health") and the physician Iodocus Willich edited Apicius in Zürich in 1563.
High-status exotic spices and rarities like ginger, pepper, cloves, sesame, citron leaves and "onions of Escalon" all appear in an eighth-century list of spices that the Carolingian cook should have at hand. It was written by Vinidarius, whose excerpts of Apicius survive in an eighth-century uncial manuscript. Vinidarius's own dates may not be much earlier.
See also
Early modern European cuisine
Guillaume Tirel
Guild feasts in medieval England
Tudor food and drink
Medieval household
Pre-Columbian cuisine
Notes
References
External links
Academic resources about medieval cuisine
Medieval cookbooks at the British Library
The Forme of Cury, a medieval cookbook on Project Gutenberg
Resources on medieval and early modern European food
Information on medieval bread
Book chapter on feeding the poor in medieval Catalonia
Journal article on early-medieval diet
Balkanization
Balkanization or Balkanisation is the process involving the fragmentation of an area, country, or region into multiple smaller and hostile units. It is usually caused by differences in ethnicity, culture, religion, and geopolitical interests.
The term was coined in the early 20th century and has its roots in the depiction of events during the Balkan Wars (1912–1913) and World War I (1914–1918), referring specifically to incidents that had transpired earlier in the Balkan Peninsula.
The term is pejorative; when fragmentation is sponsored or encouraged by a sovereign third party, the term has been used as an accusation against that third party. Controversially, the term is often used by voices for the status quo to underscore the dangers of acrimonious or runaway secessionism. The Balkan Peninsula is seen as an example of a shatter belt in geopolitics.
Origins of the term
Coined in the early 20th century, the term "Balkanization" traces its origins to the depiction of events during the Balkan Wars (1912–1913) and the First World War (1914–1918). It did not emerge during the gradual secession of Balkan nations from the Ottoman Empire over the 19th century, but was coined at the end of the First World War. Albania was the only addition to the existing Balkan map at that time, as other nations had already formed in the nineteenth century. The term was initially employed by journalists and politicians, who used it as a conceptual tool to interpret the evolving global order resulting from the collapse of the Habsburg and Romanov Empires and the subsequent secession of Balkan nations following the Ottoman Empire's disintegration in the nineteenth century. After the Second World War (1939–1945), the term underwent significant development, expanding beyond its original context to encompass diverse fields such as linguistics, demography, information technology, gastronomy, and more. This expansion extended its descriptive reach to various phenomena, often with pejorative connotations. In response, critical scholars in the late 20th and early 21st centuries sought to denaturalize and reclaim 'balkanization'.
Nations and societies
The term (coined in the early 20th century in the aftermath of the collapse of the Ottoman Empire) refers to the division of the Balkan peninsula, which was ruled almost entirely by the Ottoman Empire, into a number of smaller states between 1817 and 1912. It came into common use in the immediate aftermath of the First World War, with reference to the many new states that arose from the collapse of the Austro-Hungarian Empire and the Ottoman Empire.
Uses to stir opinion
Countries in Europe that united historically distinct peoples or nations relatively recently have seen outspoken separatists, prompting reactionary voices who fear Balkanization. The Iberian Peninsula, especially Spain, has seen voices fearing disorderly rupture since the time of Al-Andalus (which ended in 1492). Its main separatist movements today are Basque separatism and Catalan independentism.
Canada is a stable country but has separatist movements, the strongest of which is the Quebec sovereignty movement, which seeks to create a nation-state in Quebec, home to the majority of Canada's French Canadian population. Two referendums have been held to decide the question, one in 1980 and one in 1995; both were lost by the separatists, the latter by a small margin. Less mainstream and smaller movements also exist in the Canadian Prairies, especially Alberta, in protest at what is seen as domination of Canadian politics by Quebec and Ontario. Saskatchewan Premier Roy Romanow also considered separation from Canada had the 1995 referendum succeeded, which would have led to the balkanization of Canada.
Quebec has been the scene of a small but vociferous partition movement on the part of Anglo-Quebecer activist groups opposed to the idea of Quebec independence, since 80% of the province is francophone. One such project is the Proposal for the Province of Montreal, which calls for the establishment of a province separate from Quebec for Montreal's strongly anglophone and allophone (mother tongue neither English nor French) communities.
In January 2007, growing support for Scottish independence prompted Gordon Brown, then Chancellor of the Exchequer of the United Kingdom and later Prime Minister, to speak of a "Balkanisation of Britain". Independence movements in the United Kingdom also exist in England, in Cornwall and Northern England (themselves parts of England), in Wales, and in Northern Ireland.
In Africa
Bates, Coatsworth and Williamson argued that Balkanization was observed on a large scale first in West Africa and then in British East Africa. In the 1960s, countries in the region started to opt for "autonomy within the French community" in the postcolonial era. Countries in the CFA franc zone were allowed to impose tariffs, regulate trade and manage transport services.
Zambia, Zimbabwe, Malawi, Uganda and Tanzania achieved independence toward the end of the Great Powers' colonial era, as the postcolonial period came about. The period also saw the breakdown of the Federation of Rhodesia and Nyasaland as well as the East African High Commission. The splintering into today's nations was a result of a movement towards closed economies: countries adopted anti-trade and anti-market policies, and tariff rates were 15% higher than in OECD countries during the 1970s and 1980s. Furthermore, countries took approaches to subsidise their own local industries, but the interior markets were small in scale. Transport networks were fragmented, regulations on labor and capital flows were increased, and price controls were introduced. Between 1960 and 1990, balkanization led to disastrous results: the GDP of these regions was one tenth that of OECD countries. Balkanization also resulted in what van de Walle called "typically fairly overvalued exchange rates" in Africa, and contributed to what Bates, Coatsworth and Williamson claimed to be a lost decade in Africa.
Economic stagnation ended only in the mid-1990s, as countries within the region began to implement more stabilization policies. Originally high exchange rates eventually fell to more reasonable levels after devaluations in 1994; by that year, the number of countries with an exchange rate 50 percent higher than the official rate had decreased from 18 to four. However, according to van de Walle, there has still been limited progress in improving trade policies within the region, and the post-independence countries still rely heavily on donors for development plans. Balkanization thus still has an impact on Africa today, although this causation narrative is not popular in many circles.
In the Levant
During the 1980s, the Lebanese academic and writer Georges Corm used the term balkanization to describe attempts by supporters of Israel to create buffer states based on ethnic backgrounds in the Levant in order to protect Israeli sovereignty. In 2013, the French journalist Bernard Guetta, writing in the newspaper Libération, applied the term to:
Lebanon's political division between Muslims, Christians and Druze.
The Syrian Civil War.
See also
Balkan Wars
Balkan Federation
Breakup of Yugoslavia
Cuius regio, eius religio
Cyber-balkanization
Detachment (territory)
Dissolution of the Ottoman Empire
Dissolution of Austria-Hungary
Dissolution of the Soviet Union
Divide and rule
Feudal fragmentation
Kleinstaaterei
Lebanonization
Levantinization
Pillarisation
Protracted social conflict
Secession
Self-determination
Self-governance
Shatter belt (geopolitics)
Sovereignty
Treaty of Sèvres
Treaty of Trianon
Westphalian sovereignty
References
Citations
Bibliography
External links
1910s neologisms
19th century in the Ottoman Empire
Balkans
Geopolitical terminology
Metaphors referring to places
Political terminology
Politics by region
Sectarian violence
Separatism
Political pejoratives
La Tène culture
The La Tène culture was a European Iron Age culture. It developed and flourished during the late Iron Age (from about 450 BC to the Roman conquest in the 1st century BC), succeeding the early Iron Age Hallstatt culture without any definite cultural break, under considerable Mediterranean influence from the Greeks in pre-Roman Gaul, the Etruscans, and the Golasecca culture, although its artistic style did not depend on those Mediterranean influences.
La Tène culture's territorial extent corresponded to what is now France, Belgium, Switzerland, Austria, England, Southern Germany, the Czech Republic, Northern and Central Italy, Slovenia, Hungary and Liechtenstein, as well as adjacent parts of the Netherlands, Slovakia, Serbia, Croatia, Transylvania (western Romania), and Transcarpathia (western Ukraine); it reached all the way to Galatia in Asia Minor (today Turkey). The Celtiberians of western Iberia shared many aspects of the culture, though not generally the artistic style. To the north extended the contemporary Pre-Roman Iron Age of Northern Europe, including the Jastorf culture of Northern Germany and Denmark.
Centered on ancient Gaul, the culture became very widespread and encompassed a wide variety of local differences. It is often distinguished from earlier and neighbouring cultures mainly by the La Tène style of Celtic art, characterized by curving "swirly" decoration, especially of metalwork.
It is named after the type site of La Tène on the north side of Lake Neuchâtel in Switzerland, where thousands of objects had been deposited in the lake, as was discovered after the water level dropped in 1857. La Tène is the type site and the term archaeologists use for the later period of the culture and art of the ancient Celts, a term firmly entrenched in popular understanding but considered problematic by modern scholarship.
Periodization
Extensive contacts through trade are recognized in foreign objects deposited in elite burials; stylistic influences on La Tène material culture can be recognized in Etruscan, Italic, Greek, Dacian and Scythian sources. Datable Greek pottery and analyses employing scientific techniques such as dendrochronology and thermoluminescence help provide date ranges for an absolute chronology at some La Tène sites.
La Tène history was originally divided into "early", "middle" and "late" stages based on the typology of the metal finds (Otto Tischler 1885), with the Roman occupation greatly disrupting the culture, although many elements remain in Gallo-Roman and Romano-British culture. A broad cultural unity was not paralleled by overarching social-political unifying structures, and the extent to which the material culture can be linguistically linked is debated. The art history of La Tène culture has various schemes of periodization.
The archaeological period is now mostly divided into four sub-periods, following Paul Reinecke.
History
The preceding final phase of the Hallstatt culture, HaD, c. 650–450 BC, was also widespread across Central Europe, and the transition over this area was gradual, being mainly detected through La Tène style elite artefacts, which first appear on the western edge of the old Hallstatt region.
Though there is no agreement on the precise region in which La Tène culture first developed, there is a broad consensus that the centre of the culture lay on the northwest edges of Hallstatt culture, north of the Alps, within the region bounded in the west by the valleys of the Marne and Moselle and the nearby part of the Rhineland. In the east, the western end of the old Hallstatt core area in modern Bavaria, the Czech Republic, Austria and Switzerland formed a somewhat separate "eastern style province" in the early La Tène, joining with the western area in Alsace. In 1994 a prototypical ensemble of elite grave sites of the early 5th century BCE was excavated at Glauberg in Hesse, northeast of Frankfurt-am-Main, in a region that had formerly been considered peripheral to the La Tène sphere. The site at La Tène itself was therefore near the southern edge of the original "core" area (as is also the case for the Hallstatt site relative to its core).
The establishment of a Greek colony, soon very successful, at Massalia (modern Marseilles) on the Mediterranean coast of France led to great trade with the Hallstatt areas up the Rhone and Saone river systems, and early La Tène elite burials like the Vix Grave in Burgundy contain imported luxury goods along with artifacts produced locally. Most areas were probably controlled by tribal chiefs living in hilltop forts, while the bulk of the population lived in small villages or farmsteads in the countryside.
By 500 BCE the Etruscans had expanded to border the Celts in north Italy, and trade across the Alps began to overtake trade with the Greeks, while the Rhone route declined. Booming areas included the middle Rhine, with large iron ore deposits, the Marne and Champagne regions, and also Bohemia, although there trade with the Mediterranean area was much less important. Trading connections and wealth no doubt played a part in the origin of the La Tène style, though how large a part remains much discussed; specific Mediterranean-derived motifs are evident, but the new style does not depend on them.
Barry Cunliffe notes localization of La Tène culture during the 5th century BCE when there arose "two zones of power and innovation: a Marne – Moselle zone in the west with trading links to the Po Valley via the central Alpine passes and the Golasecca culture, and a Bohemian zone in the east with separate links to the Adriatic via the eastern Alpine routes and the Venetic culture".
From their homeland, La Tène culture expanded in the 4th century BCE to more of modern France, Germany, and Central Europe, and beyond to Hispania, northern and central Italy, the Balkans, and even as far as Asia Minor, in the course of several major migrations. La Tène style artefacts start to appear in Britain around the same time, and in Ireland rather later. The style of "Insular La Tène" art is somewhat different, and the artefacts are initially found in some parts of the islands but not others. Migratory movements seem at best only partly responsible for the diffusion of La Tène culture there, and perhaps in other parts of Europe.
By about 400 BCE, the evidence for Mediterranean trade becomes sparse; this may be because the expanding Celtic populations began to migrate south and west, coming into violent conflict with the established populations, including the Etruscans and Romans.
The settled life in much of the La Tène homelands also seems to have become much more unstable and prone to wars. In about 387 BCE, the Celts under Brennus defeated the Romans and then sacked Rome, establishing themselves as the most prominent threats to the Roman homeland, a status they would retain through a series of Roman-Gallic wars until Julius Caesar's final conquest of Gaul in 58–50 BCE. The Romans prevented the Celts from reaching very far south of Rome, but on the other side of the Adriatic Sea groups passed through the Balkans to reach Greece, where Delphi was attacked and sacked in 279 BCE, and Asia, where Galatia was established as a Celtic area of Anatolia. By this time, the La Tène style was spreading to the British Isles, though apparently without any significant movements in population.
After about 275 BCE, Roman expansion into the La Tène area began with the conquest of Gallia Cisalpina. The conquest of Gallia Celtica followed in 121 BCE and was complete with the Gallic Wars of the 50s BCE. Gaulish culture quickly assimilated to Roman culture, giving rise to the hybrid Gallo-Roman culture of Late Antiquity.
Ethnology
The bearers of the La Tène culture were the people known as Celts or Gauls to ancient ethnographers. Ancient Celtic culture had no written literature of its own, but rare examples of epigraphy in the Greek or Latin alphabets exist allowing the fragmentary reconstruction of Continental Celtic.
Current knowledge of this cultural area is derived from three sources: archaeological evidence, Greek and Latin literary records, and ethnographical evidence suggesting some La Tène artistic and cultural survivals in traditionally Celtic regions of far western Europe.
Some of the societies that are archaeologically identified with La Tène material culture were identified by Greek and Roman authors from the 5th century onwards as Keltoi ("Celts") and Galli ("Gauls"). Herodotus (iv.49) correctly placed Keltoi at the source of the Ister/Danube, in the heartland of La Tène material culture: "The Ister flows right across Europe, rising in the country of the Celts".
Whether the usage of classical sources means that the whole of La Tène culture can be attributed to a unified Celtic people is difficult to assess; archaeologists have repeatedly concluded that language, material culture, and political affiliation do not necessarily run parallel. Frey (2004) notes that in the 5th century, "burial customs in the Celtic world were not uniform; rather, localised groups had their own beliefs, which, in consequence, also gave rise to distinct artistic expressions".
Material culture
La Tène metalwork in bronze, iron and gold, developing technologically out of Hallstatt culture, is stylistically characterized by inscribed and inlaid intricate spirals and interlace, on fine bronze vessels, helmets and shields, horse trappings, and elite jewelry, especially the neck rings called torcs and elaborate clasps called fibulae. It is characterized by elegant, stylized curvilinear animal and vegetal forms, allied with the Hallstatt traditions of geometric patterning.
The Early Style of La Tène art and culture mainly featured static, geometric decoration, while the transition to the Developed Style constituted a shift to movement-based forms, such as triskeles. Some subsets within the Developed Style contain more specific design trends, such as the recurrent serpentine scroll of the Waldalgesheim Style.
Initially La Tène people lived in open settlements dominated by the chieftains' hill forts. The development of towns, or oppida, appears in mid-La Tène culture. La Tène dwellings were carpenter-built rather than of masonry. La Tène peoples also dug ritual shafts in which votive offerings and even human sacrifices were cast. Severed heads appear to have held great power and were often represented in carvings. Burial sites included weapons, carts, and both elite and household goods, suggesting a strong belief in an afterlife.
Elaborate burials also reveal a wide network of trade. In Vix, France, an elite woman of the 6th century BCE was buried with a very large bronze "wine-mixer" made in Greece. Exports from La Tène cultural areas to the Mediterranean cultures were based on salt, tin, copper, amber, wool, leather, furs and gold. Artefacts typical of the La Tène culture were also discovered in stray finds as far afield as Scandinavia, Northern Germany, Poland and in the Balkans. It is therefore common to also talk of the "La Tène period" in the context of those regions even though they were never part of the La Tène culture proper, but connected to its core area via trade.
Type site
The La Tène type site is on the northern shore of Lake Neuchâtel, Switzerland, where the small river Thielle, connecting to another lake, enters Lake Neuchâtel. In 1857, prolonged drought lowered the waters of the lake.
On the northernmost tip of the lake, between the river and a point south of the village of Epagnier, Hansli Kopp, looking for antiquities for Colonel Frédéric Schwab, discovered several rows of wooden piles that still reached up into the water. From among these, Kopp collected about forty iron swords.
The Swiss archaeologist Ferdinand Keller published his findings in 1868 in his influential first report on the Swiss pile dwellings (Pfahlbaubericht). In 1863 he interpreted the remains as a Celtic village built on piles. Eduard Desor, a geologist from Neuchâtel, started excavations on the lakeshore soon afterwards. He interpreted the site as an armory, erected on platforms on piles over the lake and later destroyed by enemy action. Another interpretation accounting for the presence of cast iron swords that had not been sharpened, was of a site for ritual depositions.
With the first systematic lowering of the Swiss lakes from 1868 to 1883, the site fell completely dry. In 1880, Emile Vouga, a teacher from Marin-Epagnier, uncovered the wooden remains of two bridges (designated "Pont Desor" and "Pont Vouga") that crossed the little Thielle River (today a nature reserve), and the remains of five houses on the shore. After Vouga had finished, F. Borel, curator of the Marin museum, began to excavate as well. In 1885 the canton asked the Société d'Histoire of Neuchâtel to continue the excavations, the results of which were published by Vouga in the same year.
All in all, over 2,500 objects, mainly made from metal, have been excavated at La Tène. Weapons predominate: there are 166 swords (most without traces of wear), 270 lanceheads, and 22 shield bosses, along with 385 brooches, tools, and parts of chariots. Numerous human and animal bones were found as well. The site was used from the 3rd century BCE, with a peak of activity around 200 BCE and abandonment by about 60 BCE. Interpretations of the site vary: some scholars believe the bridge was destroyed by high water, while others see it as a place of sacrifice after a successful battle (there are almost no female ornaments).
An exhibition marking the 150th anniversary of the discovery of the La Tène site opened in 2007 at the Musée Schwab in Biel/Bienne, Switzerland, then Zürich in 2008 and Mont Beuvray in Burgundy in 2009.
Artifacts
Some outstanding La Tène artifacts are:
Mšecké Žehrovice Head, a stone head from the modern Czech Republic
A life-sized sculpture of a warrior that stood above the Glauberg burials
Chariot burial found at La Gorge Meillet (St-Germain-en-Laye: Musée des Antiquités Nationales)
Basse Yutz Flagons, 5th century BCE
Agris Helmet, with gold covering, c. 350 BCE
Waldalgesheim chariot burial, Bad Kreuznach, Germany, late 4th century BCE, Rheinisches Landesmuseum Bonn; the "Waldalgesheim phase/style" of the art takes its name from the jewellery found here.
A gold-and-bronze model of an oak tree (3rd century BCE) found at the Oppidum of Manching.
Sculptures from Roquepertuse, a sanctuary in the south of France
The silver Gundestrup cauldron (2nd or 1st century BCE), found ritually broken in a peat bog near Gundestrup, Denmark, but probably made near the Black Sea, perhaps in Thrace. (National Museum of Denmark, Copenhagen)
Battersea Shield (350–50 BCE), found in London in the Thames, made of bronze with red enamel. (British Museum, London)
Waterloo Helmet, 150–50 BCE, Thames
"Witham Shield" (4th century BCE). (British Museum, London)
Torrs Pony-cap and Horns, from Scotland
Cordoba Treasure
Turoe stone in Galway and Killycluggin Stone in Cavan, Ireland
Great Torc from Snettisham, 100–75 BCE, gold, the most elaborate of the British style of torcs
Meyrick Helmet, post-conquest Roman helmet shape, with La Tène decoration
Noric steel
Genetics
A genetic study published in PLOS One in December 2018 examined 45 individuals buried at a La Tène necropolis in Urville-Nacqueville, France. The people buried there were identified as Gauls. The mtDNA of the examined individuals belonged primarily to haplotypes of H and U. They were found to be carrying a large amount of steppe ancestry, and to have been closely related to peoples of the preceding Bell Beaker culture, suggesting genetic continuity between Bronze Age and Iron Age France. Significant gene flow with Great Britain and Iberia was detected. The results of the study partially supported the notion that French people are largely descended from the Gauls.
A genetic study published in the Journal of Archaeological Science in October 2019 examined 43 maternal and 17 paternal lineages from the La Tène necropolis in Urville-Nacqueville, France, and 27 maternal and 19 paternal lineages from the La Tène tumulus of Gurgy Les Noisats near modern Paris, France. The examined individuals displayed strong genetic resemblance to peoples of the earlier Yamnaya culture, Corded Ware culture and Bell Beaker culture. They carried a diverse set of maternal lineages associated with steppe ancestry. The paternal lineages, on the other hand, were characterized by a "striking homogeneity", belonging entirely to haplogroups R and R1b, both of which are associated with steppe ancestry. The evidence suggested that the Gauls of the La Tène culture were patrilineal and patrilocal, which is in agreement with archaeological and literary evidence.
A genetic study published in the Proceedings of the National Academy of Sciences of the United States of America in June 2020 examined the remains of 25 individuals ascribed to the La Tène culture. The nine examples of individual Y-DNA extracted were determined to belong to the paragroups or subclades of haplogroups R1b1a1a2 (R-M269; three examples), R1b1a1a2a1a2c1a1a1a1a1 (R-M222), R1b1 (R-L278), R1b1a1a (R-P297), I1 (I-M253), E1b1b (E-M215), or other, unspecified, subclades of haplogroup R. The 25 samples of mtDNA extracted were determined to belong to various subclades of haplogroups H, HV, U, K, J, V and W. The examined individuals of the Hallstatt culture and La Tène culture were genetically highly homogeneous and displayed continuity with the earlier Bell Beaker culture. They carried about 50% steppe-related ancestry.
A genetic study published in iScience in April 2022 examined 49 genomes from 27 sites in Bronze Age and Iron Age France. The study found evidence of strong genetic continuity between the two periods, particularly in southern France. The samples from northern and southern France were highly homogeneous, with northern samples displaying links to contemporary samples from Great Britain and Sweden, and southern samples displaying links to Celtiberians. The northern French samples were distinguished from the southern ones by elevated levels of steppe-related ancestry. R1b was by far the most dominant paternal lineage, while H was the most common maternal lineage. The Iron Age samples resembled those of modern-day populations of France, Great Britain and Spain. The evidence suggested that the Gauls of the La Tène culture largely evolved from local Bronze Age populations.
See also
Archaeology of Northern Europe
Iron Age Britain
Iron Age France
Iron Age Iberia
Jublains archeological site
Krakus Mound, Poland
Tasciaca
Notes
References
Garrow, Duncan (ed.), Rethinking Celtic Art, Oxbow Books, 2008, ISBN 9781842173183.
Green, Miranda, Celtic Art: Reading the Messages, The Everyman Art Library, 1996.
Laing, Lloyd and Jenifer, Art of the Celts, Thames and Hudson, London, 1992.
McIntosh, Jane, Handbook to Life in Prehistoric Europe, Oxford University Press (USA), 2009.
Megaw, Ruth and Vincent, Celtic Art, 2001.
Further reading
Cunliffe, Barry. The Ancient Celts. Oxford: Oxford University Press. 1997
Collis, John. The Celts: Origins, Myths, Invention. London: Tempus, 2003.
Kruta, Venceslas, La grande storia dei Celti. La nascita, l'affermazione, la decadenza, Newton & Compton, Roma, 2003 (492 pp. - a translation of Les Celtes, histoire et dictionnaire. Des origines à la romanisation et au christianisme, Robert Laffont, Paris, 2000, without the dictionary)
James, Simon. The Atlantic Celts. London: British Museum Press, 1999.
James, Simon & Rigby, Valery. Britain and the Celtic Iron Age. London: British Museum Press, 1997.
Reginelli Servais Gianna and Béat Arnold, La Tène, un site, un mythe, Hauterive : Laténium - Parc et musée d'archéologie de Neuchâtel, 2007, Cahiers d'archéologie romande de la Bibliothèque historique vaudoise, 3 vols,
External links
Charles Bergengren, Cleveland Institute of Art, 1999: illustrations of La Tène artifacts
Les Premieres Villes de l'Ouest - Exhibition on La Tene period towns and cities
La Tène Archaeological Sites in Romania
Celtic archaeological cultures
Iron Age cultures of Europe
Archaeological cultures of Europe
Archaeological cultures in Austria
Archaeological cultures in Belgium
Archaeological cultures in Bulgaria
Archaeological cultures in Croatia
Archaeological cultures in the Czech Republic
Archaeological cultures in England
Archaeological cultures in France
Archaeological cultures in Germany
Archaeological cultures in Hungary
Archaeological cultures in Ireland
Archaeological cultures in the Netherlands
Archaeological cultures in Portugal
Archaeological cultures in Romania
Archaeological cultures in Scotland
Archaeological cultures in Serbia
Archaeological cultures in Slovakia
Archaeological cultures in Slovenia
Archaeological cultures in Spain
Archaeological cultures in Switzerland
Archaeological cultures in Turkey
Primary source
In the study of history as an academic discipline, a primary source (also called an original source) is an artifact, document, diary, manuscript, autobiography, recording, or any other source of information that was created at the time under study. It serves as an original source of information about the topic. Similar definitions can be used in library science and other areas of scholarship, although different fields have somewhat different definitions.
In journalism, a primary source can be a person with direct knowledge of a situation, or a document written by such a person.
Primary sources are distinguished from secondary sources, which cite, comment on, or build upon primary sources. Generally, accounts written after the fact with the benefit of hindsight are secondary. A secondary source may also be a primary source depending on how it is used. For example, a memoir would be considered a primary source in research concerning its author or about their friends characterized within it, but the same memoir would be a secondary source if it were used to examine the culture in which its author lived. "Primary" and "secondary" should be understood as relative terms, with sources categorized according to specific historical contexts and what is being studied.
Significance of source classification
History
In scholarly writing, an important objective of classifying sources is to determine their independence and reliability. In contexts such as historical writing, it is almost always advisable to use primary sources; "if none are available, it is only with great caution that [the author] may proceed to make use of secondary sources." Sreedharan believes that primary sources have the most direct connection to the past and that they "speak for themselves" in ways that cannot be captured through the filter of secondary sources.
Other fields
In scholarly writing, the objective of classifying sources is to determine the independence and reliability of sources. Though the terms primary source and secondary source originated in historiography as a way to trace the history of historical ideas, they have been applied to many other fields. For example, these ideas may be used to trace the history of scientific theories, literary elements, and other information that is passed from one author to another.
In scientific literature, a primary source, or the "primary literature", is the original publication of a scientist's new data, results, and theories. In political history, primary sources are documents such as official reports, speeches, pamphlets, posters, or letters by participants, official election returns, and eyewitness accounts. In the history of ideas or intellectual history, the main primary sources are books, essays, and letters written by intellectuals; these intellectuals may include historians, whose books and essays are therefore considered primary sources for the intellectual historian, though they are secondary sources in their own topical fields. In religious history, the primary sources are religious texts and descriptions of religious ceremonies and rituals.
A study of cultural history could include fictional sources such as novels or plays. In a broader sense primary sources also include artifacts like photographs, newsreels, coins, paintings or buildings created at the time. Historians may also take archaeological artifacts and oral reports and interviews into consideration. Written sources may be divided into three types.
Narrative sources or literary sources tell a story or message. They are not limited to fictional sources (which can be sources of information for contemporary attitudes) but include diaries, films, biographies, leading philosophical works, and scientific works.
Diplomatic sources include charters and other legal documents which usually follow a set format.
Social documents are records created by organizations, such as registers of births and tax records.
In historiography, when the study of history is subject to historical scrutiny, a secondary source becomes a primary source. For a biography of a historian, that historian's publications would be primary sources. Documentary films can be considered a secondary source or primary source, depending on how much the filmmaker modifies the original sources.
The Lafayette College Library provides a synopsis of primary sources in several areas of study:
The definition of a primary source varies depending upon the academic discipline and the context in which it is used.
In the humanities, a primary source could be defined as something that was created either during the time period being studied or afterward by individuals reflecting on their involvement in the events of that time.
In the social sciences, the definition of a primary source would be expanded to include numerical data that has been gathered to analyze relationships between people, events, and their environment.
In the natural sciences, a primary source could be defined as a report of original findings or ideas. These sources often appear in the form of research articles with sections on methods and results.
Finding primary sources
Although many primary sources remain in private hands, others are located in archives, libraries, museums, historical societies, and special collections. These can be public or private. Some are affiliated with universities and colleges, while others are government entities. Materials relating to one area might be located in many different institutions. These can be distant from the original source of the document. For example, the Huntington Library in California houses many documents from the United Kingdom.
In the US, digital copies of primary sources can be retrieved from a number of places. The Library of Congress maintains several digital collections where they can be retrieved. Some examples are American Memory and Chronicling America. The National Archives and Records Administration also has digital collections in Digital Vaults. The Digital Public Library of America searches across the digitized primary source collections of many libraries, archives, and museums. The Internet Archive also has primary source materials in many formats.
In the UK, the National Archives provides a consolidated search of its own catalog and a wide variety of other archives listed on the Access to Archives index. Digital copies of various classes of documents at the National Archives (including wills) are available from DocumentsOnline. Most of the available documents relate to England and Wales. Some digital copies of primary sources are available from the National Archives of Scotland. Many County Record Offices collections are included in Access to Archives, while others have their own online catalogs. Many County Record Offices will supply digital copies of documents.
In other regions, Europeana has digitized materials from across Europe while the World Digital Library and Flickr Commons have items from all over the world. Trove has primary sources from Australia.
Most primary source materials are not digitized and may only be represented online with a record or finding aid. Both digitized and not digitized materials can be found through catalogs such as WorldCat, the Library of Congress catalog, the National Archives catalog, and so on.
Using primary sources
History as an academic discipline is based on primary sources, as evaluated by the community of scholars, who report their findings in books, articles, and papers. Arthur Marwick says "Primary sources are absolutely fundamental to history." Ideally, a historian will use all available primary sources that were created by the people involved at the time being studied. In practice, some sources have been destroyed, while others are not available for research. Perhaps the only eyewitness reports of an event may be memoirs, autobiographies, or oral interviews taken years later. Sometimes the only evidence relating to an event or person in the distant past was written or copied decades or centuries later. Manuscripts that are sources for classical texts can be copies of documents, or fragments of copies of documents. This is a common problem in classical studies, where sometimes only a summary of a book or letter has survived. These potential difficulties with primary sources mean that history is usually taught in schools using secondary sources.
Historians studying the modern period with the intention of publishing an academic article prefer to go back to available primary sources and to seek new (in other words, forgotten or lost) ones. Primary sources, whether accurate or not, offer new input into historical questions, and most modern history revolves around heavy use of archives and special collections for the purpose of finding useful primary sources. A work on history is not likely to be taken seriously as scholarship if it cites only secondary sources, as this does not indicate that original research has been done.
However, primary sources – particularly those from before the 20th century – may have hidden challenges. "Primary sources, in fact, are usually fragmentary, ambiguous, and very difficult to analyze and interpret." Obsolete meanings of familiar words and social context are among the traps that await the newcomer to historical studies. For this reason, the interpretation of primary texts is typically taught as part of an advanced college or postgraduate history course, although advanced self-study or informal training is also possible.
Strengths and weaknesses
In many fields and contexts, such as historical writing, it is almost always advisable to use primary sources if possible, and "if none are available, it is only with great caution that [the author] may proceed to make use of secondary sources." In addition, primary sources avoid the problem inherent in secondary sources in which each new author may distort and put a new spin on the findings of prior cited authors.
However, a primary source is not necessarily more of an authority or better than a secondary source. There can be bias and tacit unconscious views that twist historical information.
Errors may be corrected in secondary sources, which are often subjected to peer review, can be well documented, and are often written by historians working in institutions where methodological accuracy is important to the future of the author's career and reputation. Historians consider the accuracy and objectivity of the primary sources they use, and subject both primary and secondary sources to a high level of scrutiny. A primary source such as a journal entry (or its online equivalent, a blog), at best, may reflect only one individual's opinion on events, which may or may not be truthful, accurate, or complete.
Participants and eyewitnesses may misunderstand events or distort their reports, deliberately or not, to enhance their own image or importance. Such effects can increase over time, as people create a narrative that may not be accurate. For any source, primary or secondary, it is important for the researcher to evaluate the amount and direction of bias. As an example, a government report may be an accurate and unbiased description of events, but it may be censored or altered for propaganda or cover-up purposes. The facts can be distorted to present the opposing sides in a negative light. Barristers are taught that evidence in a court case may be truthful but may still be distorted to support or oppose the position of one of the parties.
Classifying sources
Many sources can be considered either primary or secondary, depending on the context in which they are examined. Moreover, the distinction between primary and secondary sources is subjective and contextual, so that precise definitions are difficult to make. A book review, when it contains the opinion of the reviewer about the book rather than a summary of the book, becomes a primary source.
If a historical text discusses old documents to derive a new historical conclusion, it is considered to be a primary source for the new conclusion. Examples in which a source can be both primary and secondary include an obituary or a survey of several volumes of a journal counting the frequency of articles on a certain topic.
Whether a source is regarded as primary or secondary in a given context may change, depending upon the present state of knowledge within the field. For example, if a document refers to the contents of a previous but undiscovered letter, that document may be considered "primary", since it is the closest known thing to an original source; but if the letter is later found, it may then be considered "secondary".
In some instances, the reason for identifying a text as the "primary source" may devolve from the fact that no copy of the original source material exists, or that it is the oldest extant source for the information cited.
Forgeries
Historians must occasionally contend with forged documents that purport to be primary sources. These forgeries have usually been constructed with a fraudulent purpose, such as promulgating legal rights, supporting false pedigrees, or promoting particular interpretations of historic events. The investigation of documents to determine their authenticity is called diplomatics.
For centuries, popes used the forged Donation of Constantine to bolster the Papacy's secular power. Among the earliest forgeries are false Anglo-Saxon charters, a number of 11th- and 12th-century forgeries produced by monasteries and abbeys to support claims to land where the original document had been lost or never existed. One particularly unusual forgery of a primary source was perpetrated by Sir Edward Dering, who placed false monumental brasses in a parish church. In 1983, Hugh Trevor-Roper authenticated the Hitler Diaries, which were later proved to be forgeries. Recently, forged documents have been placed within the UK National Archives in the hope of establishing a false provenance. However, historians dealing with recent centuries rarely encounter forgeries of any importance.
See also
Examples
Monograph
Others
Archival research
Historiography
Source criticism
Source literature
Source text
Historical document
Secondary source
Tertiary source
Original research
UNISIST model
Scientific journalism
Scholarly method
References
Bibliography
External links
Primary sources repositories
Primary Sources from World War One and Two: War-letters.com Database of mailed letters to and from soldiers during major world conflicts from the Napoleonic Wars to World War Two.
Fold3.com – Over 60,000,000 Primary Source Documents created by Ancestry.com
A listing of over 5000 websites describing holdings of manuscripts, archives, rare books, historical photographs, and other primary sources from the University of Idaho.
Find primary sources in the collections of major research libraries using ArchiveGrid
Shapell Manuscript Foundation Digitalized Primary Sources and Historical Artifacts from 1786 – present
Sacred Texts.com A collection of religious texts and books from the Internet Sacred Text Archive
All sources repositories
Wikisource – The Free Library – the Wikimedia Foundation project that collects, edits, and catalogs all source texts
Essays and descriptions of primary, secondary, and other sources
"Research Using Primary Sources" from the University of Maryland Libraries (accessed 16 Jul 2013)
"How to distinguish between primary and secondary sources" from the University of California, Santa Cruz Library
Joan of Arc: Primary Sources Series – Example of a publication focusing on primary source documents- the Historical Association of Joan of Arc Studies
Finding Historical Primary Sources from the University of California, Berkeley library
"Primary versus secondary sources" from the Bowling Green State University library
Finding primary sources in world history from the Center for History and New Media, George Mason University
Guide to Terminology used when describing archival and other primary source materials on Archivopedia
Thehistorysite.org Links to many online history archival sources.
Historiography
Information science
History resources
Region
In geography, regions, otherwise referred to as areas, zones, lands or territories, are portions of the Earth's surface that are broadly divided by physical characteristics (physical geography), human impact characteristics (human geography), and the interaction of humanity and the environment (environmental geography). Geographic regions and sub-regions are mostly described by their imprecisely defined, and sometimes transitory, boundaries, except in human geography, where jurisdiction areas such as national borders are defined in law. More confined or well-bounded portions are called locations or places.
Apart from the global continental regions, there are also hydrospheric and atmospheric regions that cover the oceans, and discrete climates above the land and water masses of the planet. The land and water global regions are divided into subregions geographically bounded by large geological features that influence large-scale ecologies, such as plains and features.
As a way of describing spatial areas, the concept of regions is important and widely used among the many branches of geography, each of which can describe areas in regional terms. For example, ecoregion is a term used in environmental geography, cultural region in cultural geography, bioregion in biogeography, and so on. The field of geography that studies regions themselves is called regional geography. A region is an area or division, especially part of a country or the world, that has definable characteristics but not always fixed boundaries.
In the fields of physical geography, ecology, biogeography, zoogeography, and environmental geography, regions tend to be based on natural features such as ecosystems or biotopes, biomes, drainage basins, natural regions, mountain ranges, soil types. Where human geography is concerned, the regions and subregions are described by the discipline of ethnography.
Globalization
Global regions are distinguishable from space, and are therefore clearly distinguished by the two basic terrestrial environments, land and water. However, they were generally recognized as such much earlier by terrestrial cartography because of their impact on human geography. They are divided into the largest of land regions, known as continents, and the largest of water regions, known as oceans. There are also significant regions that do not belong to either classification, such as archipelago regions, which are littoral regions, or earthquake regions, which are defined in geology.
Continental regions
Continental regions are usually based on broad experiences in human history and attempt to reduce very large areas to more manageable regionalization for the purpose of the study. As such they are conceptual constructs, usually lacking distinct boundaries. The oceanic division into maritime regions is used in conjunction with the relationship to the central area of the continent, using directions of the compass.
Some continental regions are defined by the major continental feature of their identity, such as the Amazon basin, or the Sahara, which both occupy a significant percentage of their respective continental land area.
To a large extent, major continental regions are mental constructs created by considering an efficient way to define large areas of the continents. For the most part, images of the world are derived as much from academic studies as from all types of media or from personal experience of global exploration. They are a matter of humanity's collective knowledge of its own planet and are attempts to better understand its environments.
Regional geography
Regional geography is a branch of geography that studies regions of all sizes across the Earth. It has a prevailing descriptive character. The main aim is to understand or define the uniqueness or character of a particular region, which consists of natural as well as human elements. Attention is paid also to regionalization, which covers the proper techniques of space delimitation into regions.
Regional geography is also considered as a certain approach to study in geographical sciences (similar to quantitative or critical geographies; for more information, see history of geography).
Human geography
Human geography is a branch of geography that focuses on the study of patterns and processes that shape human interaction with various discrete environments. It encompasses human, political, cultural, social, and economic aspects among others that are often clearly delineated. While the major focus of human geography is not the physical landscape of the Earth (see physical geography), it is hardly possible to discuss human geography without referring to the physical landscape on which human activities are being played out, and environmental geography is emerging as a link between the two. Regions of human geography can be divided into many broad categories:
Historical regions
The field of historical geography involves the study of human history as it relates to places and regions, or the study of how places and regions have changed over time.
D. W. Meinig, a historical geographer of America, describes many historical regions in his book The Shaping of America: A Geographical Perspective on 500 Years of History. For example, in identifying European "source regions" in early American colonization efforts, he defines and describes the Northwest European Atlantic Protestant Region, which includes sub-regions such as the "Western Channel Community", which itself is made of sub-regions such as the English West Country of Cornwall, Devon, Somerset, and Dorset.
In describing historic regions of America, Meinig writes of "The Great Fishery" off the coast of Newfoundland and New England, an oceanic region that includes the Grand Banks. He rejects regions traditionally used in describing American history, like New France, "West Indies", the Middle Colonies, and the individual colonies themselves (Province of Maryland, for example). Instead he writes of "discrete colonization areas", which may be named after colonies but rarely adhere strictly to political boundaries. Among other historic regions of this type, he writes about "Greater New England" and its major sub-regions of "Plymouth", "New Haven shores" (including parts of Long Island), "Rhode Island" (or "Narragansett Bay"), "the Piscataqua", "Massachusetts Bay", "Connecticut Valley", and to a lesser degree, regions in the sphere of influence of Greater New England, "Acadia" (Nova Scotia), "Newfoundland and The Fishery/The Banks".
Other examples of historical regions are Iroquoia, Ohio Country, Illinois Country, and Rupert's Land.
In Russia, historical regions include Siberia and the Russian North, as well as the Ural Mountains. These regions had an identity that developed from the early modern period and led to Siberian regionalism.
Tourism region
A tourism region is a geographical region that has been designated by a governmental organization or tourism bureau as having common cultural or environmental characteristics. These regions are often named after a geographical, former, or current administrative region or may have a name created for tourism purposes. The names often evoke certain positive qualities of the area and suggest a coherent tourism experience to visitors. Countries, states, provinces, and other administrative regions are often carved up into tourism regions to facilitate attracting visitors.
Some of the more famous tourism regions based on historical or current administrative regions include Tuscany in Italy and Yucatán in Mexico. Famous examples of regions created by a government or tourism bureau include the United Kingdom's Lake District and California's Wine Country.
Natural resource regions
Natural resources often occur in distinct regions. Natural resource regions can be a topic of physical geography or environmental geography, but they also have a strong element of human geography and economic geography. A coal region, for example, is a physical or geomorphological region, but its development and exploitation can make it into an economic and a cultural region. Examples of natural resource regions are the Rumaila Field, the oil field that lies along the border of Iraq and Kuwait and played a role in the Gulf War; the Coal Region of Pennsylvania, which is a historical region as well as a cultural, physical, and natural resource region; the South Wales Coalfield, which like Pennsylvania's coal region is a historical, cultural, and natural region; the Kuznetsk Basin, a similarly important coal-mining region in Russia; Kryvbas, the economic and iron-ore mining region of Ukraine; and the James Bay Project, a large region of Quebec where one of the largest hydroelectric systems in the world has been developed.
Religious regions
Sometimes a region associated with a religion is given a name, like Christendom, a term with medieval and renaissance connotations of Christianity as a sort of social and political polity. The term Muslim world is sometimes used to refer to the region of the world where Islam is dominant. These broad terms are somewhat vague when used to describe regions.
Within some religions there are clearly defined regions. The Roman Catholic Church, the Church of England, the Eastern Orthodox Church, and others, define ecclesiastical regions with names such as diocese, eparchy, ecclesiastical provinces, and parish.
For example, the United States is divided into 32 Roman Catholic ecclesiastical provinces. The Lutheran Church–Missouri Synod is organized into 33 geographic districts, which are subdivided into circuits (the Atlantic District (LCMS), for example). The Church of Jesus Christ of Latter-day Saints uses regions similar to dioceses and parishes, but uses terms like ward and stake.
Political regions
In the field of political geography, regions tend to be based on political units such as sovereign states; subnational units such as administrative regions, provinces, states (in the United States), counties, townships, territories, etc.; and multinational groupings, including formally defined units such as the European Union, the Association of Southeast Asian Nations, and NATO, as well as informally defined regions such as the Third World, Western Europe, and the Middle East.
Administrative regions
The word "region" is taken from the Latin regio (derived from regere, 'to rule'), and a number of countries have borrowed the term as the formal name for a type of subnational entity (e.g., the , used in Chile). In English, the word is also used as the conventional translation for equivalent terms in other languages (e.g., the область (oblast), used in Russia alongside a broader term регион).
The following countries use the term "region" (or its cognate) as the name of a type of subnational administrative unit:
Belgium (in French, région; in German, Region; the Dutch term gewest is often mistakenly translated as "regio")
Chad (région, effective from 2002)
Chile
Côte d'Ivoire
Denmark (effective from 2007)
England (not the United Kingdom as a whole)
Eritrea
France
Ghana
Guinea (région)
Guinea-Bissau (região)
Guyana
Hungary (régió)
Italy (regione)
Madagascar (région)
Mali (région)
Malta (reġjun)
Namibia
New Zealand
Peru
Portugal (região)
Philippines
Senegal
Tanzania
Thailand
Togo
Trinidad and Tobago (Regional Corporation)
The Canadian province of Québec also uses the term "administrative region" (région administrative).
Scotland had local government regions from 1975 to 1996.
In Spain the official name of the autonomous community of Murcia is Región de Murcia. Also, some single-province autonomous communities such as Madrid use the term región interchangeably with comunidad autónoma.
Two län (counties) in Sweden are officially called 'regions': Skåne and Västra Götaland, and there is currently a controversial proposal to divide the rest of Sweden into large regions, replacing the current counties.
The government of the Philippines uses the term "region" (in Filipino, rehiyon) when it is necessary to group provinces, the primary administrative subdivision of the country. This is also the case in Brazil, which groups its primary administrative divisions (estados; "states") into grandes regiões (greater regions) for statistical purposes, while Russia uses экономические районы (economic regions) in a similar way, as do Romania and Venezuela.
The government of Singapore makes use of the term "region" for its own administrative purposes.
The following countries use an administrative subdivision conventionally referred to as a region in English:
Bulgaria, which uses the област (oblast)
Greece, which uses the Περιφέρεια (periferia)
Russia, which uses the область (oblast'), and for some regions the край (krai)
Ukraine, which uses the область (oblast')
Slovakia (kraj)
China has five 自治区 (zìzhìqū) and two 特別行政區 (or 特别行政区; tèbiéxíngzhèngqū), which are translated as "autonomous region" and "special administrative region", respectively.
Local administrative regions
There are many relatively small regions based on local government agencies such as districts, agencies, or regions. They are all regions in the general sense of being bounded spatial units. Examples include electoral districts such as Washington's 6th congressional district and Tennessee's 1st congressional district; school districts such as Granite School District and Los Angeles Unified School District; economic districts such as the Reedy Creek Improvement District; metropolitan areas such as the Seattle metropolitan area; metropolitan districts such as the Metropolitan Water Reclamation District of Greater Chicago, the Las Vegas-Clark County Library District, and the Metropolitan Police Service of Greater London; and other local districts like the York Rural Sanitary District, the Delaware River Port Authority, the Nassau County Soil and Water Conservation District, and C-TRAN.
Traditional or informal regions
The traditional territorial divisions of some countries are also commonly rendered in English as "regions". These informal divisions do not form the basis of the modern administrative divisions of these countries, but still define and delimit local regional identity and sense of belonging. Examples are:
England
Finland
Japan
Korea
Norway (landsdeler)
Romania
Slovakia
United States
Functional regions
Functional regions are usually understood to be areas organised by horizontal functional relations (flows, interactions) that are maximised within a region and minimised across its borders, so that the principles of internal cohesiveness and external separation regarding spatial interactions are met (see, for instance, Farmer and Fotheringham, 2011; Klapka and Halas, 2016; Smart, 1974). A functional region is not an abstract spatial concept; to a certain extent it can be regarded as a reflection of the spatial behaviour of individuals in geographic space.
The functional region is conceived as a general concept: its inner structure, inner spatial flows, and interactions need not show any regular pattern, only self-containment. Self-containment remains the only crucial defining characteristic of a functional region. Nodal regions, functional urban regions, daily urban systems, local labour-market areas (LLMAs), and travel-to-work areas (TTWAs) are considered special instances of the general functional region that must fulfil specific conditions regarding, for instance, the character of the region-organising interaction or the presence of urban cores (Halas et al., 2015).
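To make the principle concrete, self-containment can be measured directly from an origin-destination flow matrix, such as commuting counts between zones. The following sketch in Python is illustrative only: the function name, the zone labels, and the flow counts are invented assumptions for this example, not part of any standard regionalization toolkit.

def self_containment(flows, region):
    """Share of all flows touching `region` that stay entirely inside it.

    flows  -- dict mapping (origin, destination) pairs to flow counts
    region -- set of zone labels forming the candidate region
    """
    # Flows whose origin and destination both lie inside the region.
    internal = sum(v for (o, d), v in flows.items()
                   if o in region and d in region)
    # Flows that touch the region at either end.
    total = sum(v for (o, d), v in flows.items()
                if o in region or d in region)
    return internal / total if total else 0.0

# Hypothetical commuting flows between four zones, A-D.
flows = {
    ("A", "A"): 900, ("A", "B"): 300, ("B", "A"): 250, ("B", "B"): 700,
    ("C", "C"): 800, ("C", "D"): 150, ("D", "C"): 120, ("D", "D"): 600,
    ("B", "C"): 40,  ("C", "B"): 30,
}

for region in ({"A", "B"}, {"C", "D"}):
    print(sorted(region), round(self_containment(flows, region), 3))

On these invented figures, both candidate regions keep over 95 percent of their flows internal, so each satisfies the self-containment principle; delineation methods for travel-to-work areas typically merge zones until such a ratio passes an agreed threshold.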
Military regions
In military usage, a region is shorthand for the name of a military formation larger than an Army Group and smaller than a Theater; the full name of the formation is Army Region. The size of an Army Region can vary widely but is generally somewhere between about 1 million and 3 million soldiers. Two or more Army Regions could make up a Theater. An Army Region is typically commanded by a full General (US four stars), a Field Marshal or General of the Army (US five stars), or a Generalissimo (Soviet Union); in the US Armed Forces, an Admiral (typically four stars) may also command a region. Due to its large size, this formation is rarely employed. Among the very few examples of an Army Region were the Eastern, Western, and Southern (mostly in Italy) fronts in Europe during World War II. The military map unit symbol for this echelon of formation (see Military organization and APP-6A) is identified with six Xs.
Media geography
Media geography is the study of spatio-temporal understandings of space as conveyed through media devices. Media has become pervasive, and nearly everyone consumes it in differing proportions and intensities. Spatial attributes are studied with the help of media outputs in the shape of images, which are contested in nature and pattern and from which politics is inseparable. Media geography thus offers a spatial understanding of the mediated image.
See also
Autonomous region
Committee of the Regions
Continent
Continental fragment
Euroregion
Field (geography)
Latin names of regions
Military district
Regional district
Regionalism (disambiguation)
Regional municipality
Subcontinent
Submerged continents
Subregion
Supercontinent
United Nations geoscheme
Notes
References
Bailey, Robert G. (1996) Ecosystem Geography. New York: Springer-Verlag.
Meinig, D.W. (1986). The Shaping of America: A Geographical Perspective on 500 Years of History, Volume 1: Atlantic America, 1492-1800. New Haven: Yale University Press.
Moinuddin, Shekh (2017) Mediascape and the State: A Geographical Interpretation of Image Politics in Uttar Pradesh, India. Netherlands: Springer.
Smith-Peter, Susan (2017) Imagining Russian Regions: Subnational Identity and Civil Society in Nineteenth-Century Russia. Leiden: Brill.
External links
Map and descriptions of hydrologic unit regions of the United States
Federal Standards for Delineation of Hydrologic Unit Boundaries
Physiographic regions of the United States
Geography
Geography terminology
Regional geography
Saga
Sagas are prose stories and histories, composed in Iceland and to a lesser extent elsewhere in Scandinavia.
The most famous saga-genre is the Íslendingasögur (sagas concerning Icelanders), which feature Viking voyages, migration to Iceland, and feuds between Icelandic families. However, sagas' subject matter is diverse, including pre-Christian Scandinavian legends; saints and bishops both from Scandinavia and elsewhere; Scandinavian kings and contemporary Icelandic politics; and chivalric romances either translated from Continental European languages or composed locally.
Sagas originated in the Middle Ages, but continued to be composed in the ensuing centuries. Whereas the dominant language of history-writing in medieval Europe was Latin, sagas were composed in the vernacular: Old Norse and its later descendants, primarily Icelandic.
While sagas are written in prose, they share some similarities with epic poetry, and often include stanzas or whole poems in alliterative verse embedded in the text.
Etymology and meaning of saga
The main meanings of the Old Norse word saga (plural sǫgur) are 'what is said, utterance, oral account, notification' and the sense used in this article: '(structured) narrative, story (about somebody)'. It is cognate with the English words say and saw (in the sense 'a saying', as in old saw), and the German Sage; but the modern English term saga was borrowed directly into English from Old Norse by scholars in the eighteenth century to refer to Old Norse prose narratives.
The word continues to be used in this sense in the modern Scandinavian languages: Icelandic saga (plural sögur), Faroese søga (plural søgur), Norwegian soge (plural soger), Danish saga (plural sagaer), and Swedish saga (plural sagor). It usually also has wider meanings such as 'history', 'tale', and 'story'. It can also be used of a genre of novels telling stories spanning multiple generations, or to refer to saga-inspired fantasy fiction. Swedish folksaga means folk tale or fairy tale, while konstsaga is the Swedish term for a fairy tale by a known author, such as Hans Christian Andersen. In Swedish historiography, the term sagokung, "saga king", is intended to be ambiguous, as it describes the semi-legendary kings of Sweden, who are known only from unreliable sources.
Genres
Norse sagas are generally classified as follows.
Kings' sagas
Kings' sagas (konungasögur) recount the lives of Scandinavian kings. They were composed in the twelfth to fourteenth centuries. A pre-eminent example is Heimskringla, probably compiled and composed by Snorri Sturluson. These sagas frequently quote verse, invariably occasional poetry and praise poetry in the form of skaldic verse.
Sagas of Icelanders and short tales of Icelanders
The Icelanders' sagas (Íslendingasögur), sometimes also called "family sagas" in English, are purportedly (and sometimes actually) stories of real events, which usually take place from around the settlement of Iceland in the 870s to the generation or two following the conversion of Iceland to Christianity in 1000. They are noted for frequently exhibiting a realistic style. It seems that stories from these times were passed on in oral form until they eventually were recorded in writing as Íslendingasögur, whose form was influenced both by these oral stories and by literary models in both Old Norse and other languages. The majority — perhaps two thirds of the medieval corpus — seem to have been composed in the thirteenth century, with the remainder in the fourteenth and fifteenth centuries. These sagas usually span multiple generations and often feature everyday people (e.g. Bandamanna saga) and larger-than-life characters (e.g. Egils saga). Key works of this genre have been viewed in modern scholarship as the highest-quality saga-writing. While primarily set in Iceland, the sagas follow their characters' adventures abroad, for example in other Nordic countries, the British Isles, northern France and North America. Some well-known examples include Njáls saga, Laxdæla saga and Grettis saga.
The material of the short tales of Icelanders (þættir or Íslendingaþættir) is similar to that of the Íslendingasögur, but in shorter form; they are often preserved as episodes about Icelanders in the kings' sagas.
Like kings' sagas, when sagas of Icelanders quote verse, as they often do, it is almost invariably skaldic verse.
Contemporary sagas
Contemporary sagas (samtíðarsögur or samtímasögur) are set in twelfth- and thirteenth-century Iceland, and were written soon after the events they describe. Most are preserved in the compilation Sturlunga saga, from around 1270–80, though some, such as Arons saga Hjörleifssonar are preserved separately. The verse quoted in contemporary sagas is skaldic verse.
According to historian Jón Viðar Sigurðsson, "Scholars generally agree that the contemporary sagas are rather reliable sources, based on the short time between the events and the recording of the sagas, normally twenty to seventy years... The main argument for this view on the reliability of these sources is that the audience would have noticed if the saga authors were slandering and not faithfully portraying the past."
Legendary sagas
Legendary sagas (fornaldarsögur) blend remote history, set on the Continent before the settlement of Iceland, with myth or legend. Their aim is usually to offer a lively narrative and entertainment. They often portray Scandinavia's pagan past as a proud and heroic history. Some legendary sagas quote verse — particularly Vǫlsunga saga and Heiðreks saga — and when they do it is invariably Eddaic verse.
Some legendary sagas overlap generically with the next category, chivalric sagas.
Chivalric sagas
Chivalric sagas (riddarasögur) are translations of Latin pseudo-historical works and French chansons de geste as well as Icelandic compositions in the same style. Norse translations of Continental romances seem to have begun in the first half of the thirteenth century; Icelandic writers seem to have begun producing their own romances in the late thirteenth century, with production peaking in the fourteenth century and continuing into the nineteenth.
While often translated from verse, sagas in this genre almost never quote verse; when they do, it is often unusual in form: for example, Jarlmanns saga ok Hermanns contains the first recorded quotation of a refrain from an Icelandic dance-song, and Þjalar-Jóns saga contains a metrically irregular riddle.
Saints' and bishops' sagas
Saints' sagas (heilagra manna sögur) and bishops' sagas (biskupa sögur) are vernacular Icelandic translations and compositions, to a greater or lesser extent influenced by saga-style, in the widespread genres of hagiography and episcopal biographies. The genre seems to have begun in the mid-twelfth century.
History
Icelandic sagas are based on oral traditions and much research has focused on what is real and what is fiction within each tale. The accuracy of the sagas is often hotly disputed.
Most of the medieval manuscripts which are the earliest surviving witnesses to the sagas were taken to Denmark and Sweden in the seventeenth century, but later returned to Iceland. Classical sagas were composed in the thirteenth century. Scholars once believed that these sagas were transmitted orally from generation to generation until scribes wrote them down in the thirteenth century. However, most scholars now believe the sagas were conscious artistic creations, based on both oral and written tradition. A study focusing on the items of clothing mentioned in the sagas concludes that the authors attempted to create a historic "feel" by dressing the characters in what was at the time thought to be old-fashioned clothing; this clothing, however, is not contemporary with the events of the sagas, being a closer match to the clothing worn in the 12th century. Only at the start of the 20th century were the tales of the voyages to North America (modern-day Canada) authenticated.
Most sagas of Icelanders take place in the period 930–1030, which is called söguöld (Age of the Sagas) in Icelandic history. The kings' sagas, bishops' sagas, and contemporary sagas have their own time frames. Most were written down between 1190 and 1320; some existed as oral traditions long before, others are pure fiction, and for some the sources are known: the author of King Sverrir's saga had met the king and used him as a source.
While sagas are generally anonymous, a distinctive literary movement in the 14th century involves sagas, mostly on religious topics, with identifiable authors and a distinctive Latinate style. Associated with Iceland's northern diocese of Hólar, this movement is known as the North Icelandic Benedictine School (Norðlenski Benediktskólinn).
The vast majority of texts referred to today as "sagas" were composed in Iceland. One exception is Þiðreks saga, translated/composed in Norway; another is Hjalmars och Hramers saga, a post-medieval forgery composed in Sweden. While the term saga is usually associated with medieval texts, sagas — particularly in the legendary and chivalric saga genres — continued to be composed in Iceland on the pattern of medieval texts into the nineteenth century.
Explanations for saga writing
Icelanders produced a high volume of literature relative to the size of the population. Historians have proposed various theories for the high volume of saga writing.
Early, nationalist historians argued that the ethnic characteristics of the Icelanders were conducive to a literary culture, but these types of explanations have fallen out of favor with academics in modern times. It has also been proposed that the Icelandic settlers were so prolific at writing because they wished to capture their settler history. Historian Gunnar Karlsson, however, does not find that explanation reasonable, given that other settler communities have not been as prolific as the early Icelanders were.
Pragmatic explanations were once also favoured: it has been argued that a combination of readily available parchment (due to extensive cattle farming and the necessity of culling before winter) and long winters encouraged Icelanders to take up writing.
More recently, Icelandic saga-production has been seen as motivated more by social and political factors.
The unique nature of the political system of the Icelandic Commonwealth created incentives for aristocrats to produce literature, offering a way for chieftains to create and maintain social differentiation between themselves and the rest of the population. Gunnar Karlsson and Jesse Byock argued that the Icelanders wrote the sagas as a way to establish commonly agreed norms and rules in the decentralized Icelandic Commonwealth by documenting past feuds, while Iceland's peripheral location put it out of reach of the continental kings of Europe, who therefore could not ban such subversive forms of literature. Because new principalities lacked internal cohesion, a leader typically produced sagas "to create or enhance amongst his subjects or followers a feeling of solidarity and common identity by emphasizing their common history and legends". Leaders from old and established principalities did not produce any sagas, as they were already cohesive political units.
Later (late thirteenth- and fourteenth-century) saga-writing was motivated by the desire of the Icelandic aristocracy to maintain or reconnect links with the Nordic countries by tracing the ancestry of Icelandic aristocrats to well-known kings and heroes to which the contemporary Nordic kings could also trace their origins.
Editions and translations
The corpus of Old Norse sagas is gradually being edited in the Íslenzk fornrit series, which covers all the Íslendingasögur and a growing range of other ones. Where available, the Íslenzk fornrit edition is usually the standard one. The standard edition of most of the chivalric sagas composed in Iceland is by Agnete Loth.
A list, intended to be comprehensive, of translations of Icelandic sagas is provided by the National Library of Iceland's Bibliography of Saga Translations.
Popular culture
Many modern artists working in different creative fields have drawn inspiration from the sagas. Among some well-known writers, for example, who adapted saga narratives in their works are Poul Anderson, Laurent Binet, Margaret Elphinstone, Friedrich de la Motte Fouqué, Gunnar Gunnarsson, Henrik Ibsen, Halldór Laxness, Ottilie Liljencrantz, Henry Wadsworth Longfellow, George Mackay Brown, William Morris, Adam Oehlenschläger, Robert Louis Stevenson, August Strindberg, Rosemary Sutcliff, Esaias Tegnér, J.R.R. Tolkien, and William T. Vollmann.
See also
Prose Edda
Beowulf
References and notes
Sources
Primary:
The Skaldic Project, An international project to edit the corpus of medieval Norse-Icelandic skaldic poetry
Other:
Clover, Carol J. et al. Old Norse-Icelandic Literature: A Critical Guide (University of Toronto Press, 2005)
Gade, Kari Ellen (ed.) Poetry from the Kings' Sagas 2: From c. 1035 to c. 1300 (Brepols Publishers, 2009)
Gordon, E. V. (ed.) An Introduction to Old Norse (Oxford University Press, 2nd ed., 1981)
Jakobsson, Ármann; Fredrik Heinemann (trans.) A Sense of Belonging: Morkinskinna and Icelandic Identity, c. 1220 (Syddansk Universitetsforlag, 2014)
Jakobsson, Ármann, "Icelandic sagas" (The Oxford Dictionary of the Middle Ages, 2nd ed., Robert E. Bjork, 2010)
McTurk, Rory (ed.) A Companion to Old Norse-Icelandic Literature and Culture (Wiley-Blackwell, 2005)
Ross, Margaret Clunies, The Cambridge Introduction to the Old Norse-Icelandic Saga (Cambridge University Press, 2010)
Thorsson, Örnólfur, The Sagas of Icelanders (Penguin, 2001)
Whaley, Diana (ed.) Poetry from the Kings' Sagas 1: From Mythical Times to c. 1035 (Brepols Publishers, 2012)
Further reading
In Norwegian:
Haugen, Odd Einar Handbok i norrøn filologi (Bergen: Fagbokforlaget, 2004)
External links
Icelandic Saga Database – The Icelandic sagas in the original old Norse along with translations into many languages
Old Norse Prose and Poetry
The Icelandic sagas at Netútgáfan
Medieval literature
Icelandic literature
North Germanic languages
Old Norse literature
Traditionalist conservatism
Traditionalist conservatism, often known as classical conservatism, is a political and social philosophy that emphasizes the importance of transcendent moral principles, manifested through certain posited natural laws to which it is claimed society should adhere. It is one of many different forms of conservatism. Traditionalist conservatism, as known today, is rooted in Edmund Burke's political philosophy, which represented a combination of Whiggism and Jacobitism, as well as the similar views of Joseph de Maistre, who blamed the rationalist rejection of Christianity during the preceding decades for the Reign of Terror which followed the French Revolution. Traditionalists value social ties and the preservation of ancestral institutions above what they perceive as excessive rationalism and individualism. One of the first uses of the term "conservatism" came around 1818 with the monarchist newspaper Le Conservateur, founded by François-René de Chateaubriand with the help of Louis de Bonald.
The modern concepts of nation, culture, custom, convention, religious roots, language revival, and tradition are heavily emphasized in traditionalist conservatism. Theoretical reason is regarded as of secondary importance to practical reason. The state is also viewed as a social endeavor with spiritual and organic characteristics. Traditionalists think that any positive change emerges from within a community's traditions rather than from a complete and deliberate break with the past. Leadership, authority, and hierarchy are seen as natural to humans. Traditionalism, in the forms of Jacobitism, the Counter-Enlightenment and early Romanticism, arose in Europe during the 18th century as a backlash against the Enlightenment and the English and French Revolutions. More recent forms have included early German Romanticism, Carlism, and the Gaelic revival. Traditionalist conservatism began to establish itself as an intellectual and political force in the mid-20th century.
Key principles
Religious faith and natural law
A number of traditionalist conservatives embrace high church Christianity (e.g., T. S. Eliot, an Anglo-Catholic; Russell Kirk, a Roman Catholic; Rod Dreher, an Eastern Orthodox Christian). Another traditionalist who has stated his faith tradition publicly is Caleb Stegall, an evangelical Protestant. A number of conservative mainline Protestants are also traditionalists, such as Peter Hitchens and Roger Scruton, and some traditionalists are Jewish, such as the late Will Herberg, Irving Louis Horowitz, Mordecai Roshwald, and Paul Gottfried. A small portion of traditionalists are also Muslim, such as Mohammed Hijab.
Natural law is championed by Thomas Aquinas in the Summa Theologiae. There, he affirms the principle of noncontradiction ("the same thing cannot be affirmed and denied at the same time") as the first principle of theoretical reason, and the precept that "good is to be done and pursued and evil avoided" as the first principle of practical reason, that which precedes and determines one's actions. Central to medieval Christian philosophy is the appreciation of the concept of the summum bonum or "highest good"; only through silent contemplation is someone able to achieve the idea of the good. Natural law was first developed to some extent in Aristotle's work, was referenced and affirmed in the works of Cicero, and was further developed by the Christian philosopher Albert the Great. This is not meant to imply that traditionalist conservatives must be Thomists and embrace a robustly Thomistic natural law theory. Individuals who embrace non-Thomistic understandings of natural law rooted in, e.g., non-Aristotelian accounts affirmed in segments of Greco-Roman, patristic, medieval, and Reformation thought can identify with traditionalist conservatism.
Tradition and custom
Traditionalists think that tradition and custom should guide man and his worldview, as the name implies. Each generation inherits its ancestors' experience and culture, which it transmits to its offspring through custom and precedent. Edmund Burke noted that "the individual is foolish, but the species is wise." Furthermore, according to John Kekes, "tradition represents for conservatives a continuum enmeshing the individual and social, and is immune to reasoned critique." Traditional conservatism typically prefers practical reason to theoretical reason.
Conservatism, it has been argued, is based on living tradition rather than abstract political thinking. Within conservatism, political journalist Edmund Fawcett argues the existence of two strains of conservative thought, a flexible conservatism associated with Edmund Burke (which allows for limited reform), and an inflexible conservatism associated with Joseph de Maistre (which is more reactionary).
Some commentators break flexible conservatism down further, contrasting a "pragmatic conservatism", which remains quite skeptical of abstract theoretical reason, with a "rational conservatism", which lacks such skepticism and simply favors some sort of hierarchy as sufficient.
Hierarchy, organicism, and authority
Traditionalist conservatives believe that human society is essentially hierarchical (i.e., it always involves various interdependent inequalities, degrees, and classes) and that political structures that recognize this fact prove the most just, thriving, and generally beneficial. Hierarchy allows for the preservation of the whole community simultaneously, instead of protecting one part at the expense of the others.
Organicism also characterizes conservative thought. Edmund Burke notably viewed society from an organicist standpoint, as opposed to a more mechanistic view developed by liberal thinkers. Two concepts play a role in organicism in conservative thought:
The internal elements of the organic society cannot be randomly reconfigured (similar to a living creature).
The organic society is based upon natural needs and instincts, rather than that of a new ideological blueprint conceived by political theorists.
Traditional authority is a common tenet of conservatism, albeit expressed in different forms. Alexandre Kojève distinguished between two forms of traditional authority: the father (fathers, priests, monarchs) and the master (aristocrats, military commanders). Obedience to said authority, whether familial or religious, continues to be a central tenet of conservatism to this day.
Integralism and divine law
Integralism, typically a Catholic idea but also a broader religious one, asserts that faith and religious principles ought to be the basis for public law and policy when possible. The goal of such a system is to integrate religious authority with political power. While integralist principles have been sporadically associated with traditionalism, integralism was largely popularized by the works of Joseph de Maistre.
Agrarianism
The countryside, as well as the values associated with it, is greatly valued (sometimes even being romanticized, as in pastoral poetry). Agrarian ideals (such as conserving small family farms, open land, natural resource conservation, and land stewardship) are important to certain traditionalists' conception of rural life. Louis de Bonald wrote a short piece comparing agricultural and industrial society.
Family structure
The importance of proper family structures is a common value expressed in conservatism. The concept of traditional morality often coalesces with familialism and family values, viewed as the bedrock of society within traditionalist thought. Louis de Bonald wrote a piece on marital dissolution, On Divorce (1802), outlining his opposition to the practice. Bonald stated that broader human society was composed of three subunits: religious society (the church), domestic society (the family), and public society (the state). He added that since the family made up one of these core categories, divorce represented an assault on the social order.
Morality
Morality, specifically traditional moral values, is a common area of importance within traditional conservatism, going back to Edmund Burke. Burke believed that a notion of sensibility was at the root of man's moral intuition. Furthermore, he theorized that divine moral law was both transcendent and immanent within humans. Moralism as a movement still largely exists within mainstream conservative circles, with a focus on inherent or deontological suppositions. While moral discussions exist across the political aisle, conservatism is distinct for including notions of purity-based reasoning. The type of morality attributed to Edmund Burke is referred to by some as moral traditionalism.
Communitarianism
Communitarianism is an ideology that broadly prioritizes the community over the individual's freedoms. Joseph de Maistre was notably against individualism, and blamed the destructive nature of the French Revolution on Rousseau's individualism. Some argue that the communitarian ethic has considerable overlap with the conservative movement, although the two remain distinct. While communitarians may draw upon similar elements of moral infrastructure to make their arguments, the communitarian opposition to liberalism is more limited than that of conservatives, and the communitarian prescription for society is more limited in scope than that of social conservatives. The term is typically used in two different senses: philosophical communitarianism, which rejects liberal precepts and atomistic theory, and ideological communitarianism, a syncretic belief that prioritizes the positive right to social services for members of the community. Communitarianism may also overlap with stewardship in an environmental sense.
Social order
Social order is a common tenet of conservatism, namely the maintenance of social ties, whether the family or the law. The concept may also tie into social cohesion. Joseph de Maistre defended the necessity of the public executioner as encouraging stability. In the St Petersburg Dialogues, he wrote: "all power, all subordination rests on the executioner: he is the horror and the bond of human association. Remove this incomprehensible agent from the world, and the very moment order gives way to chaos, thrones topple, and society disappears."
The concept of social order is not exclusive to conservatism, although it tends to be fairly prevalent within it. Both Jean-Jacques Rousseau and Joseph de Maistre believed in social order; the difference was that Maistre preferred the status quo, the indivisibility of law and rule, and the meshing of Church with State, while Rousseau preferred a social contract, with the ability to withdraw from it (and to pick the ruler), as well as a separation of Church and state. Rousseau went on to criticize the "cult of the state" as well.
Classicism and high culture
Traditionalists defend classical Western civilization and value an education informed by the sifting of texts starting in the Roman World and refined under Medieval Scholasticism and Renaissance humanism. Similarly, traditionalist conservatives are Classicists who revere high culture in all of its manifestations (e.g. literature, Classical music, architecture, art, and theatre).
Localism
Traditionalists consider localism a core principle, described as a sense of devotion to one's homeland, in contrast to nationalists, who value the role of the state or nation over the local community. Traditionalist conservatives believe that allegiance to family, local community, and region is often more important than political commitments. Traditionalists also prioritize community closeness above nationalist state interest, preferring the civil society of Burke's "little platoons". This does not, however, mean that conservatives are against state authority; rather, they prefer that the state allow and encourage units like families and churches to thrive and develop.
Alternatively, some theorists state that nationalism can easily be radicalized and lead to jingoism, which sees the state as apart from the local community and family structure rather than as a product of both.
An example of a traditionalist conservative approach to immigration may be seen in Bishop John Joseph Frederick Otto Zardetti's September 21, 1892 "Sermon on the Mother and the Bride", a defence of Roman Catholic German-Americans' desire to preserve their faith and ancestral culture and to continue speaking their heritage language, German, in the United States, against both the English-only movement and accusations of being Hyphenated Americans.
History
British influences
Edmund Burke, an Anglo-Irish Whig statesman and philosopher whose political principles were rooted in moral natural law and the Western heritage, is one of the first expositors of traditionalist conservatism, although Toryism represented an even earlier, more primitive form of traditionalist conservatism. Burke believed in prescriptive rights, which he considered to be "God-given". He argued for what he called "ordered liberty" (best reflected in the unwritten law of the British constitutional monarchy). He also fought for universal ideals that were supported by institutions such as the church, the family, and the state. He was a fierce critic of the principles behind the French Revolution, and in 1790, his observations on its excesses and radicalism were collected in Reflections on the Revolution in France. In Reflections, Burke called for the constitutional enactment of specific, concrete rights and warned that abstract rights could be easily abused to justify tyranny. American social critic and historian Russell Kirk wrote: "The Reflections burns with all the wrath and anguish of a prophet who saw the traditions of Christendom and the fabric of civil society dissolving before his eyes."
Burke's influence was felt by later intellectuals and authors in both Britain and continental Europe. The English Romantic poets Samuel Taylor Coleridge, William Wordsworth and Robert Southey, as well as Scottish Romantic author Sir Walter Scott, and the counter-revolutionary writers François-René de Chateaubriand, Louis de Bonald and Joseph de Maistre were all affected by his ideas. Burke's legacy was best represented in the United States by the Federalist Party and its leaders, such as President John Adams and Secretary of the Treasury Alexander Hamilton.
French influences
Joseph de Maistre, a Savoyard lawyer, was another founder of conservatism. He was an ultramontane Catholic who thoroughly rejected progressivism and rationalism. In 1796, he published a political pamphlet entitled Considerations on France, which mirrored Burke's Reflections. Maistre viewed the French Revolution as an "evil schism" and a movement premised on the "sentiment of hatred". After the demise of Napoleon, Maistre returned to France to meet with pro-royalist circles. In 1819, Maistre published Du Pape, which presented the Pope as the supreme sovereign from whom authority derives.
Critics of material progress
Three cultural conservatives and skeptics of material development, Samuel Taylor Coleridge, Thomas Carlyle, and John Henry Newman, were staunch supporters of Burke's classical conservatism.
According to conservative scholar Peter Viereck, Coleridge and his colleague and fellow poet William Wordsworth began as followers of the French Revolution and the radical utopianism it engendered. Their collection of poems, Lyrical Ballads, published in 1798, however, rejected the Enlightenment notion of reason triumphing over faith and tradition. Later works by Coleridge, such as Lay Sermons (1816), Biographia Literaria (1817) and Aids to Reflection (1825), defended traditional conservative positions on hierarchy and organic society, criticism of materialism and the merchant class, and the need for "inner growth" that is rooted in a traditional and religious culture. Coleridge was a strong supporter of social institutions and an outspoken opponent of Jeremy Bentham and his utilitarian theory.
Thomas Carlyle, a writer, historian, and essayist, was an early traditionalist thinker, defending medieval ideals such as aristocracy, hierarchy, organic society, and class unity against communism and laissez-faire capitalism's "cash nexus." The "cash nexus," according to Carlyle, occurs when social interactions are reduced to economic gain. A champion of the poor, Carlyle claimed that mobs, plutocrats, anarchists, communists, socialists, liberals, and others were threatening the fabric of British society by exploiting the poor and perpetuating class animosity. A devotee of Germanic culture and Romanticism, Carlyle is best known for his works Sartor Resartus (1833–1834) and Past and Present (1843).
The Oxford Movement, a religious movement aimed at restoring Anglicanism's Catholic nature, gave the Church of England a "catholic rebirth" in the mid-19th century. The Tractarians (so named for the publication of their Tracts for the Times) criticized theological liberalism while preserving "dogma, ritual, poetry, [and] tradition," led by John Keble, Edward Pusey, and John Henry Newman. Newman (who converted to Roman Catholicism in 1845 and was later made a Cardinal and a canonized saint) and the Tractarians, like Coleridge and Carlyle, were critical of material progress, or the idea that money, prosperity, and economic gain constituted the totality of human existence.
Cultural and artistic criticism
Culture and the arts were also important to British traditionalist conservatives, and two of the most prominent defenders of tradition in culture and the arts were Matthew Arnold and John Ruskin.
A poet and cultural commentator, Matthew Arnold is most recognized for his poems and literary, social, and religious criticism. His book Culture and Anarchy (1869) criticized Victorian middle-class norms (Arnold referred to middle class tastes in literature as "philistinism") and advocated a return to ancient literature. Arnold was likewise skeptical of the plutocratic grasping at socioeconomic issues that had been denounced by Coleridge, Carlyle, and the Oxford Movement. Arnold was a vehement critic of the Liberal Party and its Nonconformist base. He mocked Liberal efforts to disestablish the Anglican Church in Ireland, establish a Catholic university there, allow dissenters to be buried in Church of England cemeteries, demand temperance, and ignore the need to improve middle class members rather than impose their unreasonable beliefs on society. Education was essential, and by that, Arnold meant a close reading and attachment to the cultural classics, coupled with critical reflection. He feared anarchy—the fragmentation of life into isolated facts that is caused by dangerous educational panaceas that emerge from materialistic and utilitarian philosophies. He was appalled at the shamelessness of the sensationalistic new journalism of the sort he witnessed on his tour of the United States in 1888. He prophesied, "If one were searching for the best means to efface and kill in a whole nation the discipline of self-respect, the feeling for what is elevated, he could do no better than take the American newspapers."
One of the issues that traditionalist conservatives have often emphasized is that capitalism is just as suspect as the classical liberalism that gave birth to it. Cultural and artistic critic John Ruskin, a medievalist who considered himself a "Christian communist" and cared much about standards in culture, the arts, and society, continued this tradition. The Industrial Revolution, according to Ruskin (and all 19th-century cultural conservatives), had caused dislocation, rootlessness, and vast urbanization of the poor. He wrote The Stones of Venice (1851–1853), a work of art criticism that attacked the Classical heritage while upholding Gothic art and architecture. The Seven Lamps of Architecture and Unto This Last (1860) were two of his other masterpieces.
One-nation conservatism
Burke, Coleridge, Carlyle, Newman, and other traditionalist conservatives' beliefs were distilled into former British Prime Minister Benjamin Disraeli's politics and ideology. When he was younger, Disraeli was an outspoken opponent of middle-class capitalism and the Manchester liberals' industrial policies (the Reform Bill and the Corn Laws). In order to ameliorate the suffering of the urban poor in the aftermath of the Industrial Revolution, Disraeli proposed "one-nation conservatism," in which a coalition of aristocrats and commoners would band together to counter the liberal middle class's influence. This new coalition would be a way to interact with disenfranchised people while also rooting them in old conservative principles. Disraeli's ideas (especially his critique of utilitarianism) were popularized in the "Young England" movement and in books like Vindication of the English Constitution (1835), The Radical Tory (1837), and his "social novels," Coningsby (1844) and Sybil (1845). His one-nation conservatism was revived a few years later in Lord Randolph Churchill's Tory democracy and in the early 21st century in British philosopher Phillip Blond's Red Tory thesis.
Distributism
In the early 20th century, traditionalist conservatism found its defenders through the efforts of Hilaire Belloc, G. K. Chesterton and other proponents of the socioeconomic system they advocated: distributism. Originating in the papal encyclical Rerum novarum, distributism employed the concept of subsidiarity as a "third way" solution to the twin evils of communism and capitalism. It favors local economies, small business, the agrarian way of life and craftsmen and artists. Otto von Bismarck implemented one of the first modern welfare systems in Germany during the 1880s. Traditional communities akin to those found in the Middle Ages were advocated in books like Belloc's The Servile State (1912), Economics for Helen (1924), and An Essay on the Restoration of Property (1936), and Chesterton's The Outline of Sanity (1926), while big business and big government were condemned. Distributist views were accepted in the United States by the journalist Herbert Agar and Catholic activist Dorothy Day as well as through the influence of the German-born British economist E. F. Schumacher, and were comparable to Wilhelm Roepke's work.
T. S. Eliot was a staunch supporter of Western culture and traditional Christianity. Eliot was a political reactionary who used literary modernism to achieve traditionalist goals. Following in the footsteps of Edmund Burke, Samuel Taylor Coleridge, Thomas Carlyle, John Ruskin, G. K. Chesterton, and Hilaire Belloc, he wrote After Strange Gods (1934), and Notes towards the Definition of Culture (1948). At Harvard University, where he was educated by Irving Babbitt and George Santayana, Eliot was acquainted with Allen Tate and Russell Kirk.
T. S. Eliot praised Christopher Dawson as the most potent intellectual influence in Britain, and Dawson was a prominent player in 20th-century traditionalism. The belief that religion was at the center of all civilization, especially Western culture, was central to his work, and his books reflected this view, notably The Age of Gods (1928), Religion and Culture (1948), and Religion and the Rise of Western Culture (1950). Dawson, a contributor to Eliot's Criterion, believed that religion and culture were crucial to rebuilding the West after World War II in the aftermath of fascism and the advent of communism.
In the United Kingdom
Philosophers
Roger Scruton, a British philosopher, was a self-described traditionalist and conservative. One of his best-known books is The Meaning of Conservatism (1980); he also wrote on foreign policy, animal rights, arts and culture, and philosophy. Scruton was a member of the American Enterprise Institute, the Institute for the Psychological Sciences, the Trinity Forum, and the Center for European Renewal. Modern Age, National Review, The American Spectator, The New Criterion, and City Journal were among the many publications for which he wrote.
Phillip Blond, a British philosopher, has recently gained notoriety as a proponent of traditionalist philosophy, specifically progressive conservatism, or Red Toryism. Blond believes that Red Toryism would rejuvenate British conservatism and society by combining civic communitarianism, localism, and traditional values. He has formed a think tank, ResPublica.
Publications and political organizations
The oldest traditionalist publication in the United Kingdom is The Salisbury Review, which was founded by British philosopher Roger Scruton. The Salisbury Review's current managing editor is Merrie Cave.
A group of traditionalist MPs known as the Cornerstone Group was created in 2005 within the British Conservative Party. The Cornerstone Group represents "faith, flag, and family" and stands for traditional values. Edward Leigh and John Henry Hayes are two notable members.
In Europe
The Edmund Burke Foundation is a traditionalist educational foundation established in the Netherlands and is modeled after the Intercollegiate Studies Institute. It was created by traditionalists such as academic Andreas Kinneging and journalist Bart Jan Spruyt as a think tank. The Center for European Renewal is linked with it.
In 2007, a number of leading traditionalist scholars from Europe, as well as representatives of the Edmund Burke Foundation and the Intercollegiate Studies Institute, created the Center for European Renewal, which is designed to be the European version of the Intercollegiate Studies Institute.
In the United States
When it came to "classical conservatism", the Federalists had no ties to European-style nobility, royalty, or organized churches. John Adams was one of the first champions of a traditional social order.
The Whig Party had an approach that mirrored Burkean conservatism in the post-Revolutionary era. Rufus Choate argued that lawyers were the guardians and preservers of the Constitution. In the antebellum period, George Ticknor and Edward Everett were the "Guardians of Civilization." Orestes Brownson examined how America satisfies Catholic tradition and Western civilization. The Southern Agrarians, or Fugitives, were another group of traditionalist conservatives. In 1930, some of the Fugitives published I'll Take My Stand, which applied agrarian standards to politics and economics.
Following World War II, the initial stirrings of a "traditionalist movement" emerged. Certain conservative scholars and writers garnered the attention of the popular press. Russell Kirk's The Conservative Mind, an expansion of his doctoral dissertation written in Scotland, was the book that defined the traditionalist school. Kirk was an independent scholar, writer, critic, and man of letters. He was friends with William F. Buckley Jr., the editor of National Review, and was himself a columnist and syndicated writer. When Barry Goldwater combated the Republican Party's Eastern Establishment in 1964, Kirk backed him in the primaries and campaigned for him. After Goldwater's defeat, the New Right reunited in the late 1970s and found a new leader in Ronald Reagan. Reagan created a coalition of libertarians, foreign-policy rightists, business conservatives, and Christian social conservatives, and maintained his power by solidifying a newer form of conservative alliance that continues to dominate the landscape of American conservatism to this day.
Political organizations
The Trinity Forum, Ellis Sandoz's Eric Voegelin Institute and the Eric Voegelin Society, the Conservative Institute's New Centurion Program, the T. S. Eliot Society, the Malcolm Muggeridge Society, and the Free Enterprise Institute's Center for the American Idea are all traditionalist groups. The Wilbur Foundation is a prominent supporter of traditionalist activities, particularly the Russell Kirk Center.
Literary
Literary traditionalists are frequently associated with political conservatives and the right wing, whilst experimental works and the avant-garde are frequently associated with progressives and the left wing. John Barth, a postmodern writer and literary theorist, said: "I confess to missing, in apprentice seminars in the later 1970s and the 1980s, that lively Make-It-New spirit of the Buffalo Sixties. A roomful of young traditionalists can be as depressing as a roomful of young Republicans."
James Fenimore Cooper, Nathaniel Hawthorne, James Russell Lowell, W. H. Mallock, Robert Frost and T. S. Eliot are among the literary figures covered in Russell Kirk's The Conservative Mind (1953). The writings of Rudyard Kipling and Phyllis McGinley are presented as instances of literary traditionalism in Kirk's The Conservative Reader (1982). Kirk was also a well-known author of spooky and suspense fiction with a Gothic flavor. Ray Bradbury and Madeleine L'Engle both praised novels such as Old House of Fear, A Creature of the Twilight, and Lord of the Hollow Dark as well as short stories such as "Lex Talionis", "Lost Lake", "Beyond the Stumps", "Ex Tenebris," and "Fate's Purse." Kirk was also close friends with a number of 20th-century literary heavyweights, including T. S. Eliot, Roy Campbell, Wyndham Lewis, Ray Bradbury, Madeleine L'Engle, Fernando Sánchez Dragó, and Flannery O'Connor, all of whom wrote conservative poetry or fiction.
Evelyn Waugh, J.R.R. Tolkien, and G. K. Chesterton – British novelists and traditionalist Catholics – are often considered traditionalist conservatives. With regard to both literature and cultural revival among speakers of Celtic languages, the same argument can be made for Saunders Lewis, Máirtín Ó Direáin, John Lorne Campbell, and Margaret Fay Shaw.
See also
Christian democracy
Communitarianism
Counter-Enlightenment
Corporatism
Distributism
High Tories
Historical school of economics
Integralism
Localism (politics)
Monarchism
National conservatism
Natural order (philosophy)
Neoauthoritarianism (China)
New Humanism
New traditionalism
Organicism
Paleoconservatism
Philosophical naturalism
Red Tory
Regionalism
Right-wing authoritarianism
Royalism
Social conservatism
Tory
Tory (political faction)
Traditionalism (Spain)
References
Bibliography
Further reading
Articles
"Understanding Traditionalist Conservatism" by Mark C. Henrie. The New Pantagruel, formerly published in Varieties of Conservatism in America, Peter Berkowitz, Ed. (Hoover Press, 2004) .
General references
Allitt, Patrick (2009) The Conservatives: Ideas and Personalities Throughout American History. New Haven, CT: Yale University Press.
Critchlow, Donald T. (2007) The Conservative Ascendancy: How the GOP Right Made Political History. Cambridge, MA: Harvard University Press.
Dunn, Charles W., and J. David Woodard (2003) The Conservative Tradition in America. Lanham, MD: Rowman and Littlefield Publishers.
Edwards, Lee (2004) A Brief History of the Modern American Conservative Movement. Washington, D.C.: Heritage Foundation.
Frohnen, Bruce, Jeremy Beer, and Jeffrey O. Nelson (2006) American Conservatism: An Encyclopedia. Wilmington, DE: ISI Books.
Gottfried, Paul, and Thomas Fleming (1988) The Conservative Movement. Boston: Twayne Publishers.
Nash, George H. (1976, 2006) The Conservative Intellectual Movement in America since 1945. Wilmington, DE: ISI Books.
Nisbet, Robert (1986) Conservatism: Dream and Reality. Minneapolis, MN: University of Minnesota Press.
Regnery, Alfred S. (2008) Upstream: The Ascendance of American Conservatism. New York: Threshold Editions.
Viereck, Peter (1956, 2006) Conservative Thinkers from John Adams to Winston Churchill. New Brunswick, NJ: Transaction Publishers.
By the New Conservatives
Bestor, Arthur (1953, 1988) Educational Wastelands: The Retreat from Learning in Our Public Schools. Champaign, IL: University of Illinois Press.
Boorstin, Daniel (1953) The Genius of American Politics. Chicago: University of Chicago Press.
Chalmers, Gordon Keith (1952) The Republic and the Person: A Discussion of Necessities in Modern American Education. Chicago: Regnery.
Hallowell, John (1954, 2007) The Moral Foundation of Democracy. Indianapolis: Liberty Fund Inc.
Heckscher, August (1947) A Pattern of Politics. New York: Reynal and Hitchcock.
Kirk, Russell (1953, 2001) The Conservative Mind. Washington, D.C.: Regnery Publishing.
Kirk, Russell (1982) The Portable Conservative Reader. New York: Penguin.
Nisbet, Robert (1953, 1990) The Quest for Community: A Study in the Ethics of Order and Freedom. San Francisco: ICS Press.
Smith, Mortimer (1949) And Madly Teach. Chicago: Henry Regnery Co.
Viereck, Peter (1949, 2006) Conservatism Revisited: The Revolt Against Ideology. New Brunswick, NJ: Transaction Publishers.
Vivas, Eliseo (1950, 1983) The Moral Life and the Ethical Life. Lanham, MD: University Press of America.
Voegelin, Eric (1952, 1987) The New Science of Politics: An Introduction. Chicago: University of Chicago Press.
Weaver, Richard (1948, 1984) Ideas Have Consequences. Chicago: University of Chicago Press.
Wilson, Francis G. (1951, 1990) The Case for Conservatism. New Brunswick, NJ: Transaction Publishers.
By other traditionalist conservatives
Dreher, Rod (2006) Crunchy Cons: How Birkenstocked Burkeans, Gun-loving Organic Farmers, Hip Homeschooling Mamas, Right-wing Nature Lovers, and Their Diverse Tribe of Countercultural Conservatives Plan to Save America (or At Least the Republican Party). New York: Crown Forum.
Frohnen, Bruce (1993) Virtue and the Promise of Conservatism: The Legacy of Burke and Tocqueville. Lawrence, KS: University Press of Kansas.
Henrie, Mark C. (2008) Arguing Conservatism: Four Decades of the Intercollegiate Review. Wilmington, DE: ISI Books.
Kushiner, James M., Ed. (2003) Creed and Culture: A Touchstone Reader. Wilmington, DE: ISI Books.
MacIntyre, Alasdair (1981, 2007) After Virtue: A Study in Moral Theory. Notre Dame, IN: University of Notre Dame Press.
Panichas, George A., Ed. (1988) Modern Age: The First Twenty-Five Years: A Selection. Indianapolis: Liberty Fund, Inc.
Panichas, George A. (2008) Restoring the Meaning of Conservatism: Writings from Modern Age. Wilmington, DE: ISI Books.
Scruton, Roger (1980, 2002) The Meaning of Conservatism. South Bend, IN: St. Augustine's Press.
Scruton, Roger (2012) Green Philosophy: How to Think Seriously About the Planet. Atlantic Books
About traditionalist conservatives
Duffy, Bernard K. and Martin Jacobi (1993) The Politics of Rhetoric: Richard M. Weaver and the Conservative Tradition. Santa Barbara, CA: Greenwood Press.
Federici, Michael P. (2002) Eric Voegelin: The Restoration of Order. Wilmington, DE: ISI Books.
Gottfried, Paul (2009) Encounters: My Life with Nixon, Marcuse, and Other Friends and Teachers. Wilmington, DE: ISI Books.
Kirk, Russell (1995) The Sword of Imagination: Memoirs of a Half-Century of Literary Conflict. Grand Rapids, MI: William B. Eerdman's Publishing Co.
Langdale, John (2012) Superfluous Southerners: Cultural Conservatism and the South, 1920–1990. Columbia, MO: University of Missouri Press.
McDonald, W. Wesley (2004) Russell Kirk and the Age of Ideology. Columbia, MO: University of Missouri Press.
Person, James E. Jr. (1999) Russell Kirk: A Critical Biography of a Conservative Mind. Lanham, MD: Madison Books.
Russello, Gerald J. (2007) The Postmodern Imagination of Russell Kirk. Columbia, MO: University of Missouri Press.
Scotchie, Joseph (1997) Barbarians in the Saddle: An Intellectual Biography of Richard M. Weaver. New Brunswick, NJ: Transaction Publishers.
Scotchie, Joseph (1995) The Vision of Richard Weaver. New Brunswick, NJ: Transaction Publishers.
Scruton, Roger (2005) Gentle Regrets: Thoughts From A Life London: Continuum.
Stone, Brad Lowell (2002) Robert Nisbet: Communitarian Traditionalist. Wilmington, DE: ISI Books.
Wilson, Clyde (1999) A Defender of Conservatism: M. E. Bradford and His Achievements. Columbia, MO: University of Missouri Press.
Conservatism
Conservatism in the United States
Conservatism in the United Kingdom
Criticism of rationalism
Counter-Enlightenment
Toryism
Tradition
Right-wing ideologies | 0.768684 | 0.996199 | 0.765763 |
Golden Age

The term Golden Age comes from Greek mythology, particularly the Works and Days of Hesiod, and is part of the description of the temporal decline of the state of peoples through five Ages, Gold being the first and the one during which the Golden Race of humanity (chrýseon génos) lived. After the end of the first age came the Silver, then the Bronze, after this the Heroic Age, with the fifth and current age being Iron.
By extension, "Golden Age" denotes a period of primordial peace, harmony, stability, and prosperity. During this age, peace and harmony prevailed: people did not have to work to feed themselves, for the earth provided food in abundance. They lived to a very old age with a youthful appearance, eventually dying peacefully, with spirits living on as "guardians". Plato in Cratylus (397e) recounts the golden race of humans who came first. He clarifies that Hesiod did not mean that this race was literally made of gold, but that it was good and noble.
In classical Greek mythology, the Golden Age was presided over by the leading Titan Cronus; in Latin authors it was associated with the god Saturn. In some versions of the myth Astraea also ruled. She lived with men until the end of the Silver Age. But in the Bronze Age, when men became violent and greedy, she fled to the stars, where she appears as the constellation Virgo, holding the scales of Justice, or Libra.
European pastoral literary tradition often depicted nymphs and shepherds as living a life of rustic innocence and peace, set in Arcadia, a region of Greece that was the abode and center of worship of their tutelary deity, goat-footed Pan, who dwelt among them.
The Golden Age in Europe: Greece
The earliest attested reference to the European myth of the Ages of Man appears in Works and Days (109–126) by the Greek poet Hesiod, composed around 700 BCE. Hesiod, a deteriorationist, identifies the Golden Age, the Silver Age, the Bronze Age, the Heroic Age, and the Iron Age. With the exception of the Heroic Age, each succeeding age was worse than the one that went before. Hesiod maintains that during the Golden Age, before the invention of the arts, the earth produced food in such abundance that there was no need for agriculture:
[Men] lived like gods without sorrow of heart, remote and free from toil and grief: miserable age rested not on them; but with legs and arms never failing they made merry with feasting beyond the reach of all evils. When they died, it was as though they were overcome with sleep, and they had all good things; for the fruitful earth unforced bare them fruit abundantly and without stint. They dwelt in ease and peace.
Plato in his Cratylus referred to an age of golden men and also discoursed at some length on the Ages of Man from Hesiod's Works and Days. The Roman poet Ovid simplified the concept by reducing the number of Ages to four: Gold, Silver, Bronze, and Iron. Ovid's poetry was likely a prime source for the transmission of the myth of the Golden Age during the period when Western Europe had lost direct contact with Greek literature.
In Hesiod's version, the Golden Age ended when the Titan Prometheus conferred on mankind the gift of fire and all the other arts. For this, Zeus punished Prometheus by chaining him to a rock in the Caucasus, where an eagle eternally ate at his liver. The gods sent the beautiful maiden Pandora to Prometheus's brother Epimetheus. The gods had entrusted Pandora with a box that she was forbidden to open; however, her uncontrollable curiosity got the better of her and she opened the box, thereby unleashing all manner of evil into the world.
The Orphic school, a mystery cult that originated in Thrace and spread to Greece in the 5th century BCE, held similar beliefs about the early days of man, likewise denominating the ages with metals. In common with the many other mystery cults prevalent in the Graeco-Roman world (and their Indo-European religious antecedents), the world view of Orphism was cyclical. Initiation into its secret rites, together with ascetic practices, was supposed to guarantee the individual's soul eventual release from the grievous circle of mortality and also communion with the gods. Orphics sometimes identified the Golden Age with the era of the god Phanes, who was regent over Olympus before Cronus. In classical mythology, however, the Golden Age was associated with the reign of Saturn. In the 5th century BCE, the philosopher Empedocles, like Hesiod before him, emphasized the idea of primordial innocence and harmony in all of nature, including human society, from which he maintained there had been a steady deterioration until the present.
Arcadia
A tradition arose in Greece that the site of the original Golden Age had been Arcadia, an impoverished rural area of Greece where the herdsmen still lived on acorns and where the goat-footed god Pan had his home among the poplars on Mount Maenalus. However, in the 3rd century BCE, the Greek poet Theocritus, writing in Alexandria, set his pastoral poetry on the lushly fertile island of Sicily, where he had been born. The protagonist of Theocritus's first Idyll, the goatherd Daphnis, is taught to play the syrinx (panpipes) by Pan himself.
The Golden Age in Rome: Virgil and Ovid
Writing in Latin during the turbulent period of revolutionary change at the end of the Roman Republic (roughly between 44 and 38 BCE), the poet Virgil moved the setting for his pastoral imitations of Theocritus back to an idealized Arcadia in Greece, thus initiating a rich and resonant tradition in subsequent European literature.
Virgil, moreover, introduced into his poetry the element of political allegory, which had been largely absent in Theocritus, even intimating in his fourth Eclogue that a new Golden Age of peace and justice was about to return:
Ultima Cumaei venit iam carminis aetas;
magnus ab integro saeclorum nascitur ordo:
iam redit et Virgo, redeunt Saturnia regna;
iam nova progenies caelo demittitur alto.
Translation:
Now the last age by Cumae's Sibyl sung
Has come and gone, and the majestic roll
Of circling centuries begins anew:
Astraea returns,
Returns old Saturn's reign,
With a new breed of men sent down from heaven.
Somewhat later, shortly before he wrote his epic poem the Aeneid, which dealt with the establishment of Roman Imperial rule, Virgil composed his Georgics (29 BCE), modeled directly on Hesiod's Works and Days and similar Greek works. Ostensibly about agriculture, the Georgics are in fact a complex allegory about how man's alterations of nature (through works) are related to good and bad government. Although Virgil does not mention the Golden Age by name in the Georgics, he does refer in them to a time when man was in harmony with nature before the reign of Jupiter, when:
Fields knew no taming hand of husbandmen
To mark the plain or mete with boundary-line.
Even this was impious; for the common stock
They gathered, and the earth of her own will
All things more freely, no man bidding, bore.
ante Iouem nulli subigebant arua coloni
ne signare quidem aut partiri limite campum
fas erat; in medium quaerebant, ipsaque tellus
omnia liberius nullo poscente ferebat. (Georgics, Book 1: 125–28)
This view, which identifies a State of Nature with the celestial harmony of which man's nature is (or should be, if properly regulated) a microcosm, reflects the Hellenistic cosmology that prevailed among literate classes of Virgil's era. It is seen again in Ovid's Metamorphoses (7 CE), in which the lost Golden Age is depicted as a place and time when, because nature and reason were harmoniously aligned, men were naturally good:
The Golden Age was first; when Man, yet new,
No rule but uncorrupted Reason knew:
And, with a native bent, did good pursue.
Unforc'd by punishment, un-aw'd by fear.
His words were simple, and his soul sincere;
Needless was written law, where none opprest:
The law of Man was written in his breast.
The Graeco-Roman concept of the "natural man", delineated by Ovid and many other classical writers, was especially popular during the Deistically inclined 18th century. It is often erroneously attributed to Rousseau, who did not share it.
"Soft" and "hard" primitivism in Arcadia
In his famous essay "Et in Arcadia ego: Poussin and the Elegiac Tradition", Erwin Panofsky remarks how in ancient times "that particular, not overly opulent, region of central Greece, Arcady, came to be universally accepted as an ideal realm of perfect bliss and beauty, a dream incarnate of ineffable happiness, surrounded nevertheless with a halo of 'sweetly sad' melancholy":
There had been, from the beginning of classical speculation, two contrasting opinions about the natural state of man, each of them, of course, a "Gegen-Konstruktion" to the conditions under which it was formed. One view, termed "soft" primitivism in an illuminating book by Lovejoy and Boas, conceives of primitive life as a golden age of plenty, innocence, and happiness – in other words, as civilized life purged of its vices. The other, "hard" form of primitivism conceives of primitive life as an almost subhuman existence full of terrible hardships and devoid of all comforts – in other words, as civilized life stripped of its virtues.
Arcady, as we encounter it in all modern literature, and as we refer to it in our daily speech, falls under the heading of "soft" or golden-age primitivism. To be sure, this real Arcady was the domain of Pan, who could be heard playing the syrinx on Mount Maenalus; and its inhabitants were famous for their musical accomplishments as well as for their ancient lineage, rugged virtue, and rustic hospitality.
Other Golden Ages
There are analogous concepts in the religious and philosophical traditions of the Indian subcontinent. For example, the Vedic or ancient Hindu culture saw history as cyclical, each cycle composed of four yugas (ages): Satya Yuga (Golden Age), Treta Yuga (Silver Age), Dvapara Yuga (Bronze Age) and Kali Yuga (Iron Age), corresponding to the four Greek ages. Similar beliefs occur in the ancient Middle East and throughout the ancient world, as well.
Christianity
There is a reference to a succession of kingdoms in Nebuchadnezzar's dream in Daniel 2, in decreasing order identified as gold, silver, bronze, iron and finally mixed iron and clay.
The interpretation of the dream follows in verses 36–45.
Judaism
The Jewish Golden Age refers to the period of Muslim rule of Spain, which allowed Jewish culture to thrive.
Hinduism
The Indian teachings differentiate the four world ages (yugas) not according to metals but according to dharmic qualities (virtues), with the first age beginning with the most virtue and the last age ending with the least. The end is followed by a new cycle (Yuga Cycle) of the same four ages: Satya Yuga (golden age), Treta Yuga, Dvapara Yuga, and Kali Yuga (dark age), of which the current age is Kali Yuga.
In Satya Yuga, knowledge, meditation, and communion with spirit hold special importance. Most people engage only in good, sublime deeds and mankind lives in harmony with the Earth. Ashrams become devoid of wickedness and deceit. Natyam (such as Bharatanatyam), according to Natya Shastra, did not exist in the Satya Yuga "because it was the time when all people were happy".
Satya Yuga (Krita Yuga) according to the Mahabharata:
Men neither bought nor sold; there were no poor and no rich; there was no need to labour, because all that men required was obtained by the power of will; the chief virtue was the abandonment of all worldly desires. The Krita Yuga was without disease; there was no lessening with the years; there was no hatred or vanity, or evil thought whatsoever; no sorrow, no fear. All mankind could attain to supreme blessedness.
Islam
The Islamic Golden Age (Arabic: العصر الذهبي للإسلام, romanized: al-'asr al-dhahabi lil-islam) was a period of cultural, economic, and scientific flourishing in the history of Islam, traditionally dated from the 8th century to the 14th century. This period is traditionally understood to have begun during the reign of the Abbasid caliph Harun al-Rashid (786 to 809) with the inauguration of the House of Wisdom in Baghdad, then the world's largest city, where Islamic scholars and polymaths from various parts of the world with different cultural backgrounds were mandated to gather and translate all of the known world's classical knowledge into Syriac and Arabic.
Germanic
("Age of Gold") is used in Gylfaginning to describe the period after the creation of the world, and before the arrival of three women out of Jötunheimr, who have been proposed to be the Norns.
A second ideal period is Fróði's Peace, a semi-legendary time during the rule of a Danish king in which peace and prosperity was seen throughout Northern Europe.
Chinese mythology and religion
Shennong, in the myth cycles that treated him as a culture hero rather than a god, was thought to have maintained a Golden Age in the world by helping humans, for example by eating every single plant to see which ones were edible, surviving the resulting poisonings with the help of his supernatural digestive system. This Golden Age came to an end with his death.
Popular culture
Fantasy
In modern fantasy worlds, whose background and setting sometimes draw heavily on real-world myths, similar or compatible concepts of a Golden Age exist in that world's prehistory, when deities or elf-like creatures existed, before the coming of humans.
For example, in The Silmarillion by J. R. R. Tolkien, a Golden Age exists in the Middle-earth legendarium. Arda (the part of the world where The Lord of the Rings is set) was designed to be symmetrical and perfect. After the wars of the Gods, Arda lost its perfect shape (known as Arda Unmarred) and was called Arda Marred. Another kind of Golden Age follows later, after the Elves awoke; the Eldar stay on Valinor, live with the Valar and advance in arts and knowledge, until the rebellion and the fall of the Noldor, reminiscent of the Fall of Man. Eventually, after the end of the world, the Silmarils will be recovered and the light of the Two Trees of Valinor rekindled. Arda will be remade as Arda Healed.
In The Wheel of Time universe, the "Age of Legends" is the name given to the previous Age: In this society, channelers were common and Aes Sedai – trained channelers – were extremely powerful, able to make angreal, sa'angreal, and ter'angreal, and holding important civic positions. The Age of Legends is seen as a utopian society without war or crime, and devoted to culture and learning. Aes Sedai were frequently devoted to academic endeavours, one of which inadvertently resulted in a hole – The Bore – being drilled in the Dark One's prison. The immediate effects were not realised, but the Dark One gradually asserted power over humanity, swaying many to become his followers. This resulted in the War of Power and eventually the Breaking of the World.
Another example is in the background of the classic computer game Lands of Lore, where the history of the Lands is divided into Ages. One of them is also called the Golden Age, a time when the Lands were ruled by the 'Ancients' and there were no wars. This age ended with the 'War of the Heretics'.
The Golden Age may also refer to a state of early childhood. Herbert Spencer argued that young children progress through the cognitive stages of evolution of the human species and of human civilization, thereby linking pre-civilization and infancy. Kenneth Grahame called his evocation of early childhood The Golden Age, and J. M. Barrie's fictional character Peter Pan, who first appeared in The Little White Bird, was named after Pan, a Greek god from the Golden Age. Barrie's further works about Peter Pan depict early childhood as a time of pre-civilised naturalness and happiness, which is destroyed by the subsequent process of education.
Present-day usage
The term "Golden Age" is at present frequently used in the context of a specific time in the history of a particular countrysuch as the "Spanish Golden Age", "Dutch Golden Age", "Danish Golden Age", "Golden Age of Flanders"or the history of a specific field"Golden age of alpinism", "Golden Age of American animation", "Golden Age of Comics", "Golden Age of Science Fiction", "Golden Age of Television", "Golden Age of Hollywood", "Golden age of arcade video games", "Golden Age of Radio", "Golden Age of Hip Hop" "Golden Age of Kamishibai Theater" (in Japan) and even "Golden Age of Piracy" or "Golden Age of Porn". Usually, the term "Golden Age" is bestowed retroactively, when the period in question has ended and is compared with what followed in the specific field discussed. The term has also been used prospectively. For example, on July 27, 2020, the President of the United States, Donald Trump, published a post on Twitter promising a future "golden age" once the country recovered from the COVID-19 pandemic. The term Gilded Age, which refers to a period in the history of the United States, is a parody of this usage of "golden age" (suggesting that the period has the outward appearance of a golden age, but is in actuality much less desirable).
See also
Age of Aquarius
Islamic Golden Age
American Century
Belle Époque
2012 phenomenon
Ages of Man
Arcadia (utopia)
Eschatology
Garden of Eden
Golden Liberty (in Polish history)
Great year
Khalistan
Merrie England
Messianic Age
Millennialism
New world order (Baháʼí)
Paradise
Precession of the Equinoxes/African humid period
Satya Yuga
Utopia
Pleistocene

The Pleistocene (often referred to colloquially as the Ice Age) is the geological epoch that lasted from about 2,580,000 to 11,700 years ago, spanning the Earth's most recent period of repeated glaciations. Before a change was finally confirmed in 2009 by the International Union of Geological Sciences, the boundary between the Pleistocene and the preceding Pliocene was regarded as lying at 1.806 million years Before Present (BP). Publications from earlier years may use either definition of the period. The end of the Pleistocene corresponds with the end of the last glacial period and also with the end of the Paleolithic age used in archaeology. The name is a combination of Ancient Greek πλεῖστος (pleīstos) 'most' and καινός (kainós; Latinized as cænus) 'new'.
At the end of the preceding Pliocene, the previously isolated North and South American continents were joined by the Isthmus of Panama, causing a faunal interchange between the two regions and changing ocean circulation patterns, with the onset of glaciation in the Northern Hemisphere occurring around 2.7 million years ago. During the Early Pleistocene (2.58–0.8 Ma), archaic humans of the genus Homo originated in Africa and spread throughout Afro-Eurasia. The end of the Early Pleistocene is marked by the Mid-Pleistocene Transition, with the cyclicity of glacial cycles changing from 41,000-year cycles to asymmetric 100,000-year cycles, making the climate variation more extreme. The Late Pleistocene witnessed the spread of modern humans outside of Africa as well as the extinction of all other human species. Humans also spread to the Australian continent and the Americas for the first time, coincident with the extinction of most large-bodied animals in these regions.
The aridification and cooling trends of the preceding Neogene continued in the Pleistocene. The climate was strongly variable depending on the glacial cycle, with sea levels up to about 120 m (390 ft) lower than present at peak glaciation, allowing the connection of Asia and North America via Beringia and the covering of most of northern North America by the Laurentide Ice Sheet.
Etymology
Charles Lyell introduced the term "Pleistocene" in 1839 to describe strata in Sicily that had at least 70% of their molluscan fauna still living today. This distinguished it from the older Pliocene Epoch, which Lyell had originally thought to be the youngest fossil rock layer. He constructed the name "Pleistocene" ('most new' or 'newest') from the Greek πλεῖστος (pleīstos) 'most' and καινός (kainós; Latinized as cænus) 'new'. This contrasts with the immediately preceding Pliocene ("newer", from πλείων (pleíōn, "more") and kainós) and the immediately subsequent Holocene ("wholly new" or "entirely new", from ὅλος (hólos, "whole") and kainós) epoch, which extends to the present time.
Dating
The Pleistocene has been dated from 2.580 million (±0.005) to 11,650 years BP with the end date expressed in radiocarbon years as 10,000 carbon-14 years BP. It covers most of the latest period of repeated glaciation, up to and including the Younger Dryas cold spell. The end of the Younger Dryas has been dated to about 9640 BCE (11,654 calendar years BP). The end of the Younger Dryas is the official start of the current Holocene Epoch. Although it is considered an epoch, the Holocene is not significantly different from previous interglacial intervals within the Pleistocene. In the ICS timescale, the Pleistocene is divided into four stages or ages, the Gelasian, Calabrian, Chibanian (previously the unofficial "Middle Pleistocene"), and Upper Pleistocene (unofficially the "Tarantian"). In addition to these international subdivisions, various regional subdivisions are often used.
In 2009 the International Union of Geological Sciences (IUGS) confirmed a change in time period for the Pleistocene, changing the start date from 1.806 to 2.588 million years BP, and accepted the base of the Gelasian as the base of the Pleistocene, namely the base of the Monte San Nicola GSSP. The start date has since been rounded down to 2.580 million years BP. The IUGS has yet to approve a type section, Global Boundary Stratotype Section and Point (GSSP), for the upper Pleistocene/Holocene boundary (i.e. the upper boundary). The proposed section is the North Greenland Ice Core Project ice core at 75° 06' N 42° 18' W. The lower boundary of the Pleistocene Series is formally defined magnetostratigraphically as the base of the Matuyama (C2r) chronozone, isotopic stage 103. Above this point there are notable extinctions of the calcareous nannofossils Discoaster pentaradiatus and Discoaster surculus.
The name Plio-Pleistocene has, in the past, been used to mean the last ice age. Formerly, the boundary between the two epochs was drawn at the time when the foraminiferal species Hyalinea baltica first appeared in the marine section at La Castella, Calabria, Italy. However, the revised definition of the Quaternary, by pushing back the start date of the Pleistocene to 2.58 Ma, results in the inclusion of all the recent repeated glaciations within the Pleistocene.
Radiocarbon dating is considered to be unreliable beyond around 50,000 years ago. Marine isotope stages (MIS), derived from oxygen isotope ratios, are often used for giving approximate dates beyond that limit.
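The roughly 50,000-year ceiling follows from the decay arithmetic. As a sketch (these are the standard radiocarbon equations, not quoted from this article), the conventional age $t$ and decay constant are

$$t = \frac{\ln(N_0/N)}{\lambda}, \qquad \lambda = \frac{\ln 2}{5730~\text{yr}},$$

so at $t = 50{,}000$ years the surviving fraction is $N/N_0 = 2^{-50000/5730} \approx 0.002$: only about 0.2% of the original carbon-14 remains, at the edge of what can be measured reliably.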
Deposits
Pleistocene non-marine sediments are found primarily in fluvial deposits, lakebeds, slope and loess deposits as well as in the large amounts of material moved about by glaciers. Less common are cave deposits, travertines and volcanic deposits (lavas, ashes). Pleistocene marine deposits are found primarily in shallow marine basins mostly (but with important exceptions) in areas within a few tens of kilometres of the modern shoreline. In a few geologically active areas such as the Southern California coast, Pleistocene marine deposits may be found at elevations of several hundred metres.
Paleogeography and climate
The modern continents were essentially at their present positions during the Pleistocene, the plates upon which they sit probably having moved no more than about 100 km relative to each other since the beginning of the period. In glacial periods, sea level would drop to as much as about 120 m (390 ft) lower than today during peak glaciation, exposing large areas of the present continental shelf as dry land.
According to Mark Lynas (through collected data), the Pleistocene's overall climate could be characterised as a continuous El Niño with trade winds in the south Pacific weakening or heading east, warm air rising near Peru, warm water spreading from the west Pacific and the Indian Ocean to the east Pacific, and other El Niño markers.
Glacial features
Pleistocene climate was marked by repeated glacial cycles in which continental glaciers pushed to the 40th parallel in some places. It is estimated that, at maximum glacial extent, 30% of the Earth's surface was covered by ice. In addition, a zone of permafrost stretched southward from the edge of the glacial sheet, a few hundred kilometres in North America, and several hundred in Eurasia. The mean annual temperature at the edge of the ice was about −6 °C (21 °F); at the edge of the permafrost, 0 °C (32 °F).
Each glacial advance tied up huge volumes of water in continental ice sheets 1,500 to 3,000 m (4,900 to 9,800 ft) thick, resulting in temporary sea-level drops of 100 m (330 ft) or more over the entire surface of the Earth. During interglacial times, such as at present, drowned coastlines were common, mitigated by isostatic or other emergent motion of some regions.
The effects of glaciation were global. Antarctica was ice-bound throughout the Pleistocene as well as the preceding Pliocene. The Andes were covered in the south by the Patagonian ice cap. There were glaciers in New Zealand and Tasmania. The current decaying glaciers of Mount Kenya, Mount Kilimanjaro, and the Ruwenzori Range in east and central Africa were larger. Glaciers existed in the mountains of Ethiopia and to the west in the Atlas Mountains.
In the northern hemisphere, many glaciers fused into one. The Cordilleran Ice Sheet covered the North American northwest; the east was covered by the Laurentide. The Fenno-Scandian ice sheet rested on northern Europe, including much of Great Britain; the Alpine ice sheet on the Alps. Scattered domes stretched across Siberia and the Arctic shelf. The northern seas were ice-covered.
South of the ice sheets, large lakes accumulated because outlets were blocked and the cooler air slowed evaporation. When the Laurentide Ice Sheet retreated, much of north-central North America was covered by Lake Agassiz. Over a hundred basins, now dry or nearly so, were overflowing in the North American west. Lake Bonneville, for example, stood where Great Salt Lake now does. In Eurasia, large lakes developed as a result of the runoff from the glaciers. Rivers were larger, had a more copious flow, and were braided. African lakes were fuller, apparently from decreased evaporation. Deserts, on the other hand, were drier and more extensive. Rainfall was lower because of the decreases in oceanic and other evaporation.
It has been estimated that during the Pleistocene, the East Antarctic Ice Sheet thinned by at least 500 meters, and that thinning since the Last Glacial Maximum is less than 50 meters and probably started after ca 14 ka.
Major events
During the 2.5 million years of the Pleistocene, numerous cold phases called glacials (Quaternary ice age), or significant advances of continental ice sheets, in Europe and North America, occurred at intervals of approximately 40,000 to 100,000 years. The long glacial periods were separated by more temperate and shorter interglacials which lasted about 10,000–15,000 years. The last cold episode of the last glacial period ended about 10,000 years ago. Over 11 major glacial events have been identified, as well as many minor glacial events. A major glacial event is a general glacial excursion, termed a "glacial." Glacials are separated by "interglacials". During a glacial, the glacier experiences minor advances and retreats. The minor excursion is a "stadial"; times between stadials are "interstadials".
These events are defined differently in different regions of the glacial range, which have their own glacial history depending on latitude, terrain and climate. There is a general correspondence between glacials in different regions. Investigators often interchange the names if the glacial geology of a region is in the process of being defined. However, it is generally incorrect to apply the name of a glacial in one region to another.
For most of the 20th century, only a few regions had been studied and the names were relatively few. Today the geologists of different nations are taking more of an interest in Pleistocene glaciology. As a consequence, the number of names is expanding rapidly and will continue to expand. Many of the advances and stadials remain unnamed. Also, the terrestrial evidence for some of them has been erased or obscured by larger ones, but evidence remains from the study of cyclical climate changes.
The glacials in the following tables show historical usages, are a simplification of a much more complex cycle of variation in climate and terrain, and are generally no longer used. These names have been abandoned in favour of numeric data because many of the correlations were found to be either inexact or incorrect and more than four major glacials have been recognised since the historical terminology was established.
Corresponding to the terms glacial and interglacial, the terms pluvial and interpluvial are in use (Latin: pluvia, rain). A pluvial is a warmer period of increased rainfall; an interpluvial, one of decreased rainfall. Formerly a pluvial was thought to correspond to a glacial in regions not iced, and in some cases it does. Rainfall is cyclical also. Pluvials and interpluvials are widespread.
There is no systematic correspondence between pluvials and glacials, however. Moreover, regional pluvials do not correspond to each other globally. For example, some have used the term "Riss pluvial" in Egyptian contexts. Any coincidence is an accident of regional factors. Only a few of the names for pluvials in restricted regions have been stratigraphically defined.
Palaeocycles
The sum of transient factors acting at the Earth's surface is cyclical: climate, ocean currents and other movements, wind currents, temperature, etc. The waveform response comes from the underlying cyclical motions of the planet, which eventually drag all the transients into harmony with them. The repeated glaciations of the Pleistocene were caused by the same factors.
The Mid-Pleistocene Transition, approximately one million years ago, saw a change from low-amplitude glacial cycles with a dominant periodicity of 41,000 years to asymmetric high-amplitude cycles dominated by a periodicity of 100,000 years.
However, a 2020 study concluded that ice age terminations might have been influenced by obliquity since the Mid-Pleistocene Transition, which caused stronger summers in the Northern Hemisphere.
Milankovitch cycles
Glaciation in the Pleistocene was a series of glacials and interglacials, stadials and interstadials, mirroring periodic climate changes. The main factor at work in climate cycling is now believed to be Milankovitch cycles. These are periodic variations in regional and planetary solar radiation reaching the Earth caused by several repeating changes in the Earth's motion. The effects of Milankovitch cycles were enhanced by various positive feedbacks related to increases in atmospheric carbon dioxide concentrations and Earth's albedo.
Milankovitch cycles cannot be the sole factor responsible for the variations in climate since they explain neither the long-term cooling trend over the Plio-Pleistocene nor the millennial variations in the Greenland Ice Cores known as Dansgaard-Oeschger events and Heinrich events. Milankovitch pacing seems to best explain glaciation events with periodicity of 100,000, 40,000, and 20,000 years. Such a pattern seems to fit the information on climate change found in oxygen isotope cores.
Oxygen isotope ratio cycles
In oxygen isotope ratio analysis, variations in the ratio of ¹⁸O to ¹⁶O (two isotopes of oxygen) by mass (measured by a mass spectrometer) present in the calcite of oceanic core samples are used as a diagnostic of ancient ocean temperature change and therefore of climate change. Cold oceans are richer in ¹⁸O, which is included in the tests of the microorganisms (foraminifera) contributing the calcite.
A more recent version of the sampling process makes use of modern glacial ice cores. Although less rich in ¹⁸O than seawater, the snow that fell on the glacier year by year nevertheless contained ¹⁸O and ¹⁶O in a ratio that depended on the mean annual temperature.
Temperature and climate change are cyclical when plotted on a graph of temperature versus time. Temperature coordinates are given in the form of a deviation from today's annual mean temperature, taken as zero. This sort of graph is based on another isotope ratio versus time. Ratios are converted to a percentage difference from the ratio found in standard mean ocean water (SMOW).
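Written out explicitly (this is the standard delta notation, assumed here rather than taken from the article), the deviation from SMOW is

$$\delta^{18}\mathrm{O} = \left(\frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\text{sample}}}{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\text{SMOW}}} - 1\right) \times 1000$$

expressed in parts per thousand (‰). Positive values mark samples enriched in ¹⁸O relative to the standard (colder water, in the case of foraminiferal calcite); negative values mark depletion.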
The graph in either form appears as a waveform with overtones. One half of a period is a Marine isotopic stage (MIS). It indicates a glacial (below zero) or an interglacial (above zero). Overtones are stadials or interstadials.
According to this evidence, Earth experienced 102 MIS stages beginning at about 2.588 Ma BP in the Early Pleistocene Gelasian. Early Pleistocene stages were shallow and frequent. The latest were the most intense and most widely spaced.
By convention, stages are numbered back from the Holocene, which is MIS 1. Glacials receive an even number and interglacials receive an odd number. The first major glacial in this numbering is MIS 2–4, at about 85–11 ka BP. The largest glacials were 2, 6, 12, and 16; the warmest interglacials were 1, 5, 9 and 11. For matching of MIS numbers to named stages, see under the articles for those names.
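The parity convention can be captured in a couple of lines (a minimal sketch; the function name is invented for illustration):

```python
def mis_climate(stage: int) -> str:
    """Classify a Marine Isotope Stage by the numbering convention:
    odd stages are interglacials (MIS 1 is the Holocene), even stages are glacials."""
    return "interglacial" if stage % 2 == 1 else "glacial"

# The largest glacials and warmest interglacials named above:
assert all(mis_climate(s) == "glacial" for s in (2, 6, 12, 16))
assert all(mis_climate(s) == "interglacial" for s in (1, 5, 9, 11))
```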
Fauna
Both marine and continental faunas were essentially modern, but with many more large land mammals such as mammoths, mastodons, Diprotodon, Smilodon, tigers, lions, aurochs, short-faced bears, giant sloths, Gigantopithecus and others. Isolated landmasses such as Australia, Madagascar, New Zealand and islands in the Pacific saw the evolution of large birds and even reptiles, such as the elephant bird, moa, Haast's eagle, Quinkana, Megalania and Meiolania.
The severe climatic changes during the Ice Age had major impacts on the fauna and flora. With each advance of the ice, large areas of the continents became depopulated, and plants and animals retreating southwards in front of the advancing glacier faced tremendous stress. The most severe stress resulted from drastic climatic changes, reduced living space, and curtailed food supply. A major extinction event of large mammals (megafauna), which included mammoths, mastodons, saber-toothed cats, glyptodons, the woolly rhinoceros, various giraffids such as Sivatherium, ground sloths, Irish elk, cave lions, cave bears, gomphotheres, American lions, dire wolves, and short-faced bears, began late in the Pleistocene and continued into the Holocene. Neanderthals also became extinct during this period. At the end of the last ice age, cold-blooded animals, smaller mammals like wood mice, migratory birds, and swifter animals like whitetail deer had replaced the megafauna and migrated north. Late Pleistocene bighorn sheep were more slender and had longer legs than their descendants today. Scientists believe that the change in predator fauna after the late Pleistocene extinctions resulted in a change of body shape as the species adapted for increased power rather than speed.
The extinctions hardly affected Africa but were especially severe in North America where native horses and camels were wiped out.
Asian land mammal ages (ALMA) include Zhoukoudianian, Nihewanian, and Yushean.
European land mammal ages (ELMA) include the Villafranchian, Galerian, and Aurelian.
North American land mammal ages (NALMA) include Blancan (4.75–1.8), Irvingtonian (1.8–0.24) and Rancholabrean (0.24–0.01) in millions of years. The Blancan extends significantly back into the Pliocene.
South American land mammal ages (SALMA) include Uquian (2.5–1.5), Ensenadan (1.5–0.3) and Lujanian (0.3–0.01) in millions of years. The Uquian previously extended significantly back into the Pliocene, although the new definition places it entirely within the Pleistocene.
In July 2018, a team of Russian scientists in collaboration with Princeton University announced that they had brought two female nematodes frozen in permafrost, from around 42,000 years ago, back to life. The two nematodes, at the time, were the oldest confirmed living animals on the planet.
Humans
The evolution of anatomically modern humans took place during the Pleistocene. At the beginning of the Pleistocene, Paranthropus species were still present, as well as early human ancestors, but during the Lower Palaeolithic they disappeared, and the only hominin species found in the fossil record for much of the Pleistocene is Homo erectus. Acheulean lithics appear along with Homo erectus, some 1.8 million years ago, replacing the more primitive Oldowan industry used by Australopithecus garhi and by the earliest species of Homo. The Middle Paleolithic saw more varied speciation within Homo, including the appearance of Homo sapiens about 300,000 years ago. Artifacts associated with modern human behavior are unambiguously attested starting 40,000–50,000 years ago.
According to mitochondrial timing techniques, modern humans migrated from Africa after the Riss glaciation in the Middle Palaeolithic during the Eemian Stage, spreading all over the ice-free world during the late Pleistocene. A 2005 study posits that humans in this migration interbred with archaic human forms already outside of Africa by the late Pleistocene, incorporating archaic human genetic material into the modern human gene pool.
See also
Climate change
Pleistocene megafauna
Pleistocene Park
Quaternary glaciation
External links
Late Pleistocene environments of the southern high plains, 1975, edited by Wendorf and Hester.
Pleistocene Microfossils: 50+ images of Foraminifera
Stepanchuk V.N., Sapozhnykov I.V. Nature and Man in the Pleistocene of Ukraine. 2010
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016).
Pleistocene Park: Conservation Project to Restore a Pleistocene Ecology and Protect Permafrost Soils in Northern Siberia
Time management

Time management is the process of planning and exercising conscious control of time spent on specific activities, especially to increase effectiveness, efficiency and productivity.
Time management involves demands relating to work, social life, family, hobbies, personal interests and commitments. Using time effectively gives people more choices in managing activities. Time management may be aided by a range of skills, tools and techniques, especially when accomplishing specific tasks, projects and goals complying with a due date.
Initially, the term time management encompassed only business and work activities, but eventually the term broadened to include personal activities as well. A time management system is a designed combination of processes, tools, techniques and methods. Time management is usually a necessity in managing projects, as it determines the project completion time and scope.
Cultural views of time management
Differences in the way a culture views time can affect the way their time is managed. For example, a linear time view is a way of conceiving time as flowing from one moment to the next in a linear fashion. This linear perception of time is predominant in America along with most Northern European countries, such as Germany, Switzerland and England. People in these cultures tend to place a large value on productive time management and tend to avoid decisions or actions that would result in wasted time. This linear view of time correlates to these cultures being more "monochronic", or preferring to do only one thing at a time.
Another cultural time view is the multi-active time view. In multi-active cultures, most people feel that the more activities or tasks being done at once the better, and that this creates a sense of happiness. Multi-active cultures are "polychronic", preferring to do multiple tasks at once. This multi-active time view is prominent in most Southern European countries such as Spain, Portugal and Italy. In these cultures, people often tend to spend time on things they deem to be more important, such as placing a high importance on finishing social conversations. In business environments, they often pay little attention to how long meetings last and instead focus on having high-quality meetings. In general, the cultural focus tends to be on synergy and creativity over efficiency.
A final cultural time view is a cyclical time view. In cyclical cultures, time is considered neither linear nor event related. Because days, months, years, seasons, and events happen in regular repetitive occurrences, time is viewed as cyclical. In this view, time is not seen as wasted because it will always come back later, hence there is an unlimited amount of it. This cyclical time view is prevalent throughout most countries in Asia, including Japan and China. It is more important in cultures with cyclical concepts of time to focus on completing tasks correctly, thus most people will spend more time thinking about decisions and the impact they will have, before acting on their plans. Most people in cyclical cultures tend to understand that other cultures have different perspectives of time and are cognizant of this when acting on a global stage.
Neuropsychology
Excessive and chronic inability to manage time effectively may result from attention deficit hyperactivity disorder (ADHD). Diagnostic criteria include a sense of underachievement, difficulty getting organized, trouble getting started, trouble managing many simultaneous projects, and trouble with follow-through.
Setting priorities and goals
Goals are recorded and may be broken down into a project, an action plan or a simple task list. For individual tasks or for goals, an importance rating may be established. Deadlines may be set and priorities assigned. This process results in a plan with a task list, schedule or calendar of activities. Authors may recommend daily, weekly, monthly or other planning periods, associated with different scopes of planning or review. This is done in various ways, as follows:
ABC analysis
The ABC method for time management, developed by Alan Lakein, involves categorizing tasks into three labels: A, B, and C.
A Tasks: These are the highest priority and most urgent tasks. They include work that must be completed promptly, such as projects with a deadline.
B Tasks: These tasks are important but not necessarily associated with a specific deadline. They should be completed as soon as possible.
C Tasks: These are the least important tasks. They can be done when time permits and do not require immediate attention.
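A minimal sketch of the labeling in code (the Task type and the sample tasks are hypothetical, not part of Lakein's method):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    label: str  # "A" = urgent, highest priority; "B" = important; "C" = when time permits

def work_order(tasks: list[Task]) -> list[Task]:
    """Sort so that all A tasks come before B tasks, and B tasks before C tasks."""
    return sorted(tasks, key=lambda t: t.label)

tasks = [
    Task("file expense report", "C"),
    Task("finish project due today", "A"),
    Task("draft next week's plan", "B"),
]
for t in work_order(tasks):
    print(t.label, t.name)  # prints A, then B, then C
```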
Pareto analysis
The Pareto principle is the idea that 80% of consequences come from 20% of causes. Applied to productivity, it means that 80% of results can be achieved by doing 20% of tasks. If productivity is the aim of time management, then these tasks should be prioritized higher.
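As a sketch of what that prioritization might look like in code (the value scores and task names are invented for illustration):

```python
def pareto_slice(task_values: dict[str, float], fraction: float = 0.2) -> list[str]:
    """Return the top `fraction` of tasks ranked by estimated value, on the
    assumption that this slice yields the bulk of the results."""
    ranked = sorted(task_values, key=task_values.get, reverse=True)
    cutoff = max(1, round(len(ranked) * fraction))
    return ranked[:cutoff]

scores = {"ship release": 9.0, "fix build": 8.5, "refactor tests": 4.0,
          "inbox zero": 2.0, "tweak fonts": 1.0}
print(pareto_slice(scores))  # ['ship release'] -- the top 20% of five tasks
```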
The Eisenhower Method
The "Eisenhower Method" or "Eisenhower Principle" is a method that utilizes the principles of importance and urgency to organize priorities and workload. This method stems from a quote attributed to Dwight D. Eisenhower: "I have two kinds of problems, the urgent and the important. The urgent are not important, and the important are never urgent." Eisenhower did not claim this insight for his own, but attributed it to an (unnamed) "former college president."
Using the Eisenhower Decision Principle, tasks are evaluated using the criteria important/unimportant and urgent/not urgent, and then placed in the corresponding quadrants of an Eisenhower Matrix (also known as an "Eisenhower Box" or "Eisenhower Decision Matrix"). Tasks in the quadrants are then handled as follows.
Important/Urgent quadrant tasks are done immediately and personally e.g. crises, deadlines, problems.
Important/Not Urgent quadrant tasks get an end date and are done personally, e.g. relationships, planning, recreation.
Unimportant/Urgent quadrant tasks are delegated, e.g. interruptions, meetings, activities.
Unimportant/Not Urgent quadrant tasks are dropped, e.g. time wasters, pleasant activities, trivia.
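The four handling rules reduce to a simple dispatch, sketched here (illustrative only; the function name is made up):

```python
def eisenhower(important: bool, urgent: bool) -> str:
    """Map a task's importance and urgency to the quadrant rules above."""
    if important and urgent:
        return "do immediately and personally"
    if important:
        return "set an end date; do personally"
    if urgent:
        return "delegate"
    return "drop"

print(eisenhower(important=True, urgent=False))  # set an end date; do personally
```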
Implementation of goals
A task list (also called a to-do list or "things-to-do") is a list of tasks to be completed such as chores or steps toward completing a project. It is an inventory tool which serves as an alternative or supplement to memory.
Task lists are used in self-management, business management, project management and software development. It may involve more than one list.
When one of the items on a task list is accomplished, the task is checked or crossed off. The traditional method is to write these on a piece of paper with a pen or pencil, usually on a note pad or clip-board. Task lists can also have the form of paper or software checklists.
Writer Julie Morgenstern suggests "do's and don'ts" of time management that include:
Map out everything that is important, by making a task list.
Create "an oasis of time" for one to manage.
Say "No".
Set priorities.
Do not drop everything.
Do not think a critical task will get done in one's spare time.
Numerous digital equivalents are now available, including personal information management (PIM) applications and most PDAs. There are also several web-based task list applications, many of which are free.
Task list organization
Task lists are often diarized and tiered. The simplest tiered system includes a general to-do list (or task-holding file) to record all the tasks the person needs to accomplish and a daily to-do list which is created each day by transferring tasks from the general to-do list. An alternative is to create a "not-to-do list", to avoid unnecessary tasks.
Task lists are often prioritized in the following ways.
A daily list of things to do, numbered in the order of their importance and done in that order one at a time as daily time allows, is attributed to consultant Ivy Lee (1877–1934) as the most profitable advice received by Charles M. Schwab (1862–1939), president of the Bethlehem Steel Corporation.
An early advocate of "ABC" prioritization was Alan Lakein, in 1973. In his system "A" items were the most important ("A-1" the most important within that group), "B" next most important, "C" least important.
A particular method of applying the ABC method assigns "A" to tasks to be done within a day, "B" a week, and "C" a month.
To prioritize a daily task list, one either records the tasks in the order of highest priority, or assigns them a number after they are listed ("1" for highest priority, "2" for second highest priority, etc.) which indicates in which order to execute the tasks. The latter method is generally faster, allowing the tasks to be recorded more quickly.
Another way of prioritizing compulsory tasks (group A) is to put the most unpleasant one first. When it is done, the rest of the list feels easier. Groups B and C can benefit from the same idea, but instead of doing the first task (which is the most unpleasant) right away, it gives motivation to do other tasks from the list to avoid the first one.
Various writers have stressed potential difficulties with to-do lists such as the following.
Management of the list can take over from implementing it. This can become a form of procrastination, in which the planning activity is prolonged instead of acted upon, akin to analysis paralysis. As with any activity, there is a point of diminishing returns.
To remain flexible, a task system must allow for disaster. A company must be ready for a disaster; even a small one, if no one has made time for the situation, can metastasize and potentially damage the company.
To avoid getting stuck in a wasteful pattern, the task system should also include regular (monthly, semi-annual, and annual) planning and system-evaluation sessions, to weed out inefficiencies and ensure the user is headed in the direction he or she truly desires.
If some time is not regularly spent on achieving long-range goals, the individual may get stuck in a perpetual holding pattern on short-term plans, like staying at a particular job much longer than originally planned.
Software applications
Many companies use time tracking software to track an employee's working time, billable hours, etc., e.g. law practice management software.
Many software products for time management support multiple users. They allow the person to give tasks to other users and use the software for communication and to prioritize tasks.
Task-list applications may be thought of as lightweight personal information manager or project management software.
Modern task list applications may have built-in task hierarchy (tasks are composed of subtasks which again may contain subtasks), may support multiple methods of filtering and ordering the list of tasks, and may allow one to associate arbitrarily long notes for each task.
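Such a hierarchy might be modeled as a recursive structure, sketched below (the field names are illustrative, not drawn from any particular application):

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    title: str
    note: str = ""                                   # arbitrarily long free-text note
    subtasks: list["TaskNode"] = field(default_factory=list)

    def count(self) -> int:
        """Total number of tasks in this subtree, including this task itself."""
        return 1 + sum(child.count() for child in self.subtasks)

plan = TaskNode("release v2", subtasks=[
    TaskNode("write changelog"),
    TaskNode("test", subtasks=[TaskNode("unit"), TaskNode("integration")]),
])
print(plan.count())  # 5
```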
Time management systems
Time management systems often include a time clock or web-based application used to track an employee's work hours. Time management systems give employers insights into their workforce, allowing them to see, plan and manage employees' time. Doing so allows employers to manage labor costs and increase productivity. A time management system automates processes, which eliminates paperwork and tedious tasks.
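The core bookkeeping behind such a system is simple; the toy sketch below (hypothetical names, ignoring authentication, persistence and payroll rules) pairs clock-ins with clock-outs and accumulates worked time per employee:

```python
from datetime import datetime, timedelta

class TimeClock:
    """Minimal clock-in/clock-out tracker."""
    def __init__(self) -> None:
        self.open_shifts: dict[str, datetime] = {}   # employee -> shift start
        self.worked: dict[str, timedelta] = {}       # employee -> accumulated time

    def clock_in(self, employee: str) -> None:
        self.open_shifts[employee] = datetime.now()

    def clock_out(self, employee: str) -> None:
        start = self.open_shifts.pop(employee)       # KeyError if never clocked in
        self.worked[employee] = self.worked.get(employee, timedelta()) + (datetime.now() - start)

clock = TimeClock()
clock.clock_in("alice")
clock.clock_out("alice")
print(clock.worked["alice"])
```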
GTD (Getting Things Done)
The Getting Things Done method, created by David Allen, is to finish small tasks immediately and to divide large tasks into smaller tasks that can be started now. The thrust of GTD is to encourage the user to get their tasks and ideas out of their head, onto paper and organized as quickly as possible, so they are easy to see and manage. "The truth is, it takes more energy to keep something inside your head than outside," says Allen.
Pomodoro
Francesco Cirillo's "Pomodoro Technique" was originally conceived in the late 1980s and gradually refined until it was later defined in 1992. The technique is named after the tomato-shaped (pomodoro is Italian for tomato) kitchen timer that Cirillo initially used during his time at university. The "Pomodoro" is described as the fundamental metric of time within the technique and is traditionally defined as being 30 minutes long, consisting of 25 minutes of work and 5 minutes of break time. Cirillo also recommends a longer break of 15 to 30 minutes after every four Pomodoros. Through experimentation involving various workgroups and mentoring activities, Cirillo determined the "ideal Pomodoro" to be 20–35 minutes long.
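A bare-bones loop implementing the traditional 25/5 split, with a longer break after every fourth Pomodoro, might look like this (a sketch only; pass a tiny minute value to step through the schedule without waiting):

```python
import time

WORK, SHORT_BREAK, LONG_BREAK = 25, 5, 15  # minutes, per the traditional definition

def pomodoro(cycles: int = 4, minute: float = 60.0) -> None:
    """Run `cycles` Pomodoros of 25 min work and 5 min rest,
    with a longer break after every fourth one."""
    for i in range(1, cycles + 1):
        print(f"Pomodoro {i}: work for {WORK} min")
        time.sleep(WORK * minute)
        rest = LONG_BREAK if i % 4 == 0 else SHORT_BREAK
        print(f"break for {rest} min")
        time.sleep(rest * minute)

# pomodoro(minute=0.01)  # uncomment to try the loop quickly
```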
Related concepts
Time management is related to the following concepts.
Return on time invested: Effective time management is essential for maximizing Return on Time Invested (ROTI). By prioritizing tasks and organizing schedules, individuals can ensure that time is allocated to activities yielding the highest value.
Project management: Time management can be considered to be a project management subset and is more commonly known as project planning and project scheduling. Time management has also been identified as one of the core functions identified in project management.
Attention management relates to the management of cognitive resources, and in particular, the time that humans allocate their mind (and organize the minds of their employees) to conduct some activities.
Timeblocking is a time management strategy that specifically advocates for allocating chunks of time to dedicated tasks in order to promote deeper focus and productivity.
See also
Attention management
Chronemics
Goal setting
Interruption science
Order
Procrastination
Professional organizing
Project management
Prospective memory
Punctuality
Task management
Time perception
Time-tracking software
Workforce management
Further reading
Burkeman, Oliver (2021). Four Thousand Weeks: Time Management for Mortals. Farrar, Straus and Giroux. ISBN 978-0374159122.
Autarky

Autarky is the characteristic of self-sufficiency, usually applied to societies, communities, states, and their economic systems.
Autarky as an ideology or economic approach has been attempted by a range of political ideologies and movements, particularly leftist ones like African socialism, mutualism, war communism, communalism, swadeshi, syndicalism (especially anarcho-syndicalism), and left-wing populism, generally in an effort to build alternative economic structures or to control resources against structures a particular movement views as hostile. Conservative, centrist and nationalist movements have also adopted autarky, generally on a more limited scale, to develop a particular industry, to gain independence from other national entities or to preserve part of an existing social order.
Proponents of autarky have argued for national self-sufficiency to reduce foreign economic, political and cultural influences, as well as to promote international peace. Economists are generally supportive of free trade. There is a broad consensus among economists that protectionism has a negative effect on economic growth and economic welfare, while free trade and the reduction of trade barriers has a positive effect on economic growth and economic stability.
Autarky may be a policy of a state or some other type of entity when it seeks to be self-sufficient as a whole, but it can also be limited to a narrow field such as possession of a key raw material. Some countries have a policy of autarky with respect to foodstuffs (South Korea, for example) and water, for national-security reasons. Autarky can result from economic isolation or from external circumstances in which a state or other entity reverts to localized production when it lacks currency or excess production to trade with the outside world.
Etymology
The word autarky is from the Ancient Greek word αὐτάρκεια (autárkeia), which means "self-sufficiency" (derived from αὐτο-, "self", and ἀρκέω, "to suffice"). In Stoicism the concept of autarky represents independence from anything external, including independence from personal relationships, so as to immunize oneself against the vagaries of fortune. The Stoic sage is autarkic by being dependent only on his own virtue. In Epicureanism the concept of autarky represents having the fewest possible requirements for living a life of pleasure, free of pain (aponia).
Lexico, whose content is provided by the same publisher as that of the Oxford English Dictionary, says that autarky is a variant spelling of and pronounced the same as autarchy and that autarchy is another term for autocracy.
History
Ancient and medieval
Early state societies that can be regarded as autarkic include nomadic pastoralism and palace economy, though over time these tend towards becoming less self-sufficient and more interconnected. The late Bronze Age, for example, saw formerly self-sufficient palace economies rely more heavily on trade, which may have been a contributing factor to the eventual Bronze Age Collapse when multiple crises hit those systems at once. After that collapse, the ideal of autarkeia formed a part of emerging Greek political culture, emphasizing economic self-sufficiency and local self-rule.
The populist Chinese philosophy of Agriculturalism, prominent in the Spring and Autumn and Warring States periods, supported egalitarian, self-sufficient societies as an antidote to rampant war and corruption.
During the Late Roman Empire, some rebellions and communities pursued autarky as a reaction both to upheaval and to counter imperial power. A prominent example is the Bacaudae, who repeatedly rebelled against the empire and "formed self-governing communities" with their own internal economy and coinage.
Medieval communes combined an attempt at overall economic self-sufficiency through the use of common lands and resources with the use of mutual defense pacts, neighborhood assemblies and organized militias to preserve local autonomy against the depredations of the local nobility. Many of these communes later became trading powers such as the Hanseatic League. In some cases, communal village economies maintained their own debt system as part of a self-sufficient economy and to avoid reliance on possibly hostile aristocratic or business interests. The trend toward "local self-sufficiency" increased after the Black Plague, initially as a reaction to the impact of the epidemic and later as a way for communes and city states to maintain power against the nobility.
There is considerable debate about how autarkic the cultures that resisted the spread of early capitalism actually were. Golden Age pirate communities have been dubbed both heavily autarkic societies, in which "the marauders...lived in small, self-contained democracies", and an "anti-autarky", due to their dependence on raiding.
While rarer among imperial states, some autarkies did occur during specific time periods. The Ming dynasty, during its earlier, more isolationist period, kept a closed economy that prohibited outside trade and focused on centralized distribution of goods produced in localized farms and workshops. A hierarchy of bureaucrats oversaw the distribution of these resources from central depots, including a massive one located in the Forbidden City. That depot was, at the time, the largest logistical base in the world. The Incan Empire also maintained a system of society-wide autarky based on community levies of specific goods and "supply on command".
19th and early 20th centuries
In some areas of the antebellum South, the enslaved and free black populations forged self-sufficient economies in an effort to avoid reliance on the larger economy controlled by the planter aristocracy. In eastern North Carolina maroon communities, often based in swampy areas, used a combination of agriculture and fishing to forge a "hidden economy" and secure survival. The relative self-reliance of these maritime African-American populations provided the basis for a strongly abolitionist political culture that made increasingly radical demands after the start of the Civil War. Due to tense relations with some Union commanders and political factions during and after that war, these communities "focused their organizing efforts on developing their own institutions, their own sense of self-reliance, and their own political strength".
Autarkic ambitions can also be seen in the populist backlash to the exploitations of free trade in the late 19th century and in many early utopian socialist movements. Mutual aid societies like the Grange and Sovereigns of Industry attempted to set up self-sufficient economies (with varying degrees of success) in an effort to be less dependent on what they saw as an exploitative economic system and to generate more power to push for reforms.
Early socialist movements used these autarkic efforts to build their base with institutions like the Bourse de travail, socialist canteens and food assistance. These played a major role in securing workers' loyalty and building those parties into increasingly powerful institutions (especially in Europe) throughout the late 19th and early 20th centuries. Through these cooperatives, "workers bought Socialist bread and Socialist shoes, drank Socialist beer, arranged for Socialist vacations and obtained a Socialist education."
Local and regional farming autarkies in many areas of Africa and Southeast Asia were displaced by European colonial administrations in the late 19th and early 20th centuries, which sought to push smallholder villages into larger plantations that, while less productive, were easier to control. The self-sufficient communities and societies ended by colonialism were later cited as a useful example by African anarchists in the late 20th century.
Communist movements embraced or dismissed autarky as a goal at different times. In her survey of anarchism in the late 1800s, Voltairine de Cleyre summarized the autarkic goals of early anarchist socialists and communists as "small, independent, self-resourceful, freely-operating communes". In particular, Peter Kropotkin advocated local and regional autarky integrating agriculture and industry, instead of the international division of labor. His work repeatedly held up communities "that needed neither aid nor protection from without" as a more resilient model.
Some socialist communities, like Charles Fourier's phalansteries, strove for self-sufficiency. The early USSR during the Russian Civil War pursued a self-sufficient economy under war communism, but it later pursued international trade vigorously under the New Economic Policy. However, while the Soviet government during the latter period encouraged international trade, it also permitted and even encouraged local autarkies in many peasant villages.
Sometimes leftist groups clashed over autarkic projects. During the Spanish Civil War, the anarcho-syndicalist CNT and the socialist UGT had created economic cooperatives in the Levante that they claimed were "managing the economic life of the region independent of the government". But communist factions responded by cracking down on these cooperatives in an attempt to place economic control back in the hands of the central government.
Some right-wing totalitarian governments have claimed autarky as a goal, developing national industry and imposing high tariffs, but have crushed other autarkic movements and often engaged in extensive outside economic activity. In 1921, Italian fascists attacked existing left-wing autarkic projects at the behest of large landowners, destroying roughly 119 labor chambers, 107 cooperatives and 83 peasant offices in that year alone. Nazi Germany, under economics minister Hjalmar Schacht and later Walther Funk, still pursued major international trade, albeit under a different system, to escape the terms of the Treaty of Versailles, satisfy business elites and prepare for war. The regime continued to conduct trade with countries such as the United States, maintaining connections with major companies like IBM and Coca-Cola.
After World War II
Economic self-sufficiency was pursued as a goal by some members of the Non-Aligned Movement, such as India under Jawaharlal Nehru, under the ideology of Swadeshi, and Tanzania, under the ideology of Ujamaa. That was partly an effort to escape the economic domination of both the United States and the Soviet Union while modernizing the countries' infrastructure.
In the case of Francoist Spain, autarky was both an effect of international sanctions after the Spanish Civil War (which ended in 1939) and the Second World War, and a product of the totalitarian nationalist ideology of the Falange. Post-war famine and misery lasted longer in Spain than in war-ravaged Europe. It was not until the capitalist reforms of the 1950s, and the rapprochement with the United States, that the Spanish economy recovered its 1935 levels and launched into the Spanish Miracle.
In the latter half of the 20th century, economists, especially in wealthier countries, backed the emerging Washington Consensus, overwhelmingly endorsing free trade while discouraging autarkic and socialist policies. These economists asserted that protectionism has a negative effect on economic growth and economic welfare, while free trade and the reduction of trade barriers have a positive effect on economic growth and economic stability. But economists from many developing nations, or who came from Marxian traditions, endorsed more autarkic ideas, including import substitution industrialization and dependency theory. The work of Celso Furtado, Raúl Prebisch and Samir Amin, especially Amin's semi-autarkic "autocentric" policies, proved influential in efforts by many countries in Latin America, Asia and Africa to stave off economic domination by developing more self-sufficient economies.
Small-scale autarkies were sometimes used by the Civil Rights Movement, such as in the case of the Montgomery Bus Boycott. Boycotters set up their own self-sufficient system of cheap or free transit to allow black residents to get to work and avoid using the then-segregated public systems in a successful effort to bring political pressure.
Autarkic efforts for food sovereignty also formed part of the civil rights movement. In the late 1960s, activist Fannie Lou Hamer was one of the founders of the Freedom Farms Cooperative, an effort to redistribute economic power and build self-sufficiency in Black communities. "When you've got 400 quarts of greens and gumbo soup canned for the winter, nobody can push you around or tell you what to say or do," Hamer said, summarizing the rationale for the cooperative. The efforts were extensively targeted by segregationist authorities and the far-right with measures ranging from economic pressure to outright violence.
After World War II, Autonomist efforts in Europe embraced local autarkic projects in an effort to craft anti-authoritarian left-wing spaces, especially influencing the social center and squatters' rights movements. Such efforts remain a common feature of Autonomist and anarchist movements on the continent today. The Micropolis social centre in Greece, for example, has gyms, restaurants, bars, meeting space and free distribution of food and resources.
Around 1970, the Black Panther Party moved away from orthodox communist internationalism towards "intercommunalism", a term coined by Huey P. Newton, "to retain a grasp on the local when the rest of radical thought seemed to be moving global". Intercommunalism drew from left-wing autarkic projects like free medical clinics and breakfast programs, "explicitly articulated as attempts to fill a void left by the failure of the federal government to provide resources as basic as food to black communities".
Describing his Communalist ideal, Murray Bookchin wrote that in a more liberated future "every community would approximate local or regional autarky".
The influential 1983 anarchist book bolo'bolo, by Hans Widmer, advocated the use of autarky among its utopian anti-capitalist communes (known as bolos), asserting "the power of the State is based on food supply. Only on the basis of a certain degree of autarky can the bolos enter into a network of exchange without being exploited". Widmer theorized that through "tactical autarky" such communes would be able to prevent the return of oppressive structures and a money economy.
Autarkic efforts to counter the forcible privatization of public resources and to maintain local self-sufficiency also formed a key part of alter-globalization efforts. In the Cochabamba Water War, Bolivians successfully opposed the privatization of their water system in order to keep the resource in public hands.
Contemporary
Today, national economic autarkies are relatively rare. A commonly-cited example is North Korea, based on the government ideology of Juche (self-reliance), which is concerned with maintaining its domestic localized economy in the face of its isolation. However, even North Korea has extensive trade with Russia, China, Syria, Iran, Vietnam, India and many countries in Europe and Africa. North Korea had to import food during a widespread famine in the 1990s.
Some consider Rojava, the autonomous northern region of Syria, a modern example at a societal level. Despite a key alliance with the United States, supporters consider the region largely cut off from international trade, facing multiple enemies, and striving for a society based on communalism. Rojava's government and constitution emphasize economic self-sufficiency directed by neighborhood and village councils. Rojavan society and economics are influenced by Bookchin's ideas, including the emphasis on local and regional self-governance. Under changes made in 2012, property and businesses belong to those who live in or use them towards these goals, while infrastructure, land and major resources are commons run by local and regional councils. Bookchin, however, was concerned about isolationist autarky closing off a community, and therefore always stressed the need for a balance between localism and globalism.
An example of a small, but true autarky is North Sentinel Island, whose native inhabitants refuse all contact with outsiders and live completely self-sufficient lives.
An example of a contemporary effort at localized autarky, incorporating the concept's history from black nationalism, Ujamaa, African-American socialism and the civil rights movement, is Cooperation Jackson, a movement aimed at creating a self-sufficient black working class economy in Jackson, Mississippi. The movement has aimed to secure land and build self-sufficient cooperatives and workplaces "to democratically transform the political economy of the city" and push back against gentrification. Cooperation Jackson also saw a gain in electoral political power when its involvement proved pivotal to the 2013 mayoral election of Chokwe Lumumba and the 2017 election of his son, Chokwe Antar Lumumba.
See also
Autarchism
Domestic sourcing
Robinson Crusoe
Swadeshi
References
Bibliography
External links
International trade
Self-sustainability
Patrilineality
Patrilineality, also known as the male line, the spear side or agnatic kinship, is a common kinship system in which an individual's family membership derives from and is recorded through their father's lineage. It generally involves the inheritance of property, rights, names, or titles by persons related through male kin. This is sometimes distinguished from cognate kinship, through the mother's lineage, also called the spindle side or the distaff side.
A patriline ("father line") is a person's father, and additional ancestors, as traced only through males.
In the Bible
In the Bible, family and tribal membership appears to be transmitted through the father. For example, a person is considered to be a priest or Levite, if his father is a priest or Levite, and the members of all the Twelve Tribes are called Israelites because their father is Israel (Jacob).
In the first lines of the New Testament, the descent of Jesus Christ from King David is counted through the male lineage.
Agnatic succession
Patrilineal or agnatic succession gives priority to or restricts inheritance of a throne or fief to male heirs descended from the original title holder through males only. Traditionally, agnatic succession is applied in determining the names and membership of European dynasties. The prevalent forms of dynastic succession in Europe, Asia and parts of Africa were male-preference primogeniture, agnatic primogeniture, or agnatic seniority until after World War II. The agnatic succession model, also known as Salic law, meant the total exclusion of women as hereditary monarchs and restricted succession to thrones and inheritance of fiefs or land to men in parts of medieval and later Europe. This form of strict agnatic inheritance has been officially revoked in all extant European monarchies except the Principality of Liechtenstein.
By the 21st century, most ongoing European monarchies had replaced their traditional agnatic succession with absolute primogeniture, meaning that the first child born to a monarch inherits the throne, regardless of the child's sex.
Genetic genealogy
The fact that human Y-chromosome DNA (Y-DNA) is paternally inherited enables patrilines and agnatic kinships of men to be traced through genetic analysis.
Y-chromosomal Adam (Y-MRCA) is the patrilineal most recent common ancestor from whom all Y-DNA in living men is descended. The identification of a very rare and previously unknown Y-chromosome variant in 2012 led researchers to estimate that Y-chromosomal Adam lived 338,000 years ago (237,000 to 581,000 years ago with 95% confidence), judging from molecular clock and genetic marker studies. Before this discovery, Y-chromosomal Adam was estimated to have lived much more recently, only tens of thousands of years ago.
See also
Agnatic seniority
Cadet branch
Derbfine
Family name
Historical inheritance systems
Hypodescent
Hyperdescent
Matrilineality
Matriname
Order of succession
Patricide
Patrilocal residence
Primogeniture
Royal and noble ranks
Y chromosome
References
External links
Kinship and descent
Patriarchy
Order of succession
Legend
A legend is a genre of folklore that consists of a narrative featuring human actions, believed or perceived to have taken place in human history. Narratives in this genre may demonstrate human values, and possess certain qualities that give the tale verisimilitude. Legend, for its active and passive participants, may include miracles. Legends may be transformed over time to keep them fresh and vital.
Many legends operate within the realm of uncertainty, never being entirely believed by the participants, but also never being resolutely doubted. Legends are sometimes distinguished from myths in that they concern human beings as the main characters and do not necessarily have supernatural origins, and sometimes in that they have some sort of historical basis whereas myths generally do not. The Brothers Grimm defined legend as "folktale historically grounded". A by-product of the "concern with human beings" is the long list of legendary creatures, leaving no "resolute doubt" that legends are "historically grounded."
A modern folklorist's professional definition of legend was proposed by Timothy R. Tangherlini in 1990:
Legend, typically, is a short (mono-) episodic, traditional, highly ecotypified historicized narrative performed in a conversational mode, reflecting on a psychological level a symbolic representation of folk belief and collective experiences and serving as a reaffirmation of commonly held values of the group to whose tradition it belongs.
Etymology and origin
Legend is a loanword from Old French that entered English usage . The Old French noun legende derives from the Medieval Latin legenda. In its early English-language usage, the word indicated a narrative of an event. The word legendary was originally a noun (introduced in the 1510s) meaning a collection or corpus of legends. This word changed to legendry, and legendary became the adjectival form.
By 1613, English-speaking Protestants began to use the word when they wished to imply that an event (especially the story of any saint not acknowledged in John Foxe's Actes and Monuments) was fictitious. Thus, legend gained its modern connotations of "undocumented" and "spurious", which distinguish it from the meaning of chronicle.
In 1866, Jacob Grimm described the fairy tale as "poetic, legend historic." Early scholars such as Friedrich Ranke and Will Erich Peuckert followed Grimm's example in focussing solely on the literary narrative, an approach that was enriched particularly after the 1960s by addressing questions of performance and the anthropological and psychological insights provided in considering legends' social context. Questions of categorising legends, in hopes of compiling a content-based series of categories along the lines of the Aarne–Thompson folktale index, provoked a search for a broader new synthesis.
In an early attempt at defining some basic questions operative in examining folk tales, Friedrich Ranke in 1925 characterised the folk legend as "a popular narrative with an objectively untrue imaginary content", a dismissive position that was subsequently largely abandoned.
Compared to the highly structured folktale, legend is comparatively amorphous, Helmut de Boor noted in 1928. The narrative content of legend is in realistic mode, rather than the wry irony of folktale; Wilhelm Heiske remarked on the similarity of motifs in legend and folktale and concluded that, in spite of its realistic mode, legend is not more historical than folktale.
In Einleitung in die Geschichtswissenschaft (1928), Ernst Bernheim asserted that a legend is simply a longstanding rumour. Gordon Allport credited the staying-power of some rumours to the persistent cultural state-of-mind that they embody and capsulise; thus "urban legends" are a feature of rumour. When William Hugh Jansen suggested that legends that disappear quickly were "short-term legends" and the persistent ones be termed "long-term legends", the distinction between legend and rumour was effectively obliterated, Tangherlini concluded.
Christian legenda
In a narrow Christian sense, legenda ("things to be read [on a certain day, in church]") were hagiographical accounts, often collected in a legendary. Because saints' lives are often included in many miracle stories, legend, in a wider sense, came to refer to any story that is set in a historical context, but that contains supernatural, divine or fantastic elements.
Oral tradition
History preserved orally through many generations often takes on a more narrative-based or mythological form over time, an example being the oral traditions of the African Great Lakes.
Related concepts
Hippolyte Delehaye distinguished legend from myth: "The legend, on the other hand, has, of necessity, some historical or topographical connection. It refers imaginary events to some real personage, or it localizes romantic stories in some definite spot."
From the moment a legend is retold as fiction, its authentic legendary qualities begin to fade and recede: in The Legend of Sleepy Hollow, Washington Irving transformed a local Hudson River Valley legend into a literary anecdote with "Gothic" overtones, which actually tended to diminish its character as genuine legend.
Stories that exceed the boundaries of "realism" are called "fables". For example, the talking animal formula of Aesop identifies his brief stories as fables, not legends. The parable of the Prodigal Son would be a legend if it were told as having actually happened to a specific son of a historical father. If it included a donkey that gave sage advice to the Prodigal Son, it would be a fable.
Legend may be transmitted orally, passed on person-to-person, or, in the original sense, through written text. Jacobus de Voragine's Legenda Aurea or "The Golden Legend" comprises a series of vitae or instructive biographical narratives, tied to the liturgical calendar of the Roman Catholic Church. They are presented as lives of the saints, but the profusion of miraculous happenings and above all their uncritical context are characteristics of hagiography. The Legenda was intended to inspire extemporized homilies and sermons appropriate to the saint of the day.
Urban legend
Urban legends are a modern genre of folklore that is rooted in local popular culture, usually comprising fictional stories that are often presented as true, with macabre or humorous elements. These legends can be used for entertainment purposes, as well as semi-serious explanations for seemingly-mysterious events, such as disappearances and strange objects.
The term "urban legend," as generally used by folklorists, has appeared in print since at least 1968. Jan Harold Brunvand, professor of English at the University of Utah, introduced the term to the general public in a series of popular books published beginning in 1981. Brunvand used his collection of legends, The Vanishing Hitchhiker: American Urban Legends & Their Meanings (1981) to make two points: first, that legends and folklore do not occur exclusively in so-called primitive or traditional societies, and second, that one could learn much about urban and modern culture by studying such tales.
See also
The Matter of Britain, Arthurian legend
Legendary saga
Legendary creature
Lists of legendary creatures
Narrative history
References
Folklore
Literary genres
Narratology
Traditional stories
Adventure fiction
Historical revisionism
In historiography, historical revisionism is the reinterpretation of a historical account. It usually involves challenging the orthodox (established, accepted or traditional) scholarly views or narratives regarding a historical event, timespan, or phenomenon by introducing contrary evidence or reinterpreting the motivations and decisions of the people involved. Revision of the historical record can reflect new discoveries of fact, evidence, and interpretation as they come to light. The process of historical revision is a common, necessary, and usually uncontroversial process which develops and refines the historical record in order to make it more complete and accurate.
One form of historical revisionism involves a reversal of older moral judgments. Revision in this fashion is a more controversial topic, and can include denial or distortion of the historical record yielding an illegitimate form of historical revisionism known as historical negationism (involving, for example, distrust of genuine documents or records or deliberate manipulation of statistical data to draw predetermined conclusions). This type of historical revisionism can present a re-interpretation of the moral meaning of the historical record. Negationists use the term revisionism to portray their efforts as legitimate historical inquiry; this is especially the case when revisionism relates to Holocaust denial.
Historical scholarship
Historical revisionism is the means by which the historical record, the history of a society, as understood in its collective memory, continually accounts for new facts and interpretations of the events that are commonly understood as history. The historian and American Historical Association member James M. McPherson has said:
In the field of historiography, the historian who works within the existing establishment of society, and who has produced a body of history books from which he or she can claim authority, usually benefits from the status quo. As such, the professional-historian paradigm is manifested as a denunciative stance towards any form of historical revisionism, whether of fact, of interpretation, or of both. The philosopher of science Thomas Kuhn noted that, in contrast to the quantifiable hard sciences, which are characterized by a single paradigm, the social sciences are characterized by several paradigms that derive from a "tradition of claims, counterclaims, and debates over [the] fundamentals" of research. On resistance to the works of revised history that present a culturally-comprehensive historical narrative of the US, incorporating the perspectives of black people, women, and the labour movement, the historian David Williams said:
After the Second World War, the study and production of history in the US was expanded by the G.I. Bill, whose funding allowed "a new and more broadly-based generation of scholars" with perspectives and interpretations drawn from the feminist movement, the Civil Rights Movement, and the American Indian Movement. That expansion and deepening of the pool of historians voided the existence of a definitive and universally-accepted history; the revisionist historian therefore presents the national public with a history that has been corrected and augmented with new facts, evidence, and interpretations of the historical record. In The Cycles of American History (1986), contrasting and comparing the US and the Soviet Union during the Cold War (1945–1991), the historian Arthur M. Schlesinger Jr. said:
Revisionist historians contest the mainstream or traditional view of historical events and raise views at odds with traditionalists, which must be freshly judged. Revisionist history is often practiced by those who are in the minority, such as feminist historians, ethnic minority historians, those working outside of mainstream academia in smaller and less known universities, or the youngest scholars: essentially, historians who have the most to gain and the least to lose in challenging the status quo. In the friction between the mainstream of accepted beliefs and the new perspectives of historical revisionism, received historical ideas are either changed, solidified, or clarified. If, over a period of time, the revisionist ideas become the new establishment status quo, a paradigm shift is said to have occurred. The historian Forrest McDonald is often critical of the turn that revisionism has taken but admits that the turmoil of 1960s America changed the way history was written:
Historians are influenced by the zeitgeist (spirit of the time), and the usually progressive changes to society, politics, and culture, such as occurred after the Second World War (1939–1945); in The Future of the Past (1989), the historian C. Vann Woodward said:
Developments in the academy, culture, and politics shaped the contemporary model of writing history, the accepted paradigm of historiography. The philosopher Karl Popper said that "each generation has its own troubles and problems, and, therefore, its own interests and its own point of view".
As the social, political, and cultural influences change a society, most historians revise and update their explanation of historical events. The old consensus, based upon limited evidence, might no longer be considered historically valid in explaining the particulars of cause and effect, of motivation and self-interest, that tell how and why the past occurred as it occurred; therefore, the historical record is revised to accord with the contemporary understanding of history. As such, in 1986, the historian John Hope Franklin described four stages in the historiography of the African experience of life in the US, which were based upon different models of historical consensus.
Negationism and denial
The historian Deborah Lipstadt (Denying the Holocaust: The Growing Assault on Truth and Memory, 1993), and the historians Michael Shermer and Alex Grobman (Denying History: Who Says the Holocaust Never Happened and Why Do They Say It?, 2002), distinguish between historical revisionism and historical negationism, the latter of which is a form of denialism. Lipstadt said that Holocaust deniers, such as Harry Elmer Barnes, disingenuously self-identify as "historical revisionists" in order to obscure their denialism as academic revision of the historical record.
As such, Lipstadt, Shermer, and Grobman said that legitimate historical revisionism entails the refinement of existing knowledge about a historical event, not a denial of the event itself, and that such refinement of history emerges from the examination of new, empirical evidence and from a re-examination, and consequent re-interpretation, of the existing documentary evidence. Legitimate historical revisionism, they said, acknowledges the existence of a "certain body of irrefutable evidence" and of a "convergence of evidence" which suggest that an event – such as the Black Death, American slavery, and the Holocaust – did occur, whereas the denialism of history rejects the entire foundation of historical evidence, which is a form of historical negationism.
Basis for historical revision
The process of historical revision involves updating the historical record to accommodate developments as they arise. The historical record may be revised for a number of academic reasons, including the following:
Access to new data/records
The release, discovery, or publicization of documents previously unknown may lead scholars to hold new views of well established events. For example, archived or sealed government records (often related to national security) will become available under the thirty-year rule and similar laws. Such documents can provide new sources and therefore new analyses of past events that will alter the historical perspective.
With the release of the ULTRA archives in the 1970s under the British thirty-year rule, much of the Allied high command's tactical decision-making process was re-evaluated, particularly the Battle of the Atlantic. Before the release of the ULTRA archives, there was much debate over whether Field Marshal Bernard Montgomery could have known that Arnhem was heavily garrisoned. With the release of the archives, which indicated that it was, the balance of the evidence swung in the direction of his detractors. The release of the ULTRA archives also forced a re-evaluation of the history of the electronic computer.
New sources in other languages
As more sources in other languages become available historians may review their theories in light of the new sources. The revision of the meaning of the Dark Ages is an example.
Developments in other fields of science
DNA analysis has had an impact in various areas of history, either confirming established historical theories or presenting new evidence that undermines the currently established historical explanation. Professor Andrew Sherratt, a British prehistorian, was responsible for introducing anthropological writings on the consumption of legal and illegal drugs, and for using those papers to explain certain aspects of prehistoric societies. Carbon dating, the examination of ice cores and tree rings, palynology, scanning electron microscope analysis of early metal samples, and the measurement of oxygen isotopes in bones have all provided new data in the last few decades with which to argue new hypotheses. Extracting ancient DNA allows historians to debate the meaning and importance of race and indeed of current identities.
Nationalism
For example, in schoolbooks' accounts of the history of Europe, it is possible to read about an event from completely different perspectives. In accounts of the Battle of Waterloo, most British, French, Dutch and German schoolbooks slant the battle to emphasise the importance of their own nation's contribution. Sometimes, the name of an event is used to convey a political or national perspective. For example, the same conflict between two English-speaking countries is known by two different names: the "American War of Independence" and the "American Revolutionary War". As perceptions of nationalism change, so do the areas of history that are driven by such ideas. Wars are contests between enemies, and postwar histories select the facts and interpretations to suit their internal needs. The Korean War, for example, has sharply different interpretations in the textbooks of the countries involved.
Culture
For example, as regionalism has regained some of its old prominence in British politics, some historians have suggested that the older studies of the English Civil War were centred on England and that, to understand the war, events that had previously been dismissed as peripheral should be given greater prominence. To emphasise this, revisionist historians have suggested that the English Civil War be treated as just one of a number of interlocking conflicts known as the Wars of the Three Kingdoms. Furthermore, as cultures develop, it may become strategically advantageous for some revision-minded groups to revise their public historical narrative in such a way as to either discover, or in rarer cases manufacture, a precedent which contemporary members of the given subcultures can use as a basis or rationale for reform or change.
Ideology
For example, in the 1940s, it became fashionable to see the English Civil War from a Marxist school of thought. In the words of Christopher Hill, "the Civil War was a class war." After World War II, the influence of Marxist interpretation waned in British academia and by the 1970s this view came under attack by a new school of revisionists and it has been largely overturned as a major mainstream explanation of the mid-17th-century conflict in England, Scotland, and Ireland.
Historical causation
Issues of causation in history are often revised with new research: for example, by the mid-20th century the status quo was to see the French Revolution as the result of the triumphant rise of a new middle class. Research in the 1960s prompted by revisionist historians like Alfred Cobban and François Furet revealed the social situation was much more complex, and the question of what caused the revolution is now closely debated.
Specific issues
Dark Ages
As non-Latin texts, such as Welsh, Gaelic and the Norse sagas, have been analysed and added to the canon of knowledge about the period, and as much more archaeological evidence has come to light, the period known as the Dark Ages has narrowed to the point that many historians no longer believe that such a term is useful. Moreover, the term "dark" implies not so much a void of culture and law as a lack of surviving source texts in mainland Europe. Many modern scholars who study the era tend to avoid the term altogether for its negative connotations and find it misleading and inaccurate for any part of the Middle Ages.
Feudalism
The concept of feudalism has been questioned. Revisionist scholars led by historian Elizabeth A. R. Brown have rejected the term.
Battle of Agincourt
Historians generally believe that the Battle of Agincourt was an engagement in which the English army, overwhelmingly outnumbered four to one by the French army, pulled off a stunning victory. This understanding was especially popularised by Shakespeare's play Henry V. However, recent research by Professor Anne Curry, using the original enrollment records, has brought into question this interpretation. Though her research is not finished, she has published her initial findings that the French outnumbered the English and the Welsh only by 12,000 to 8,000. If true, the numbers may have been exaggerated for patriotic reasons by the English.
New World discovery and European colonization of the Americas
In recounting the European colonization of the Americas, some history books of the past paid little attention to the indigenous peoples of the Americas, usually mentioning them only in passing and making no attempt to understand the events from their point of view. That was reflected in the description of Christopher Columbus as having discovered America. The portrayal of those events has since been revised to avoid the word "discovery."
In his 1990 revisionist book, The Conquest of Paradise: Christopher Columbus and the Columbian Legacy, Kirkpatrick Sale argued that Christopher Columbus was an imperialist bent on conquest from his first voyage. In a New York Times book review, historian and member of the Christopher Columbus Quincentenary Jubilee Committee William Hardy McNeill wrote about Sale:
he has set out to destroy the heroic image that earlier writers have transmitted to us. Mr. Sale makes Columbus out to be cruel, greedy and incompetent (even as a sailor), and a man who was perversely intent on abusing the natural paradise on which he intruded."
McNeill declares Sale's work to be "unhistorical, in the sense that [it] selects from the often-cloudy record of Columbus's actual motives and deeds what suits the researcher's 20th-century purposes." McNeill states that detractors and advocates of Columbus present a "sort of history [that] caricatures the complexity of human reality by turning Columbus into either a bloody ogre or a plaster saint, as the case may be."
New Qing history
Historians in China and from abroad long wrote that the Manchus who conquered China and established the Qing dynasty (1636–1912) adopted the customs and institutions of the Han Chinese dynasties that preceded them and were "sinicized", that is, absorbed into Chinese culture. In the 1990s, American historians explored Manchu language sources and newly accessible imperial archives, and discovered that the emperors retained their Manchu culture and that they regarded China proper as only one part of their larger empire. These scholars differ among themselves but agree on a major revision of the history of the Qing dynasty.
French Revolution
French attack formations in the Napoleonic wars
The military historian James R. Arnold argues:
Argentine Civil Wars
After the proclamation of the Argentine Republic in late 1861, its first de facto president, Bartolomé Mitre, wrote the first Argentine historiographical works: Historia de Belgrano y de la Independencia Argentina and Historia de San Martín y de la emancipación sudamericana. Although these were criticised by notable intellectuals such as Dalmacio Vélez Sarsfield and Juan Bautista Alberdi, and even by some colleagues like Adolfo Saldías, both works established a liberal-conservative bias in Argentine history, propagated through the National Academy of History established in 1893, that downplayed the role of the caudillos and gauchos.
During the Radical Civic Union government of Hipólito Yrigoyen, historians followed the revisionist view of anti-Mitrist politicians such as Carlos D'Amico, Ernesto Quesada and David Peña, and their theories reached the academy thanks to Dardo Corvalán Mendilharsu. Argentine historical revisionism reached its peak during the Peronist government. In 2011, the Manuel Dorrego National Institute of Argentine and Iberoamerican Historical Revisionism was established by the Secretary of Culture, but it suffered a rupture between 21st-century socialists and nationalists. Three weeks after the inauguration of Mauricio Macri, the institute was closed.
World War I
German guilt
In reaction to the orthodox interpretation enshrined in the Versailles Treaty, which declared that Germany was guilty of starting World War I, the self-described "revisionist" historians of the 1920s rejected that view and presented a complex causation in which several other countries were equally guilty. Intense debate continues among scholars.
Poor British and French military leadership
The military leadership of the British Army during World War I was frequently condemned as poor by historians and politicians for decades after the war ended. Common charges were that the generals commanding the army were blind to the realities of trench warfare, ignorant of the conditions of their men and unable to learn from their mistakes, thus causing enormous numbers of casualties ("lions led by donkeys"; see Thompson, P.A., Lions Led By Donkeys: Showing How Victory In The Great War Was Achieved By Those Who Made the Fewest Mistakes, T. Werner Laurie, 1927; and Bournes, John, "Lions Led By Donkeys", Centre for First World War Studies, University of Birmingham). However, during the 1960s, historians such as John Terraine began to challenge that interpretation. In recent years, as new documents have come forth and the passage of time has allowed for more objective analysis, historians such as Gary D. Sheffield and Richard Holmes have observed that the military leadership of the British Army on the Western Front had to cope with many problems that they could not control, such as a lack of adequate military communications, a problem that had not occurred before. Furthermore, military leadership improved throughout the war, culminating in the Hundred Days Offensive advance to victory in 1918. Some historians, even revisionists, still criticise the British High Command severely but are less inclined to portray the war in a simplistic manner, with brave troops being led by foolish officers.
There has been a similar movement regarding the French Army during the war with contributions by historians such as Anthony Clayton. Revisionists are far more likely to view commanders such as French General Ferdinand Foch, British General Douglas Haig and other figures, such as American John Pershing, in a sympathetic light.
Reconstruction in the United States
Revisionist historians of the Reconstruction era of the United States rejected the dominant Dunning School that stated that Black Americans were used by carpetbaggers, and instead stressed economic greed on the part of northern businessmen. Indeed, in recent years a "neoabolitionist" revisionism has become standard; it uses the moral standards of racial equality of the 19th century abolitionists to criticize racial policies. "Foner's book represents the mature and settled Revisionist perspective", historian Michael Perman has concluded regarding Eric Foner's Reconstruction: America's Unfinished Revolution, 1863–1877 (1988).
American business and "robber barons"
The role of American business and the alleged "robber barons" began to be revised in the 1930s. Termed "business revisionism" by Gabriel Kolko, historians such as Allan Nevins, and then Alfred D. Chandler emphasized the positive contributions of individuals who were previously pictured as villains. Peter Novick writes, "The argument that whatever the moral delinquencies of the robber barons, these were far outweighed by their decisive contributions to American military [and industrial] prowess, was frequently invoked by Allan Nevins."
Excess mortality in the Soviet Union under Stalin
Prior to the collapse of the Soviet Union and the archival revelations, Western historians estimated that the numbers killed by Stalin's regime were 20 million or higher. After the Soviet Union dissolved, evidence from the Soviet archives became available and provided information that led to a significant revision of death toll estimates for the Stalin regime, with estimates ranging from 3 million to 9 million. In post-1991 Russia, the KGB archives briefly remained open during the 1990s, which aided the creation of organisations such as Memorial, which engaged in research in the archives and in the search for secret mass burial grounds. After Putin came to power, however, access to the archives was restricted again and research in this area once again became politically unwelcome, culminating in the forcible shutdown of the organisation in 2021.
Soviet Union and Russia
The Soviet Union frequently resorted to changing its official history to suit changes in state policy, especially after splits in the Bolshevik leadership or changes of political alliances. The book History of the Communist Party of the Soviet Union (Bolsheviks) was subject to numerous such changes, reflecting the removal of Bolshevik leaders who had previously been trusted by Stalin but who did not support him unanimously. The Great Soviet Encyclopedia was also redacted frequently, with subscribers to the print edition receiving letters instructing them to cut out pages, for example those about Lavrentiy Beria or Nikolai Bukharin, and replace them with unrelated articles. Historic photos were also frequently edited to remove people who had later lost the trust of the Party.
The process of rewriting the history of the USSR and of post-1991 Russia was restarted in the 2010s after Russia's first attack on Ukraine, and it intensified after the 2022 full-scale invasion of Ukraine. School history books received significant changes reflecting the changes in the official historical narratives: for example, while the 2010 books openly mentioned that the decrease of life expectancy in the Soviet Union was caused by shortages and insufficient spending on public healthcare, the new 2023 books vaguely state that life expectancy generally increased and instead focus on unspecified "achievements in the sphere of education and science". In the chapters on Stalin, he is once again presented favourably, and any mentions of the repressions as a tragedy for ordinary Russians have disappeared. Similar changes were introduced in the chapters discussing the Soviet economy, the space program, Brezhnev, the collapse of the USSR, and perestroika and glasnost, where the phrase "freedom of speech" started to be used in scare quotes and presented as something harmful. The Soviet intervention in Afghanistan in 1979 is presented as a Soviet contribution to the fight against radical Islamism, completely contradicting both Soviet and post-Soviet narratives.
Also, since 2014, Russian law enforcement has prosecuted public statements that do not comply with the current official version of Russian history. Article 354.1 of the Criminal Code of Russia, which makes the "rehabilitation of Nazism" a crime, has been applied both to actual statements praising Nazism and to statements recalling Nazi–Soviet cooperation in 1939–1941 or Soviet war crimes committed in other countries. In some cases, Article 20.3 of the Code of the Russian Federation on Administrative Offenses is also applied.
Guilt for causing World War II
The orthodox interpretation blamed Nazi Germany and Imperial Japan for causing the war. Revisionist historians of World War II, notably Charles A. Beard, said the United States was partly to blame because it pressed the Japanese too hard in 1940 and 1941 and rejected compromises. Other notable contributions to this discussion include Charles Tansill, Back Door To War (Chicago, 1952); Frederic Sanborn, Design For War (New York, 1951); and David Hoggan, The Forced War (Costa Mesa, 1989). The British historian A. J. P. Taylor ignited a firestorm when he argued Hitler was an ineffective and inexperienced diplomat and did not deliberately set out to cause a world war.
Patrick Buchanan, an American paleoconservative pundit, argued that the Anglo–French guarantee in 1939 encouraged Poland not to seek a compromise over Danzig. He further argued that Britain and France were in no position to come to Poland's aid, and Hitler was offering the Poles an alliance in return. Buchanan argued the guarantee led the Polish government to transform a minor border dispute into a major world conflict, and handed Eastern Europe, including Poland, to Stalin. Buchanan also argued the guarantee ensured the country would be eventually invaded by the Soviet Union, as Stalin knew the British were in no position to declare war on the Soviet Union in 1939, due to their military weakness.
Atomic bombings of Hiroshima and Nagasaki
The atomic bombings of Hiroshima and Nagasaki have generated controversy and debate. Historians who accept President Harry Truman's reasoning, that dropping the atomic bombs was justified in order to force a Japanese surrender and end World War II, are known as "orthodox", while "revisionists" generally deny that the bombs were necessary. Some also claim that Truman knew they were not necessary but wanted to pressure the Soviet Union. These historians see Truman's decision as a major factor in starting the Cold War. They and others may also charge that Truman ignored or downplayed predictions of casualties.
Cold War
Historians debate the causes and responsibility for the Cold War. The "orthodox" view puts the major blame on the Soviet Union, while a "revisionist" view puts more responsibility on the United States.
Vietnam War
America in Vietnam (1978), by Guenter Lewy, is an example of historical revisionism that differs much from the popular view of the U.S. role in the Vietnam War (1955–75), and for it the author was both criticized and supported for belonging to the revisionist school on the history of the Vietnam War. Lewy's reinterpretation was the first book in a body of work by historians of the revisionist school about the geopolitical role and military behavior of the U.S. in Vietnam.
In the introduction, Lewy said:
Other reinterpretations of the historical record of the U.S. war in Vietnam, which offer alternative explanations for American behavior, include Why We Are in Vietnam (1982), by Norman Podhoretz, Triumph Forsaken: The Vietnam War, 1954–1965 (2006), by Mark Moyar, and Vietnam: The Necessary War (1999), by Michael Lind.
Chronological revisionism
It is generally accepted that the foundations of modern chronology were laid by the humanist Joseph Scaliger. Isaac Newton, in his work The Chronology of Ancient Kingdoms Amended, made one of the first attempts to revise the "Scaligerian chronology". In the twentieth century, the "revised chronology" of Immanuel Velikovsky stands out in this direction; it arguably initiated a wave of new broad interest in the revision of chronology.
In general, revisionist chronological theories either suggest halving the duration of the Christian era or consider certain historical periods to be erroneously dated; examples include Heribert Illig's phantom time hypothesis and the "New Chronology" of academician Anatoly Fomenko, a proposed revision of eras widely rejected by mainstream scholars as pseudoscience.
See also
Denial
Zionism
Dialectic
Dialectical research
Mea culpa
Official history
Pseudohistory
Holocaust denial
Holodomor denial
Historical negationism
Cambodian genocide denial
Post-publication peer review
Denial of the 2023 Hamas-led attack on Israel
Cases of revisionism
The 1619 Project, a revisionist look at American history with a focus on slavery and its legacy
Afrocentrism, historical scholarship with a focus on African peoples
Christ myth theory, a theory that Jesus never existed
Donation of Constantine, exposure of a forgery
Historical revision of the Inquisition
New Historians, a group of Israeli historians with alternative views about Israel's history
Revisionism (Spain)
Revisionist school of Islamic studies
References
Informational notes
Citations
Further reading
Banner Jr., James M. (2021). The Ever-Changing Past: Why All History Is Revisionist History. Yale University Press.
Burgess, Glenn (1990). "On Revisionism: An Analysis of Early Stuart Historiography in the 1970s and 1980s." Historical Journal, vol. 33, no. 3, pp. 609–627.
Comninel, George C. (1987). Rethinking the French Revolution: Marxism and the Revisionist Challenge. Verso.
Confino, Michael (2009). "The New Russian Historiography, and the Old—Some Considerations." History & Memory, vol. 21, no. 2, pp. 7–33.
Gaither, Milton (2012). "The Revisionists Revived: The Libertarian Historiography of Education." History of Education Quarterly, vol. 52, no. 4, pp. 488–505.
Jainchill, Andrew, and Samuel Moyn (2004). "French Democracy Between Totalitarianism and Solidarity: Pierre Rosanvallon and Revisionist Historiography." Journal of Modern History, vol. 76, no. 1, pp. 107–154.
Kopecek, Michal (2008). Past in the Making: Historical Revisionism in Central Europe After 1989. Central European University Press.
Lenin, V.I. (1908). "Marxism and Revisionism." Karl Marx—1818-1883 (symposium).
Markwick, Roger (2001). Rewriting History in Soviet Russia: The Politics of Revisionist Historiography 1956–1974. Springer.
Melosi, Martin V. (1983). "The Triumph of Revisionism: The Pearl Harbor Controversy, 1941-1982." Public Historian, vol. 5, no. 2, pp. 87–103.
Palmer, William (2010). "Aspects of Revision in History in Great Britain and the United States, 1920–1975." Historical Reflections/Réflexions Historiques, vol. 36, no. 1, pp. 17–32.
Riggenbach, Jeff (2009). Why American History Is Not What They Say: An Introduction to Revisionism. Auburn, Alabama: Ludwig von Mises Institute.
Rothbard, Murray N. (February 1976). "Revisionism and Libertarianism." Libertarian Forum, pp. 3–6.
Viola, Lynne (2002). "The Cold War in American Soviet Historiography and the End of the Soviet Union." Russian Review, vol. 61, no. 1, pp. 25–34.
External links
Historiography
Geomorphology
Geomorphology (from Ancient Greek γῆ, gê, 'earth'; μορφή, morphḗ, 'form'; and λόγος, lógos, 'study') is the scientific study of the origin and evolution of topographic and bathymetric features generated by physical, chemical or biological processes operating at or near Earth's surface. Geomorphologists seek to understand why landscapes look the way they do, to understand landform and terrain history and dynamics, and to predict changes through a combination of field observations, physical experiments and numerical modeling. Geomorphologists work within disciplines such as physical geography, geology, geodesy, engineering geology, archaeology, climatology, and geotechnical engineering. This broad base of interests contributes to many research styles and interests within the field.
Overview
Earth's surface is modified by a combination of surface processes that shape landscapes, and geologic processes that cause tectonic uplift and subsidence, and shape the coastal geography. Surface processes comprise the action of water, wind, ice, wildfire, and life on the surface of the Earth, along with chemical reactions that form soils and alter material properties, the stability and rate of change of topography under the force of gravity, and other factors, such as (in the very recent past) human alteration of the landscape. Many of these factors are strongly mediated by climate. Geologic processes include the uplift of mountain ranges, the growth of volcanoes, isostatic changes in land surface elevation (sometimes in response to surface processes), and the formation of deep sedimentary basins where the surface of the Earth drops and is filled with material eroded from other parts of the landscape. The Earth's surface and its topography therefore are an intersection of climatic, hydrologic, and biologic action with geologic processes, or alternatively stated, the intersection of the Earth's lithosphere with its hydrosphere, atmosphere, and biosphere.
The broad-scale topographies of the Earth illustrate this intersection of surface and subsurface action. Mountain belts are uplifted due to geologic processes. Denudation of these high uplifted regions produces sediment that is transported and deposited elsewhere within the landscape or off the coast. On progressively smaller scales, similar ideas apply, where individual landforms evolve in response to the balance of additive processes (uplift and deposition) and subtractive processes (subsidence and erosion). Often, these processes directly affect each other: ice sheets, water, and sediment are all loads that change topography through flexural isostasy. Topography can modify the local climate, for example through orographic precipitation, which in turn modifies the topography by changing the hydrologic regime in which it evolves. Many geomorphologists are particularly interested in the potential for feedbacks between climate and tectonics, mediated by geomorphic processes.
In addition to these broad-scale questions, geomorphologists address issues that are more specific or more local. Glacial geomorphologists investigate glacial deposits such as moraines, eskers, and proglacial lakes, as well as glacial erosional features, to build chronologies of both small glaciers and large ice sheets and understand their motions and effects upon the landscape. Fluvial geomorphologists focus on rivers, how they transport sediment, migrate across the landscape, cut into bedrock, respond to environmental and tectonic changes, and interact with humans. Soils geomorphologists investigate soil profiles and chemistry to learn about the history of a particular landscape and understand how climate, biota, and rock interact. Other geomorphologists study how hillslopes form and change. Still others investigate the relationships between ecology and geomorphology. Because geomorphology is defined to comprise everything related to the surface of the Earth and its modification, it is a broad field with many facets.
Geomorphologists use a wide range of techniques in their work. These may include fieldwork and field data collection, the interpretation of remotely sensed data, geochemical analyses, and the numerical modelling of the physics of landscapes. Geomorphologists may rely on geochronology, using dating methods to measure the rate of changes to the surface. Terrain measurement techniques are vital for describing the form of the Earth's surface quantitatively; they include differential GPS, remotely sensed digital terrain models, and laser scanning, and are used to quantify and study the surface and to generate illustrations and maps.
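To illustrate how such terrain measurements are put to quantitative use, the sketch below computes slope and aspect, two basic geomorphometric quantities, from a gridded digital terrain model. The grid here is synthetic and the 10 m cell size is an assumption chosen for the example; a real model would come from laser scanning or remote sensing.

```python
import numpy as np

# Synthetic digital terrain model: a smooth Gaussian hill on a
# 100 x 100 grid. The 10 m cell size is an assumption for this example.
cell = 10.0
x, y = np.meshgrid(np.arange(100) * cell, np.arange(100) * cell)
z = 50.0 * np.exp(-((x - 500.0) ** 2 + (y - 500.0) ** 2) / (2 * 300.0 ** 2))

# Finite-difference elevation gradients (np.gradient returns the
# derivative along axis 0, i.e. y, first, then along axis 1, i.e. x).
dzdy, dzdx = np.gradient(z, cell)

# Slope is the angle of steepest descent; aspect is its direction
# (one common convention: degrees clockwise from north).
slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360.0

print(f"maximum slope on the grid: {slope.max():.2f} degrees")
```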
Practical applications of geomorphology include hazard assessment (such as landslide prediction and mitigation), river control and stream restoration, and coastal protection.
Planetary geomorphology studies landforms on other terrestrial planets such as Mars. Indications of the effects of wind, fluvial, glacial, mass wasting, meteor impact, tectonic and volcanic processes are studied. This effort not only helps better understand the geologic and atmospheric history of those planets but also extends geomorphological study of the Earth. Planetary geomorphologists often use Earth analogues to aid in their study of surfaces of other planets.
History
Other than some notable exceptions in antiquity, geomorphology is a relatively young science, growing along with interest in other aspects of the earth sciences in the mid-19th century. This section provides a very brief outline of some of the major figures and events in its development.
Ancient geomorphology
The study of landforms and the evolution of the Earth's surface can be dated back to scholars of Classical Greece. In the 5th century BC, Greek historian Herodotus argued from observations of soils that the Nile delta was actively growing into the Mediterranean Sea, and estimated its age. In the 4th century BC, Greek philosopher Aristotle speculated that due to sediment transport into the sea, eventually those seas would fill while the land lowered. He claimed that this would mean that land and water would eventually swap places, whereupon the process would begin again in an endless cycle. The Encyclopedia of the Brethren of Purity published in Arabic at Basra during the 10th century also discussed the cyclical changing positions of land and sea with rocks breaking down and being washed into the sea, their sediment eventually rising to form new continents. The medieval Persian Muslim scholar Abū Rayhān al-Bīrūnī (973–1048), after observing rock formations at the mouths of rivers, hypothesized that the Indian Ocean once covered all of India. In his De Natura Fossilium of 1546, German metallurgist and mineralogist Georgius Agricola (1494–1555) wrote about erosion and natural weathering.
Another early theory of geomorphology was devised by Song dynasty Chinese scientist and statesman Shen Kuo (1031–1095). This was based on his observation of marine fossil shells in a geological stratum of a mountain hundreds of miles from the Pacific Ocean. Noticing bivalve shells running in a horizontal span along the cut section of a cliffside, he theorized that the cliff was once the prehistoric location of a seashore that had shifted hundreds of miles over the centuries. He inferred that the land was reshaped and formed by soil erosion of the mountains and by deposition of silt, after observing strange natural erosion of the Taihang Mountains and the Yandang Mountain near Wenzhou. Furthermore, he proposed a theory of gradual climate change over the centuries after ancient petrified bamboos were found preserved underground in the dry, northern climate zone of Yanzhou, which is now modern-day Yan'an, Shaanxi province. Previous Chinese authors also presented ideas about changing landforms. Scholar-official Du Yu (222–285) of the Western Jin dynasty predicted that two monumental stelae recording his achievements, one buried at the foot of a mountain and the other erected at the top, would eventually change their relative positions over time, as would hills and valleys. Daoist alchemist Ge Hong (284–364) created a fictional dialogue in which the immortal Magu explained that the territory of the East China Sea was once a land filled with mulberry trees.
Early modern geomorphology
The term geomorphology seems to have been first used by Laumann in an 1858 work written in German. Keith Tinkler has suggested that the word came into general use in English, German and French after John Wesley Powell and W. J. McGee used it during the International Geological Conference of 1891. John Edward Marr, in his The Scientific Study of Scenery, described his book as 'an Introductory Treatise on Geomorphology, a subject which has sprung from the union of Geology and Geography'.
An early popular geomorphic model was the geographical cycle or cycle of erosion model of broad-scale landscape evolution developed by William Morris Davis between 1884 and 1899. It was an elaboration of the uniformitarianism theory that had first been proposed by James Hutton (1726–1797). With regard to valley forms, for example, uniformitarianism posited a sequence in which a river runs through a flat terrain, gradually carving an increasingly deep valley, until the side valleys eventually erode, flattening the terrain again, though at a lower elevation. It was thought that tectonic uplift could then start the cycle over. In the decades following Davis's development of this idea, many of those studying geomorphology sought to fit their findings into this framework, known today as "Davisian". Davis's ideas are of historical importance, but have been largely superseded today, mainly due to their lack of predictive power and qualitative nature.
In the 1920s, Walther Penck developed an alternative model to Davis's. Penck thought that landform evolution was better described as an alternation between ongoing processes of uplift and denudation, as opposed to Davis's model of a single uplift followed by decay. He also emphasised that in many landscapes slope evolution occurs by backwearing of rocks, not by Davisian-style surface lowering, and his science tended to emphasise surface process over understanding in detail the surface history of a given locality. Penck was German, and during his lifetime his ideas were at times rejected vigorously by the English-speaking geomorphology community. His early death, Davis' dislike for his work, and his at-times-confusing writing style likely all contributed to this rejection.
Both Davis and Penck were trying to place the study of the evolution of the Earth's surface on a more generalized, globally relevant footing than it had been previously. In the early 19th century, authors – especially in Europe – had tended to attribute the form of landscapes to local climate, and in particular to the specific effects of glaciation and periglacial processes. In contrast, both Davis and Penck were seeking to emphasize the importance of evolution of landscapes through time and the generality of the Earth's surface processes across different landscapes under different conditions.
During the early 1900s, the study of regional-scale geomorphology was termed "physiography". Physiography later was considered to be a contraction of "physical" and "geography", and therefore synonymous with physical geography, and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline. Some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with "pure morphology", separated from its geological heritage. In the period following World War II, the emergence of process, climatic, and quantitative studies led to a preference by many earth scientists for the term "geomorphology" in order to suggest an analytical approach to landscapes rather than a descriptive one.
Climatic geomorphology
During the age of New Imperialism in the late 19th century, European explorers and scientists traveled across the globe bringing descriptions of landscapes and landforms. As geographical knowledge increased over time, these observations were systematized in a search for regional patterns. Climate thus emerged as the prime factor for explaining landform distribution at a grand scale. The rise of climatic geomorphology was foreshadowed by the work of Wladimir Köppen, Vasily Dokuchaev and Andreas Schimper. William Morris Davis, the leading geomorphologist of his time, recognized the role of climate by complementing his "normal" temperate climate cycle of erosion with arid and glacial ones. Nevertheless, interest in climatic geomorphology was also a reaction against Davisian geomorphology, which by the mid-20th century was considered both un-innovative and dubious. Early climatic geomorphology developed primarily in continental Europe, while in the English-speaking world the tendency was not explicit until L.C. Peltier's 1950 publication on a periglacial cycle of erosion.
Climatic geomorphology was criticized in a 1969 review article by process geomorphologist D.R. Stoddart. The criticism by Stoddart proved "devastating", sparking a decline in the popularity of climatic geomorphology in the late 20th century. Stoddart criticized climatic geomorphology for applying supposedly "trivial" methodologies in establishing landform differences between morphoclimatic zones, for being linked to Davisian geomorphology, and for allegedly neglecting the fact that the physical laws governing processes are the same across the globe. In addition, some conceptions of climatic geomorphology, like the notion that chemical weathering is more rapid in tropical climates than in cold climates, proved not to be straightforwardly true.
Quantitative and process geomorphology
Geomorphology began to be put on a solid quantitative footing in the middle of the 20th century. Following the early work of Grove Karl Gilbert around the turn of the 20th century, a group of mainly American natural scientists, geologists and hydraulic engineers including William Walden Rubey, Ralph Alger Bagnold, Hans Albert Einstein, Frank Ahnert, John Hack, Luna Leopold, A. Shields, Thomas Maddock, Arthur Strahler, Stanley Schumm, and Ronald Shreve began to research the form of landscape elements such as rivers and hillslopes by taking systematic, direct, quantitative measurements of aspects of them and investigating the scaling of these measurements. These methods began to allow prediction of the past and future behavior of landscapes from present observations, and were later to develop into the modern trend of a highly quantitative approach to geomorphic problems. Many groundbreaking and widely cited early geomorphology studies appeared in the Bulletin of the Geological Society of America but received only a few citations prior to 2000 (they are examples of "sleeping beauties"), when a marked increase in quantitative geomorphology research occurred.
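A classic result of this scaling work is Hack's law, the empirical relation between main-stream length and drainage basin area, approximately L = C·A^h with h typically near 0.6. The sketch below, using purely synthetic data with an assumed C and h, shows how such an exponent is commonly estimated by a straight-line fit in log-log space.

```python
import numpy as np

# Synthetic basin areas (km^2) and main-stream lengths (km) generated
# from L = C * A**h with multiplicative noise. C = 1.4 and h = 0.6 are
# illustrative choices, not measured values.
rng = np.random.default_rng(seed=0)
area = np.logspace(0, 4, 50)                       # 1 to 10,000 km^2
length = 1.4 * area ** 0.6 * rng.lognormal(0.0, 0.1, size=area.size)

# A power law plots as a straight line in log-log space, so the
# exponent is the slope of a linear fit to the logged data.
h_est, logc_est = np.polyfit(np.log(area), np.log(length), 1)
print(f"estimated h = {h_est:.2f}, C = {np.exp(logc_est):.2f}")
```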
Quantitative geomorphology can involve fluid dynamics and solid mechanics, geomorphometry, laboratory studies, field measurements, theoretical work, and full landscape evolution modeling. These approaches are used to understand weathering and the formation of soils, sediment transport, landscape change, and the interactions between climate, tectonics, erosion, and deposition.
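As a minimal sketch of what landscape evolution modelling can look like, hillslope creep is often idealized as linear diffusion of elevation, dz/dt = D d²z/dx². The one-dimensional example below evolves an initially sharp scarp with an explicit finite-difference scheme; the diffusivity, grid spacing, and scarp geometry are illustrative assumptions, not values from any study.

```python
import numpy as np

# One-dimensional hillslope profile: an initially sharp scarp, 10 m
# high, on a 200-node grid. All values are illustrative assumptions.
dx = 1.0                                   # node spacing (m)
z = np.where(np.arange(200) < 100, 10.0, 0.0)

D = 0.01                                   # diffusivity (m^2/yr)
dt = 0.2 * dx * dx / D                     # step within stability limit

for _ in range(50_000):
    # dz/dt = D * d2z/dx2, discretized with central differences;
    # the end nodes are held fixed as boundary conditions.
    curvature = (z[:-2] - 2.0 * z[1:-1] + z[2:]) / dx**2
    z[1:-1] += D * dt * curvature

print(f"steepest gradient after {50_000 * dt:.0f} years: "
      f"{np.abs(np.gradient(z, dx)).max():.3f}")
```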
In Sweden Filip Hjulström's doctoral thesis, "The River Fyris" (1935), contained one of the first quantitative studies of geomorphological processes ever published. His students followed in the same vein, making quantitative studies of mass transport (Anders Rapp), fluvial transport (Åke Sundborg), delta deposition (Valter Axelsson), and coastal processes (John O. Norrman). This developed into "the Uppsala School of Physical Geography".
Contemporary geomorphology
Today, the field of geomorphology encompasses a very wide range of different approaches and interests. Modern researchers aim to draw out quantitative "laws" that govern Earth surface processes, but equally, recognize the uniqueness of each landscape and environment in which these processes operate. Particularly important realizations in contemporary geomorphology include:
1) that not all landscapes can be considered as either "stable" or "perturbed", where this perturbed state is a temporary displacement away from some ideal target form. Instead, dynamic changes of the landscape are now seen as an essential part of their nature.
2) that many geomorphic systems are best understood in terms of the stochasticity of the processes occurring in them, that is, the probability distributions of event magnitudes and return times. This in turn has indicated the importance of chaotic determinism to landscapes, and that landscape properties are best considered statistically. The same processes in the same landscapes do not always lead to the same end results.
According to Karna Lidmar-Bergström, regional geography has, since the 1990s, no longer been accepted by mainstream scholarship as a basis for geomorphological studies.
Although its importance has diminished, climatic geomorphology continues to exist as a field of study producing relevant research. More recently, concerns over global warming have led to a renewed interest in the field.
Despite considerable criticism, the cycle of erosion model has remained part of the science of geomorphology. The model or theory has never been proved wrong, but neither has it been proven correct. The inherent difficulties of the model have instead led geomorphological research to advance along other lines. In contrast to its disputed status in geomorphology, the cycle of erosion model is a common approach used to establish denudation chronologies, and is thus an important concept in the science of historical geology. While acknowledging its shortcomings, modern geomorphologists Andrew Goudie and Karna Lidmar-Bergström have praised it for its elegance and pedagogical value respectively.
Processes
Geomorphically relevant processes generally fall into
(1) the production of regolith by weathering and erosion,
(2) the transport of that material, and
(3) its eventual deposition. Primary surface processes responsible for most topographic features include wind, waves, chemical dissolution, mass wasting, groundwater movement, surface water flow, glacial action, tectonism, and volcanism. Other more exotic geomorphic processes might include periglacial (freeze-thaw) processes, salt-mediated action, changes to the seabed caused by marine currents, seepage of fluids through the seafloor or extraterrestrial impact.
Aeolian processes
Aeolian processes pertain to the activity of the winds and more specifically, to the winds' ability to shape the surface of the Earth. Winds may erode, transport, and deposit materials, and are effective agents in regions with sparse vegetation and a large supply of fine, unconsolidated sediments. Although water and mass flow tend to mobilize more material than wind in most environments, aeolian processes are important in arid environments such as deserts.
Biological processes
The interaction of living organisms with landforms, or biogeomorphologic processes, can take many different forms, and is probably of profound importance for the terrestrial geomorphic system as a whole. Biology can influence a great many geomorphic processes, ranging from biogeochemical processes controlling chemical weathering, to the influence of mechanical processes like burrowing and tree throw on soil development, to even controlling global erosion rates through modulation of climate through carbon dioxide balance. Terrestrial landscapes in which the role of biology in mediating surface processes can be definitively excluded are extremely rare, but may hold important information for understanding the geomorphology of other planets, such as Mars.
Fluvial processes
Rivers and streams are not only conduits of water, but also of sediment. The water, as it flows over the channel bed, is able to mobilize sediment and transport it downstream, either as bed load, suspended load or dissolved load. The rate of sediment transport depends on the availability of sediment itself and on the river's discharge. Rivers are also capable of eroding into rock and forming new sediment, both from their own beds and also by coupling to the surrounding hillslopes. In this way, rivers are thought of as setting the base level for large-scale landscape evolution in nonglacial environments. Rivers are key links in the connectivity of different landscape elements.
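Whether a river can mobilize grains of a given size is commonly assessed with the Shields parameter, which compares the bed shear stress exerted by the flow with the immersed weight of a grain. A minimal sketch, with illustrative values for depth, slope, and grain size:

```python
# Shields parameter for incipient sediment motion; all input values
# are illustrative, not taken from any particular river.
rho_w, rho_s = 1000.0, 2650.0   # water and quartz sediment density (kg/m^3)
g = 9.81                        # gravity (m/s^2)

h = 1.0        # flow depth (m)
S = 0.001      # channel slope (dimensionless)
d = 0.01       # grain diameter (m), i.e. 10 mm gravel

tau = rho_w * g * h * S                      # bed shear stress (Pa)
shields = tau / ((rho_s - rho_w) * g * d)    # dimensionless Shields stress

# Grains typically begin to move near a critical value of roughly
# 0.03-0.06 (an often-quoted range, not an exact law).
print(f"tau = {tau:.2f} Pa, Shields parameter = {shields:.3f}")
```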
As rivers flow across the landscape, they generally increase in size, merging with other rivers. The network of rivers thus formed is a drainage system. These systems take on four general patterns: dendritic, radial, rectangular, and trellis. Dendritic patterns are the most common, occurring when the underlying stratum is stable (without faulting). Drainage systems have four primary components: drainage basin, alluvial valley, delta plain, and receiving basin. Some geomorphic examples of fluvial landforms are alluvial fans, oxbow lakes, and fluvial terraces.
Glacial processes
Glaciers, while geographically restricted, are effective agents of landscape change. The gradual movement of ice down a valley causes abrasion and plucking of the underlying rock. Abrasion produces fine sediment, termed glacial flour. The debris transported by the glacier, when the glacier recedes, is termed a moraine. Glacial erosion is responsible for U-shaped valleys, as opposed to the V-shaped valleys of fluvial origin.
The way glacial processes interact with other landscape elements, particularly hillslope and fluvial processes, is an important aspect of Plio-Pleistocene landscape evolution and its sedimentary record in many high mountain environments. Environments that have been relatively recently glaciated but are no longer may still show elevated landscape change rates compared to those that have never been glaciated. Nonglacial geomorphic processes which nevertheless have been conditioned by past glaciation are termed paraglacial processes. This concept contrasts with periglacial processes, which are directly driven by formation or melting of ice or frost.
Hillslope processes
Soil, regolith, and rock move downslope under the force of gravity via creep, slides, flows, topples, and falls. Such mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Titan and Iapetus.
Ongoing hillslope processes can change the topography of the hillslope surface, which in turn can change the rates of those processes. Hillslopes that steepen up to certain critical thresholds are capable of shedding extremely large volumes of material very quickly, making hillslope processes an extremely important element of landscapes in tectonically active areas.
On the Earth, biological processes such as burrowing or tree throw may play important roles in setting the rates of some hillslope processes.
Igneous processes
Both volcanic (eruptive) and plutonic (intrusive) igneous processes can have important impacts on geomorphology. The action of volcanoes tends to rejuvenate landscapes, covering the old land surface with lava and tephra, releasing pyroclastic material and forcing rivers through new paths. The cones built by eruptions also constitute substantial new topography, which can be acted upon by other surface processes. Plutonic rocks intruding and then solidifying at depth can cause either uplift or subsidence of the surface, depending on whether the new material is denser or less dense than the rock it displaces.
Tectonic processes
Tectonic effects on geomorphology can range from scales of millions of years to minutes or less. The effects of tectonics on landscape are heavily dependent on the nature of the underlying bedrock fabric, which more or less controls what kind of local morphology tectonics can shape. Earthquakes can, in a matter of minutes, submerge large areas of land, forming new wetlands. Isostatic rebound can account for significant changes over hundreds to thousands of years, and allows erosion of a mountain belt to promote further erosion as mass is removed from the chain and the belt uplifts. Long-term plate tectonic dynamics give rise to orogenic belts, large mountain chains with typical lifetimes of many tens of millions of years, which form focal points for high rates of fluvial and hillslope processes and thus long-term sediment production.
Features of deeper mantle dynamics such as plumes and delamination of the lower lithosphere have also been hypothesised to play important roles in the long term (> million year), large scale (thousands of km) evolution of the Earth's topography (see dynamic topography). Both can promote surface uplift through isostasy as hotter, less dense, mantle rocks displace cooler, denser, mantle rocks at depth in the Earth.
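The isostatic balance invoked here can be made concrete with the simple Airy model, in which topography of height h is supported by a crustal root of thickness r = h·ρ_c/(ρ_m − ρ_c). A short sketch using common textbook densities (illustrative values, not measurements of any particular region):

```python
# Airy isostasy: a mountain range floats on the mantle with a
# compensating crustal root. Densities are the usual textbook
# illustrations, not measurements of a specific region.
rho_crust = 2800.0   # kg/m^3
rho_mantle = 3300.0  # kg/m^3

h = 3000.0  # mountain height above the reference crust (m)

# Pressure balance at the base of the root:
# h * rho_crust = r * (rho_mantle - rho_crust)
r = h * rho_crust / (rho_mantle - rho_crust)

print(f"a {h/1000:.0f} km high range implies a root of about {r/1000:.1f} km")
```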
Marine processes
Marine processes are those associated with the action of waves, marine currents and seepage of fluids through the seafloor. Mass wasting and submarine landsliding are also important processes for some aspects of marine geomorphology. Because ocean basins are the ultimate sinks for a large fraction of terrestrial sediments, depositional processes and their related forms (e.g., sediment fans, deltas) are particularly important as elements of marine geomorphology.
Overlap with other fields
There is a considerable overlap between geomorphology and other fields. Deposition of material is extremely important in sedimentology. Weathering is the chemical and physical disruption of earth materials in place on exposure to atmospheric or near surface agents, and is typically studied by soil scientists and environmental chemists, but is an essential component of geomorphology because it is what provides the material that can be moved in the first place. Civil and environmental engineers are concerned with erosion and sediment transport, especially related to canals, slope stability (and natural hazards), water quality, coastal environmental management, transport of contaminants, and stream restoration. Glaciers can cause extensive erosion and deposition in a short period of time, making them extremely important entities in the high latitudes and meaning that they set the conditions in the headwaters of mountain-born streams; glaciology therefore is important in geomorphology.
See also
Bioerosion
Biogeology
Biogeomorphology
Biorhexistasy
British Society for Geomorphology
Coastal biogeomorphology
Coastal erosion
Concepts and Techniques in Modern Geography
Drainage system (geomorphology)
Erosion prediction
Geologic modelling
Geomorphometry
Geotechnics
Hack's law
Hydrologic modeling, behavioral modeling in hydrology
List of landforms
Orogeny
Physiographic regions of the world
Sediment transport
Soil morphology
Soils retrogression and degradation
Stream capture
Thermochronology
References
Further reading
Ialenti, Vincent. "Envisioning Landscapes of Our Very Distant Future." NPR Cosmos & Culture, September 2014.
Bierman, P.R.; Montgomery, D.R. Key Concepts in Geomorphology. New York: W. H. Freeman, 2013.
Ritter, D.F.; Kochel, R.C.; Miller, J.R. Process Geomorphology. London: Waveland Pr Inc, 2011.
Hargitai, H.; Page, D.; Canon-Tapia, E.; Rodrigue, C.M. "Classification and Characterization of Planetary Landforms." In: Hargitai, H.; Kereszturi, Á., eds. Encyclopedia of Planetary Landforms. Cham: Springer, 2015.
External links
The Geographical Cycle, or the Cycle of Erosion (1899)
Geomorphology from Space (NASA)
British Society for Geomorphology
Earth sciences
Geology
Geological processes
Gravity
Physical geography
Planetary science
Seismology
Topography
Pseudoarchaeology
Pseudoarchaeology (sometimes called fringe or alternative archaeology) consists of attempts to study, interpret, or teach about the subject-matter of archaeology while rejecting, ignoring, or misunderstanding the accepted data-gathering and analytical methods of the discipline. These pseudoscientific interpretations involve the use of artifacts, sites or materials to construct scientifically insubstantial theories to strengthen the pseudoarchaeologists' claims. Methods include exaggeration of evidence, dramatic or romanticized conclusions, use of fallacious arguments, and fabrication of evidence.
There is no unified pseudoarchaeological theory or method, but rather many different interpretations of the past which are jointly at odds with those developed by the scientific community as well as with each other. These include religious philosophies such as creationism or "creation science" applied to the archaeology of historic periods that would supposedly have included the worldwide flood myth of the Genesis flood narrative, the Nephilim, Noah's Ark, and the Tower of Babel. Some pseudoarchaeological theories concern the idea that prehistoric and ancient human societies were aided in their development by intelligent extraterrestrial life, an idea propagated by those such as Italian author Peter Kolosimo, French authors Louis Pauwels and Jacques Bergier in The Morning of the Magicians (1963), and Swiss author Erich von Däniken in Chariots of the Gods? (1968). Others instead argue there were human societies in the ancient period which were significantly technologically advanced, such as Atlantis, and this idea has been propagated by some people such as Graham Hancock in his publication Fingerprints of the Gods (1995). Pseudoarchaeology has also been manifest in Mayanism and the 2012 phenomenon.
Many pseudoarchaeological theories are intimately linked with the occult/Western esoteric tradition. Many alternative archaeologies have been adopted by religious groups. Fringe archaeological ideas such as archaeocryptography and pyramidology have been endorsed by religions ranging from the British Israelites to the theosophists. Other alternative archaeologies include those that have been adopted by members of New Age and contemporary pagan belief systems.
Academic archaeologists have often criticised pseudoarchaeology, with one of the major critics, John R. Cole, characterising it as relying on "sensationalism, misuse of logic and evidence, misunderstanding of scientific method, and internal contradictions in their arguments". The relationship between alternative and academic archaeologies has been compared to the relationship between intelligent design theories and evolutionary biology by some archaeologists.
Etymology
Various terms have been employed to refer to these non-academic interpretations of archaeology. During the 1980s, the term "cult archaeology" was used by figures such as John R. Cole (1980) and William H. Stiebing Jr. (1987). "Fantastic archaeology" was used during the 1980s as the name of an undergraduate course at Harvard University taught by Stephen Williams, who published a book with the same title. During the 2000s, the term "alternative archaeology" began to be applied instead by academics like Tim Sebastion (2001), Robert J. Wallis (2003), Cornelius Holtorf (2006), and Gabriel Moshenka (2008). Garrett G. Fagan and Kenneth Feder (2006) however claimed this term was only chosen because it "imparts a warmer, fuzzier feel" that "appeals to our higher ideals and progressive inclinations". They argued that the term "pseudoarchaeology" was much more appropriate, a term also used by other prominent academic and professional archaeologists such as Colin Renfrew (2006).
Other academic archaeologists have chosen to use other terms to refer to these interpretations. Glyn Daniel, the editor of Antiquity, used the derogatory term "bullshit archaeology", and similarly the academic William H. Stiebing Jr. noted that there were certain terms used for pseudoarchaeology that were heard "in the privacy of professional archaeologists' homes and offices but which cannot be mentioned in polite society".
Description
Pseudoarchaeology can be practised intentionally or unintentionally. Archaeological frauds and hoaxes are considered intentional pseudoarchaeology. Genuine archaeological finds may be converted to pseudoarchaeology unintentionally by unscientific interpretation. (cf. confirmation bias)
A type of pseudoarchaeology of the Middle East has created a pseudo-history of Babylon, in contradiction to Judeo-Christian and Biblical history, resulting in the production of fraudulent cuneiform tablets, as clay tablets are difficult to date. "By 1904, during the early period of cuneiform tablet collecting, J. Edgar Banks, a Mesopotamian explorer and tablet dealer, estimated that nearly 80% of tablets offered for sale in Baghdad were fakes. In 2016, Syria's Director General for Antiquities and Museums reported that approximately 70% of seized artefacts in the country are fakes."
Especially in the past, but also in the present, pseudoarchaeology has been affected by racism, which can be suggested by attempts to attribute ancient sites and artefacts to ancient Egyptians, Hebrew Lost Tribes, Pre-Columbian trans-oceanic contact, or even extraterrestrial intelligence rather than to indigenous peoples.
Practitioners of pseudoarchaeology often criticise academic archaeologists and established scientific methods, claiming that conventional science has ignored critical evidence. Conspiracy theories may be invoked, in which "the Establishment" colludes in suppressing evidence.
Cornelius Holtorf states that countering the misleading "discoveries" of pseudoarchaeology creates a dilemma for archaeologists: whether to attempt to disprove pseudoarchaeology by "crusading" methods or to concentrate on better public knowledge of the sciences involved. Holtorf suggested a third method involving identifying the social and cultural demands that both scientific archaeology and pseudoarchaeology address, and identifying the engagement of present people with the material remains of the past (such as Barbara Bender explored for Stonehenge). Holtorf presents the search for truth as a process rather than a result and states that "even non-scientific research contributes to enriching our landscapes."
Characteristics
William H. Stiebing Jr. argued that despite their many differences, there was a set of common characteristics shared by almost all pseudoarchaeological interpretations. He believed that because of this, pseudoarchaeology could be categorised as a "single phenomenon". He then identified three main commonalities of pseudoarchaeological theories: the unscientific nature of their method and evidence, their history of providing "simple, compact answers to complex, difficult issues", and their tendency to present themselves as being persecuted by the archaeological establishment, accompanied by an ambivalent attitude towards the scientific ethos of the Enlightenment. This idea that there are common characteristics of pseudoarchaeologies is shared by other academics.
Lack of scientific method
Academic critics have stated that pseudoarchaeologists typically neglect to use the scientific method. Instead of testing evidence to see what hypotheses it satisfies best, pseudoarchaeologists force the archaeological data to fit a "favored conclusion" that is often arrived at through hunches, intuition, or religious or nationalist dogma. Pseudoarchaeological groups have a variety of basic assumptions that are typically unscientific: the Nazi pseudoarchaeologists for instance used the cultural superiority of the ancient Aryan race as a basic assumption, whilst Christian fundamentalist pseudoarchaeologists conceive of the Earth as being less than 10,000 years old and Hindu fundamentalist pseudoarchaeologists believe that the species Homo sapiens is much older than the roughly 200,000 years supported by the archaeological record. Despite this, many of pseudoarchaeology's proponents claim that they gained their conclusions using scientific techniques and methods, even when it is demonstrable that they have not.
Academic archaeologist John R. Cole believed that most pseudoarchaeologists do not understand how scientific investigation works, and that they instead believe it to be a "simple, catastrophic right versus wrong battle" between contesting theories. It was because of this failure to understand the scientific method, he argued, that pseudoarchaeological arguments were faulty. He then argued that most pseudoarchaeologists do not consider alternative explanations to that which they want to propagate, and that their "theories" were typically just "notions", not having sufficient evidence to allow them to be considered "theories" in the scientific, academic meaning of the word.
Commonly lacking scientific evidence, pseudoarchaeologists typically use other types of evidence for their arguments. For instance, they often make "generalized cultural comparisons", taking various artefacts and monuments from one society and emphasizing similarities with those of another society to conclude that both had a common source—typically an ancient lost civilisation like Atlantis, Mu, or an extraterrestrial influence. This takes the different artefacts or monuments entirely out of their original contexts, something which is anathema to academic archaeologists, for whom context is of the utmost importance.
Another type of evidence used by a number of pseudoarchaeologists is the interpretation of various myths as representing historical events, but in doing so these myths are often taken out of their cultural contexts. For instance, pseudoarchaeologist Immanuel Velikovsky claimed that the myths of migrations and war gods in the Central American Aztec civilisation represented a cosmic catastrophe that occurred during the 7th and 8th centuries BCE. This was criticised by academic archaeologist William H. Stiebing Jr., who noted that such myths only developed during the 12th to the 14th centuries CE, two millennia after Velikovsky claimed that the events had occurred, and that the Aztec society itself had not even developed by the 7th century BCE.
Opposition to the archaeological establishment
Pseudoarchaeologists typically present themselves as being disadvantaged with respect to the much larger archaeological establishment. They often use language that disparages academics and dismisses them as being unadventurous, spending all their time in dusty libraries and refusing to challenge the orthodoxies of the establishment lest they lose their jobs. In some more extreme examples, pseudoarchaeologists have accused academic archaeologists of being members of a widespread conspiracy to hide the truth about history from the public. When academics challenge pseudoarchaeologists and criticise their theories, many pseudoarchaeologists claim it as further evidence that their own ideas are right, and that they are simply being harassed by members of this academic conspiracy.
The prominent English archaeologist Colin Renfrew admitted that the archaeological establishment was often "set in its ways and resistant to radical new ideas" but that this was not the reason why pseudoarchaeological theories were rejected by academics. Garrett G. Fagan expanded on this, noting how in the academic archaeological community, "New evidence or arguments have to be thoroughly scrutinised to secure their validity ... and longstanding, well-entrenched positions will take considerable effort and particularly compelling data to overturn." Fagan noted that pseudoarchaeological theories simply do not have sufficient evidence to allow them to be accepted by professional archaeologists.
Conversely, many pseudoarchaeologists, whilst criticising the academic archaeological establishment, also attempt to get endorsements from people with academic credentials and affiliations. At times, they quote historical, and in most cases dead, academics to strengthen their arguments; for instance prominent pseudoarchaeologist Graham Hancock, in his seminal Fingerprints of the Gods (1995), repeatedly notes that the eminent physicist Albert Einstein once commented positively on the pole shift hypothesis, a theory that has been abandoned by the academic community but which Hancock endorses. As Fagan noted however, the fact that Einstein was a physicist and not a geologist is never mentioned by Hancock, nor is the fact that the present understanding of plate tectonics (which came to disprove earth crustal displacement) only became generally accepted after Einstein's death.
Academic archaeological responses
Pseudoarchaeological theories have come to be much criticised by academic and professional archaeologists. One of the first books to address these directly was by archaeologist Robert Wauchope of Tulane University. Prominent academic archaeologist Colin Renfrew stated his opinion that it was appalling that pseudoarchaeologists treated archaeological evidence in such a "frivolous and self-serving way", something he believed trivialised the "serious matter" of the study of human origins. Academics like John R. Cole, Garrett G. Fagan and Kenneth L. Feder have argued that pseudoarchaeological interpretations of the past were based upon sensationalism, self-contradiction, fallacious logic, manufactured or misinterpreted evidence, quotes taken out of context and incorrect information. Fagan and Feder characterised such interpretations of the past as being "anti-reason and anti-science" with some being "hyper-nationalistic, racist and hateful". In turn, many pseudoarchaeologists have dismissed academics as being closed-minded and not willing to consider theories other than their own.
Many academic archaeologists have argued that the spread of alternative archaeological theories is a threat to the general public's understanding of the past. Fagan was particularly scathing of television shows that presented pseudoarchaeological theories to the general public, believing that they did so because of the difficulties in making academic archaeological ideas comprehensible and interesting to the average viewer. Renfrew however believed that those television executives commissioning these documentaries knew that they were erroneous, and that they had allowed them to be made and broadcast simply for the hope of "short-term financial gain".
Fagan and Feder believed that it was not possible for academic archaeologists to successfully engage with pseudoarchaeologists, remarking that "you cannot reason with unreason". Speaking from their own experiences, they thought that attempted dialogues just became "slanging matches in which the expertise and motives of the critic become the main focus of attention." Fagan has maintained this idea elsewhere, remarking that arguing with supporters of pseudoarchaeological theories was "pointless" because they denied logic. He noted that they included those "who openly admitted to not having read a word written by a trained Egyptologist" but who at the same time "were pronouncing how academic Egyptology was all wrong, even sinister."
Conferences and anthologies
At the 1986 meeting of the Society for American Archaeology, its organizers, Kenneth Feder, Luanne Hudson and Francis Harrold decided to hold a symposium to examine pseudoarchaeological beliefs from a variety of academic standpoints, including archaeology, physical anthropology, sociology, history and psychology. From this symposium, an anthology was produced, entitled Cult Archaeology & Creationism: Understanding Pseudoarchaeological Beliefs about the Past (1987).
At the 2002 annual meeting of the Archaeological Institute of America, a workshop was held on the topic of pseudoarchaeology. It subsequently resulted in the publication of an academic anthology, Archaeological Fantasies: How Pseudoarchaeology Misrepresents the Past and Misleads the Public (2006), which was edited by Garrett G. Fagan.
On 23 and 24 April 2009, The American Schools of Oriental Research and the Duke University Center for Jewish Studies, along with the Duke Department of Religion, the Duke Graduate Program in Religion, the Trinity College of Arts and Sciences Committee on Faculty Research, and the John Hope Franklin Humanities Institute, sponsored a conference entitled "Archaeology, Politics, and the Media," which addressed the abuse of archaeology in the Holy Land for political, religious, and ideological purposes. Emphasis was placed on the media's reporting of sensational and politically motivated archaeological claims and the academy's responsibility in responding to it.
Inclusive attitudes
Academic archaeologist Cornelius Holtorf believed however that critics of alternative archaeologies like Fagan were "opinionated and patronizing" towards alternative theories, and that expressing their opinions in such a manner was damaging to the public's perception of archaeologists. Holtorf emphasized that there were similarities between academic and alternative archaeological interpretations, with the former being influenced by the latter. As evidence, he pointed to archaeoastronomy, which was once considered a component of fringe archaeological interpretations before being adopted by mainstream academics. He also noted that certain archaeological scholars, like William Stukeley (1687–1765), Margaret Murray (1863–1963) and Marija Gimbutas (1921–1994), were formerly considered to be eminent by both academic and alternative archaeologists. He came to the conclusion that a constructive dialogue should be begun between academic and alternative archaeologists. Fagan and Feder have responded to Holtorf's statements in detail, asserting that such a dialogue is no more possible than is one between evolutionary biologists and creationists or between astronomers and astrologers: one is scientific, the other is anti-scientific.
During the early 1980s, Kenneth Feder performed a survey of his archaeology students. On the 50-question survey, 10 questions had to do with archaeology and/or pseudoscience. Some of the statements were more rational: the world is 5 billion years old, or human beings came about through evolution. However, the questions also included claims such as that King Tut's tomb actually killed people upon its discovery, and that there is good evidence for the existence of Atlantis. As it turned out, some of the students Feder was teaching gave some credibility to the pseudoscientific claims: 12% actually believed people on Howard Carter's expedition were killed by an ancient Egyptian curse.
Historical pseudoarchaeology
During the mid-2nd century, the figures exposed in Lucian's sarcastic essay "Alexander the false prophet" planted an archaeological "find" in Chalcedon to prepare the public for the supposed oracle they planned to establish at Abonoteichus in Paphlagonia (Pearse, 2001).
At Glastonbury Abbey in 1291, at a time when King Edward I desired to emphasize his "Englishness", an alleged discovery was made: the supposed coffin of King Arthur, identified helpfully with an inscribed plaque. Arthur was reinterred at Glastonbury with a magnificent ceremonial attended by the king and queen.
Nationalist motivations
Pseudoarchaeology can be motivated by nationalism (cf. Nazi archaeology, using cultural superiority of the ancient Aryan race as a basic assumption to establish the Germanic people as the descendants of the original Aryan 'master race') or a desire to prove a particular religious (cf. intelligent design), pseudohistorical, political, or anthropological theory. In many cases, an a priori conclusion is established, and fieldwork is performed explicitly to corroborate the theory in detail. According to archaeologist John Hoopes, writing for the magazine of the Society for American Archaeology, "Pseudoarchaeology actively promotes myths that are routinely used in the service of white supremacy, racialized nationalism, colonialism, and the dispossession and oppression of indigenous peoples."
Archaeologists distinguish their research from pseudoarchaeology by pointing to differences in research methods, including recursive methods, falsifiable theories, peer review, and a generally systematic approach to collecting data. Though there is overwhelming evidence of cultural associations informing folk traditions about the past, objective analyses of folk archaeology—in anthropological terms of their cultural contexts and the cultural desires to which they respond—have been comparatively few. However, in this vein, Robert Silverberg located the Mormons' use of Mound Builder culture within a larger cultural nexus, and the voyage of Madoc and the "Welsh Indians" was set in its changing and evolving sociohistorical contexts by Gwyn Williams.
Examples
The Kensington Runestone of Minnesota, used to allege Norse Viking primacy in exploring the Americas.
Nazi archaeology, the Thule Society, and expeditions sent by the Ahnenerbe to research the existence of an alleged Aryan race. The research of Edmund Kiss at Tiwanaku would be one example.
The Bosnian pyramids project, which claims that several hills in Visoko, Bosnia, are ancient pyramids.
Piltdown man.
Jovan I. Deretić's Serbocentric claims for the ancient history of the Old World.
Romanian protochronism also uses pseudoarchaeological interpretations; for more pieces of information, see the Tărtăria tablets, the Rohonc Codex's Daco-Romanian hypothesis, or the Sinaia lead plates.
The theory that New Zealand was not settled by the Māori people, but by a pre-Polynesian race of giants.
Claims of a Tartarian Empire that colonized the world.
Afrocentrist claims that Black people should be credited with creating the first civilizations.
Religious motivations
Religiously motivated pseudoarchaeological theories include the young earth theory of some Judeo-Christian fundamentalists. They argue that the Earth is 4,000–10,000 years old, with claims varying depending on the source. Some Hindu pseudoarchaeologists believe that the Homo sapiens species is much older than the 200,000 years it is generally believed to have existed. Archaeologist John R. Cole refers to such beliefs as "cult archaeology" and believes them to be pseudoarchaeological. He said that this "pseudoarchaeology" had "many of the attributes, causes, and effects of religion".
A more specific example of religious pseudoarcheology is the claim of Ron Wyatt to have discovered Noah's ark, the graves of Noah and his wife, the location of Sodom and Gomorrah, the Tower of Babel, and numerous other important sites. However, he has not presented evidence sufficient to impress Bible scholars, scientists, and historians. The organization Answers in Genesis propagates many pseudoscientific notions as part of its creationist ministry.
Examples
Creation science, also known as "scientific creationism", which is actually pseudoscientific, as it pertains to human origins.
Repeated claims of the discovery of Noah's Ark on Mount Ararat or neighbouring mountain ranges.
Use of questionable artefacts such as the Grave Creek Stone, the Los Lunas Decalogue Stone and the Michigan relics to represent proof of the presence of a pre-Columbian Semitic culture in America.
New Age assertions about Atlantis, Lemuria, and ancient root races derived from the writings of authors such as 19th-century theosophist and occultist Helena Blavatsky.
Denial of scientific dating techniques in favor of a young Earth age.
In Egyptology
Pseudoarchaeology can be found in relation to Egyptology, the study of ancient Egypt. Some of this includes pyramidology, a collection of pseudoscientific beliefs about pyramids around the world that includes the pyramids in Egypt and specifically the Great Pyramid of Giza.
Pyramids
One belief, originally published by Charles Piazzi Smyth in 1864, is that the Great Pyramid was not built by humans for the pharaoh Khufu, but was so beautiful that it could have been crafted only by the hand of God. Though Smyth contributed to the idea of the Great Pyramid not being created originally by Khufu, this belief has been further propagated by Zecharia Sitchin in books such as The Stairway to Heaven (1983) and more recently by Scott Creighton in The Great Pyramid Hoax (2017), both of which argue that Howard Vyse (the discoverer of the Khufu cartouches within the Great Pyramid) faked the markings of Khufu's name. However, Sitchin's research has been challenged as being pseudoscience. Arguments against these theories often cite the discovery of external texts on papyri, such as the Diary of Merer, that document the construction of the Great Pyramid.
The theory that the Egyptian pyramids were not built as tombs of ancient pharaohs, but for other purposes, has resulted in a variety of alternative theories about their purpose and origins. One such pseudoarchaeological theory is from Scott Creighton, who argues that the pyramids were built as recovery vaults to survive a deluge (whether that be associated with flood geology or the Genesis flood narrative). Another alternative theory for the purpose of the pyramids comes from known pseudoarchaeologist Graham Hancock, who argues that the pyramids originated from an early civilization that was destroyed by an asteroid that also began the Younger Dryas period. A third common pseudoarchaeological theory about the Egyptian pyramids is that they were built by ancient aliens. This belief is sometimes offered as an explanation for why the pyramids supposedly appear suddenly in history. However, this claim is challenged by Egyptologists who describe an evolution of pyramid designs from mastaba tombs, to the Step Pyramid of Djoser, to the collapsed Meidum Pyramid, to Sneferu's Bent Pyramid, ending with Khufu's Great Pyramid. Many alternative beliefs have been criticized as ignoring the knowledge and the architectural and construction capabilities of the ancient Egyptians.
Mummy curses
Another pseudoegyptological belief is that of the curse of the pharaohs, the belief that imprecations are directed against those who enter the tombs of mummies and pharaohs. These curses often involve natural disaster, illness, or death for those who have entered the tomb. One of the most influential iterations of this theory stems from the discovery of King Tutankhamun's tomb by Howard Carter in November 1922. Several deaths of those present at the excavation have been attributed to a curse, including that of Lord Carnarvon, who died as the result of an infected mosquito bite, sepsis, and pneumonia slightly more than four months after the excavation. There were also claims that all lights in Cairo went out at the moment of Lord Carnarvon's death. However, skeptics believe that reporters overlooked rational explanations and relied on supernatural legends. In 2021, mummies, mostly from the New Kingdom period, were paraded through Cairo during a transfer for study. However, several events occurred around the same time, including a ship blocking the Suez Canal and accidents involving several members of the crew. Many claimed these were the results of a pharaoh's curse; however, Egyptologist Zahi Hawass dismissed the claims as random tragedies.
Pre-Columbian contact and Mayan connections
Some pseudoarchaeologists speculate that Egypt had contact with the Maya civilization before Columbus reached the Bahamas in 1492. Part of these arguments stem from the discovery of nicotine and cocaine traces found in various mummies. The argument is that the plants producing these were not known to exist outside the Americas, although Duncan Edlin found that plants containing both nicotine and cocaine existed in Egypt and therefore could have been used by ancient Egyptians. Another argument against possible contact is that although there is a massive body of literature in the form of hieroglyphics from ancient Egypt, ancient Egyptian writers never noted contact with the Americas in any of the texts that have been found.
Another argument in favor of contact between ancient Egyptians and Mayans rests on claims of similarities in art, architecture and writing. These theories are advanced by authors such as Graham Hancock in Fingerprints of the Gods (1995) and more recently by Richard Cassaro in Mayan Masonry. The similarities commonly mentioned include the construction of pyramids, the use of archways, and similarities in artwork of the divine. Arguments such as these claim an association between ancient Egypt and the Maya through either a transatlantic voyage that brought Egyptians to the Mayas or through a shared origin of both civilizations (either in Atlantis or Lemuria). Voyages of the Pyramid Builders (2003) by geologist Robert Schoch argues that both Egyptian and Maya pyramids result from a common lost civilization. However, ancient historian Garrett Fagan criticized Schoch's theory on the grounds that it demonstrated ignorance of relevant facts and that it did not explain variations in appearance or how various civilizations' pyramids were built. Fagan also describes established research by several archaeologists on the development of various civilizations' pyramids that was not used or addressed by Schoch's theory.
Flood theories and the Great Sphinx
For Egypt-related pseudoarchaeology, there are a variety of flood-related theories, many of which relate to the Biblical Genesis flood narrative or other flood theories. Scott Creighton claims that knowledge of a coming deluge (which he refers to as "Thoth's Flood") generated the idea of building pyramids as recovery vaults from which civilization could rebuild. Another fringe theory relating to this is the Sphinx water erosion hypothesis, which claims that the Great Sphinx of Giza's modern appearance is caused by erosion due to flooding or rain. This theory, which has been perpetuated by Robert Schoch, who claims the Sphinx was built between 5000 and 7000 BCE, has been criticized by Zahi Hawass and Mark Lehner as ignoring Old Kingdom societal evidence about the Sphinx and as flawed in its specifics about the supposed erosion. Currently, Egyptologists tend to date the Sphinx to about 2500 BCE, approximately the reign of the pharaoh Khafre, to whom the Sphinx is commonly attributed.
The Mayas
Many aspects of Maya civilization have inspired pseudoarchaeological speculation. In Mexico, such speculation can attract more visitors, which in turn brings more money to the area, money which the Maya peoples usually do not receive. Many examples of pseudoarchaeology pertaining to Maya civilization can be found in literature, art, and movies. Many of them have to do with the 2012 phenomenon and the Maya calendar. These are often referred to as Mayanism, a collection of New Age beliefs about the Mayas and Maya religion and/or spirituality. That said, Maya culture has long been a subject of scientific archaeology, and archaeologists have uncovered evidence that has furthered our knowledge of the past. Examples include stone carvings in Tikal that record the earliest stories of Sihyaj Chan Kʼawiil II and materials recovered from Chichén Itzá.
Examples of Maya-related pseudoarchaeology
A well-known example of Maya pseudoarchaeology is the interpretation of the remains of Kʼinich Janaabʼ Pakal and his burial. Pseudoarchaeologists have written extensively about the discovery of Pakal's sarcophagus lid and the answers they claim to have gained from studying it. Pseudoarchaeology author Maurice Cotterell writes about this in his book The Supergods. One of the main draws in this material for Cotterell and other pseudoarchaeologists is the idea that the ancient Aztec and Maya people possessed knowledge beyond our imagination. From being able to "take off in spaceships" to dealing with complex numbers and equations, these people supposedly possessed "godly intelligence". Their biggest study and answer came from analyzing the Maya calendar and finding correlations with the Sun and Earth. He states that "they (Sun, Earth, Mayan Calendar) come close together every 260 days, this agreed with his suspicion that the Mayan numbering system was connected with solar magnetic cycles". No professionals endorse his statements, and his conclusions are based on insufficient evidence. Cotterell's work is pseudoarchaeology because it reports his own non-scientific interpretations, without any scientific peer review or critical analysis by professional archaeologists.
Another example of pseudoarchaeology concerning Maya civilization involves conclusions drawn from the Maya calendar itself. The Calendar Round seems to have been based on two overlapping cycles: a 260-day sacred year and a 365-day secular year comprising 18 named months of 20 days each plus five additional days. The Maya calendar also included what was termed the Long Count, created by priests, in which a single cycle lasted about 5,126 solar years; counted from its starting date, the current cycle ended on 21 December 2012. Ancient hieroglyphs from Tortuguero state that when this cycle ended, Bolon Yokte, the Maya god of creation and war, would arrive, which some pseudoarchaeologists took to mean that the world would end.
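The cycle lengths involved can be checked with elementary arithmetic. The following is a sketch, assuming the standard 13-baktun reading of the Long Count (144,000 days per baktun) and a mean solar year of 365.2425 days:

$$\operatorname{lcm}(260,\,365) = 2^2 \cdot 5 \cdot 13 \cdot 73 = 18{,}980 \text{ days} = 52 \times 365 \text{ days (one Calendar Round)}$$

$$13 \times 144{,}000 = 1{,}872{,}000 \text{ days}, \qquad \frac{1{,}872{,}000}{365.2425} \approx 5{,}125.3 \text{ solar years (one Long Count cycle)}$$

Counted forward from the conventional correlation start date of 11 August 3114 BCE, the 1,872,000-day cycle ends on 21 December 2012; the rounding explains why the cycle length is variously given as 5,125 or 5,126 years. The arithmetic itself is uncontroversial; only the apocalyptic reading of the end date is pseudoarchaeological.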
The stone carvings in Tikal that have been so important to archaeologists attempting to reconstruct the past have also been used by pseudoarchaeologists to fabricate false claims about it. In reality these carvings have been used to reconstruct the stories and history of more than thirty dynastic rulers. Some pseudoarchaeologists claim that the carvings depict ancient aliens or another form of extraterrestrial life, claims that are widely regarded as false by archaeologists. When these claims circulated during the early 1990s, tourism to the site boomed; in cases like this, pseudoarchaeological claims can garner public attention more effectively than peer-reviewed archaeology.
Chichén Itzá in Mexico has long been an important archaeological site, and in recent years it has attracted many extravagant claims by pseudoarchaeologists. Most of these concern the passageway discovered beneath the Kukulcan pyramid, part of Chichén Itzá: the claim is that this passageway was, and still is, a direct channel to the underworld. There are many possibilities for what the passageway could have been used for, but no facts support that statement. Many experts, including Guillermo de Anda, an underwater archaeologist who has directed several expeditions exploring submerged Maya sites, believe that the passageway was a “secret cenote”.
Other notable examples
The assertion that the Mound Builders were a long-vanished non-Native American people thought to have come from Europe, the Middle East, or Africa.
Neolithic hyperdiffusion from Egypt being responsible for influencing most of the major ancient civilizations of the world in Asia, the Middle East, Europe, and particularly the ancient Native Americans. This includes Olmec alternative origin speculations.
Pseudoarchaeological interest in Pedra da Gávea.
The work of 19th- and early 20th-century authors such as Ignatius Donnelly, Augustus Le Plongeon, James Churchward, and Arthur Posnansky.
The work of contemporary authors such as Giorgio Tsoukalos, Erich von Däniken, Barry Fell, Zecharia Sitchin, Robert Bauval, Frank Joseph, Graham Hancock, Colin Wilson, Michael Cremo, Immanuel Velikovsky, and David Hatcher Childress.
Lost lands such as Atlantis, Mu, Kumari Kandam, the phantom land of Tartary, and Lemuria, all of which mainstream archaeologists and historians reject as lacking critical physical evidence and general historical credibility.
Speculation by paranormal researchers that an abnormal human skull promoted as the "starchild skull" was the product of extraterrestrial-human breeding or extraterrestrial genetic engineering, despite DNA evidence proving that the skull was that of an anatomically modern human infant, most likely suffering from hydrocephalus.
Pseudoarchaeology books
Atlantis: The Antediluvian World
Chariots of the Gods?
Fingerprints of the Gods
Forbidden Archeology
From Atlantis to the Sphinx
Isis Unveiled
Magicians of the Gods
Morning of the Magicians
The Saturn Myth
The Secret Doctrine
The Sirius Mystery
The Space Gods Revealed
Pseudoarchaeological television programs and series
Ancient Aliens (2010–)
Ancient Apocalypse (2022)
America Unearthed (2012–2015, 2019–)
In Search of... (1977–1982)
Legends of the Lost with Megan Fox (2018)
The Curse of Oak Island (2014–)
The Mysterious Origins of Man (1996)
The UnXplained (2019–2021)
Archaeological sites subject to pseudoarchaeological speculation
Burrows Cave
Calico Early Man Site
Çatalhöyük
Cerutti Mastodon site
Chinese pyramids
Easter Island
Göbekli Tepe
Gunung Padang
La Ciudad Blanca
Machu Picchu
Megalithic Temples of Malta
Nan Madol
Nazca Lines
Tell el-Hammam
Teotihuacan
Terracotta Army
Tiwanaku
Puma Punku
Kalasasaya at Tiwanaku
The Gate of the Sun
The Semi-Subterranean Temple
Yonaguni Monument
Zorats Karer a.k.a. Armenian Stonehenge
So-called out-of-place artefacts
Antikythera mechanism
Babylonokia
Baghdad Battery
Dogū
Etruscan inscriptions
Ingá Stone
Nimrud lens
Phaistos Disc
Piri Reis map
Stone spheres of Costa Rica
See also
Pseudohistory
Historical revisionism
Archaeology and the Book of Mormon
Biblical archaeology
Frauds, Myths, and Mysteries
List of topics characterized as pseudoscience
Pathological science
Psychic archaeology
Xenoarchaeology
References
Further reading
External links
Criticisms of cable network television programs that promote pseudoarchaeology
Fringe theory
Scientific racism
Nationalism and archaeology
Archaeology and racism
Environmental history | Environmental history is the study of human interaction with the natural world over time, emphasising the active role nature plays in influencing human affairs and vice versa.
Environmental history first emerged in the United States out of the environmental movement of the 1960s and 1970s, and much of its impetus still stems from present-day global environmental concerns. The field was founded on conservation issues but has broadened in scope to include more general social and scientific history and may deal with cities, population or sustainable development. As all history occurs in the natural world, environmental history tends to focus on particular time-scales, geographic regions, or key themes. It is also a strongly multidisciplinary subject that draws widely on both the humanities and natural science.
The subject matter of environmental history can be divided into three main components. The first, nature itself and its change over time, includes the physical impact of humans on the Earth's land, water, atmosphere and biosphere. The second category, how humans use nature, includes the environmental consequences of increasing population, more effective technology and changing patterns of production and consumption. Other key themes are the transition from nomadic hunter-gatherer communities to settled agriculture in the Neolithic Revolution, the effects of colonial expansion and settlements, and the environmental and human consequences of the Industrial and technological revolutions. Finally, environmental historians study how people think about nature – the way attitudes, beliefs and values influence interaction with nature, especially in the form of myths, religion and science.
Origin of name and early works
In 1967, Roderick Nash published Wilderness and the American Mind, a work that has become a classic text of early environmental history. In an address to the Organization of American Historians in 1969 (published in 1970) Nash used the expression "environmental history", although 1972 is generally taken as the date when the term was first coined. The 1959 book by Samuel P. Hays, Conservation and the Gospel of Efficiency: The Progressive Conservation Movement, 1890–1920, while being a major contribution to American political history, is now also regarded as a founding document in the field of environmental history. Hays is professor emeritus of History at the University of Pittsburgh. Alfred W. Crosby's book The Columbian Exchange (1972) is another key early work of environmental history.
Historiography
Brief overviews of the historiography of environmental history have been published by J. R. McNeill, Richard White, and J. Donald Hughes. In 2014 Oxford University Press published a volume of 25 essays in The Oxford Handbook of Environmental History.
Definition
There is no universally accepted definition of environmental history. In general terms it is a history that tries to explain why our environment is like it is and how humanity has influenced its current condition, as well as commenting on the problems and opportunities of tomorrow. Donald Worster's widely quoted 1988 definition states that environmental history is the "interaction between human cultures and the environment in the past".
In 2001, J. Donald Hughes defined the subject as the "study of human relationships through time with the natural communities of which they are a part in order to explain the processes of change that affect that relationship" and, in 2006, as "history that seeks understanding of human beings as they have lived, worked and thought in relationship to the rest of nature through the changes brought by time". "As a method, environmental history is the use of ecological analysis as a means of understanding human history...an account of changes in human societies as they relate to changes in the natural environment". Environmental historians are also interested in "what people think about nature, and how they have expressed those ideas in folk religions, popular culture, literature and art". In 2003, J. R. McNeill defined it as "the history of the mutual relations between humankind and the rest of nature".
Subject matter
Traditional historical analysis has over time extended its range of study from the activities and influence of a few significant people to a much broader social, political, economic, and cultural analysis. Environmental history further broadens the subject matter of conventional history. In 1988, Donald Worster stated that environmental history "attempts to make history more inclusive in its narratives" by examining the "role and place of nature in human life", and in 1993, that "Environmental history explores the ways in which the biophysical world has influenced the course of human history and the ways in which people have thought about and tried to transform their surroundings". The interdependency of human and environmental factors in the creation of landscapes is expressed through the notion of the cultural landscape. Worster also questioned the scope of the discipline, asking: "We study humans and nature; therefore can anything human or natural be outside our enquiry?"
Environmental history is generally treated as a subfield of history. But some environmental historians challenge this assumption, arguing that while traditional history is human history – the story of people and their institutions, "humans cannot place themselves outside the principles of nature". In this sense, they argue that environmental history is a version of human history within a larger context, one less dependent on anthropocentrism (even though anthropogenic change is at the center of its narrative).
Dimensions
J. Donald Hughes responded to the view that environmental history is "light on theory" or lacking theoretical structure by viewing the subject through the lens of three "dimensions": nature and culture, history and science, and scale. This advances beyond Worster's recognition of three broad clusters of issues to be addressed by environmental historians, although both historians recognize that the emphasis of these categories may vary with the particular study, since some studies will clearly concentrate more on society and human affairs and others more on the environment.
Themes
Several themes are used to express these historical dimensions. A more traditional historical approach is to analyse the transformation of the globe's ecology through themes like the separation of man from nature during the Neolithic Revolution, imperialism and colonial expansion, exploration, agricultural change, the effects of the Industrial and technological revolution, and urban expansion. More environmental topics include human impact through influences on forestry, fire, climate change, sustainability and so on. According to Paul Warde, "the increasingly sophisticated history of colonization and migration can take on an environmental aspect, tracing the pathways of ideas and species around the globe and indeed is bringing about an increased use of such analogies and 'colonial' understandings of processes within European history." The importance of the colonial enterprise in Africa, the Caribbean and Indian Ocean has been detailed by Richard Grove. Much of the literature consists of case-studies targeted at the global, national and local levels.
Scale
Although environmental history can cover billions of years of history over the whole Earth, it can equally concern itself with local scales and brief time periods. Many environmental historians are occupied with local, regional and national histories. Some historians link their subject exclusively to the span of human history – "every time period in human history" – while others include the period before human presence on Earth as a legitimate part of the discipline. Ian Simmons's Environmental History of Great Britain covers a period of about 10,000 years. Time scales for natural and social phenomena also tend to differ: causes of environmental change that stretch far back in time may be dealt with socially over a comparatively brief period.
Although at all times environmental influences have extended beyond particular geographic regions and cultures, during the 20th and early 21st centuries anthropogenic environmental change has assumed global proportions, most prominently with climate change but also as a result of settlement, the spread of disease and the globalization of world trade.
History
The questions of environmental history date back to antiquity: Hippocrates, the father of medicine, asserted in Airs, Waters, Places that different cultures and human temperaments could be related to the surroundings in which peoples lived. Scholars as varied as Ibn Khaldun and Montesquieu found climate to be a key determinant of human behavior. During the Enlightenment, there was a rising awareness of the environment and scientists addressed themes of sustainability via natural history and medicine. However, the origins of the subject in its present form are generally traced to the 20th century.
In 1929 a group of French historians founded the journal Annales, in many ways a forerunner of modern environmental history since it took as its subject matter the reciprocal global influences of the environment and human society. The idea of the impact of the physical environment on civilizations was espoused by this Annales School, which sought to describe the long-term developments that shape human history by turning away from political and intellectual history toward agriculture, demography, and geography. One of the most influential members of the school was Lucien Febvre (1878–1956), whose 1922 book A Geographical Introduction to History is now a classic in the field; a later pupil, Emmanuel Le Roy Ladurie, was in the 1950s the first to embrace environmental history in a more contemporary form.
The most influential empirical and theoretical work in the subject has been done in the United States where teaching programs first emerged and a generation of trained environmental historians is now active. In the United States environmental history as an independent field of study emerged in the general cultural reassessment and reform of the 1960s and 1970s along with environmentalism, "conservation history", and a gathering awareness of the global scale of some environmental issues. This was in large part a reaction to the way nature was represented in history at the time, which "portrayed the advance of culture and technology as releasing humans from dependence on the natural world and providing them with the means to manage it [and] celebrated human mastery over other forms of life and the natural environment, and expected technological improvement and economic growth to accelerate". Environmental historians intended to develop a post-colonial historiography that was "more inclusive in its narratives".
Moral and political inspiration
Moral and political inspiration to environmental historians has come from American writers and activists such as Henry Thoreau, John Muir, Aldo Leopold, and Rachel Carson. Environmental history "frequently promoted a moral and political agenda although it steadily became a more scholarly enterprise". Early attempts to define the field were made in the United States by Roderick Nash in "The State of Environmental History" and in other works by frontier historians Frederick Jackson Turner, James Malin, and Walter Prescott Webb, who analyzed the process of settlement. Their work was expanded by a second generation of more specialized environmental historians such as Alfred Crosby, Samuel P. Hays, Donald Worster, William Cronon, Richard White, Carolyn Merchant, J. R. McNeill, Donald Hughes, and Chad Montrie in the United States and Paul Warde, Sverker Sorlin, Robert A. Lambert, T.C. Smout, and Peter Coates in Europe.
British Empire
Although environmental history was growing rapidly as a field after 1970 in the United States, it only reached historians of the British Empire in the 1990s (see Madhav Gadgil and Ramachandra Guha, This Fissured Land: An Ecological History of India, 1993). Gregory Barton argues that the concept of environmentalism emerged from forestry studies, and emphasizes the British imperial role in that research. He argues that the imperial forestry movement in India around 1900 included government reservations, new methods of fire protection, and attention to revenue-producing forest management. The result eased the fight between romantic preservationists and laissez-faire businessmen, thus producing the compromise from which modern environmentalism emerged.
In recent years numerous scholars cited by James Beattie have examined the environmental impact of the Empire. Beinart and Hughes argue that the discovery and commercial or scientific use of new plants was an important concern in the 18th and 19th centuries. The efficient use of rivers through dams and irrigation projects was an expensive but important method of raising agricultural productivity. Searching for more efficient ways of using natural resources, the British moved flora, fauna and commodities around the world, sometimes resulting in ecological disruption and radical environmental change. Imperialism also stimulated more modern attitudes toward nature and subsidized botany and agricultural research. Scholars have used the British Empire to examine the utility of the new concept of eco-cultural networks as a lens for examining interconnected, wide-ranging social and environmental processes.
Current practice
In the United States the American Society for Environmental History was founded in 1977, while the first institute devoted specifically to environmental history in Europe was established in 1991, based at the University of St. Andrews in Scotland. In 1986 the Dutch foundation for the history of environment and hygiene, Net Werk, was founded; it publishes four newsletters per year. In the UK the White Horse Press in Cambridge has, since 1995, published the journal Environment and History, which aims to bring scholars in the humanities and biological sciences closer together in constructing long and well-founded perspectives on present-day environmental problems. A similar publication, the Journal for Environmental History, is a combined Flemish-Dutch initiative mainly dealing with topics in the Netherlands and Belgium, although it also has an interest in European environmental history; each issue contains abstracts in English, French and German, and in 1999 the journal was converted into a yearbook for environmental history. In Canada the Network in Canadian History and Environment facilitates the growth of environmental history through numerous workshops and a significant digital infrastructure, including its website and podcast.
Communication between European nations is restricted by language difficulties. In April 1999 a meeting was held in Germany to overcome these problems and to co-ordinate environmental history in Europe. This meeting resulted in the creation of the European Society for Environmental History in 1999. Only two years after its establishment, ESEH held its first international conference in St. Andrews, Scotland. Around 120 scholars attended the meeting and 105 papers were presented on topics covering the whole spectrum of environmental history. The conference showed that environmental history is a viable and lively field in Europe; ESEH has since expanded to over 400 members and held further international conferences in 2003 and 2005. In 1999 the Centre for Environmental History was established at the University of Stirling. Some history departments at European universities now offer introductory courses in environmental history, postgraduate courses in environmental history have been established at the Universities of Nottingham, Stirling and Dundee, and more recently a Graduiertenkolleg (research training group) was created at the University of Göttingen in Germany. In 2009, the Rachel Carson Center for Environment and Society (RCC), an international, interdisciplinary center for research and education in the environmental humanities and social sciences, was founded as a joint initiative of Munich's Ludwig-Maximilians-Universität and the Deutsches Museum, with the support of the German Federal Ministry of Education and Research. The Environment & Society Portal (environmentandsociety.org) is the Rachel Carson Center's open-access digital archive and publication platform.
Related disciplines
Environmental history prides itself on bridging the gap between the arts and the natural sciences, although to date the scales weigh on the side of science. A definitive list of related subjects would be lengthy, and singling out some for special mention is a difficult task; those most frequently cited include historical geography, the history and philosophy of science, the history of technology and climate science. On the biological side there is, above all, ecology and historical ecology, but also forestry and especially forest history, archaeology and anthropology. When the subject engages in environmental advocacy it has much in common with environmentalism.
With increasing globalization, the impact of global trade on resource distribution, concern over never-ending economic growth and the many human inequities, environmental history is now gaining allies in the fields of ecological and environmental economics.
Engagement with sociological thinkers and the humanities remains limited, although it cannot be ignored, since beliefs and ideas guide human action. This has been seen as one reason for a perceived lack of support from traditional historians.
Issues
The subject has a number of areas of lively debate. These include discussion concerning: what subject matter is most appropriate; whether environmental advocacy can detract from scholarly objectivity; standards of professionalism in a subject where much outstanding work has been done by non-historians; the relative contribution of nature and humans in determining the passage of history; and the degree of connection with, and acceptance by, other disciplines – but especially mainstream history. For Paul Warde the sheer scale, scope and diffuseness of the environmental history endeavour calls for an analytical toolkit, "a range of common issues and questions to push forward collectively", and a "core problem". He sees a lack of "human agency" in its texts and suggests it be written so as to act as a source of information for environmental scientists; incorporate the notion of risk; analyse more closely what is meant by "environment"; and confront the way environmental history is at odds with the humanities because it emphasises the division between "materialist, and cultural or constructivist explanations for human behaviour".
Global sustainability
Many of the themes of environmental history inevitably examine the circumstances that produced the environmental problems of the present day, a litany of themes that challenge global sustainability including: population, consumerism and materialism, climate change, waste disposal, deforestation and loss of wilderness, industrial agriculture, species extinction, depletion of natural resources, invasive organisms and urban development. The simple message of sustainable use of renewable resources is frequently repeated, and as early as 1864 George Perkins Marsh was pointing out that the changes we make in the environment may later reduce the environment's usefulness to humans, so any changes should be made with great care – what we would nowadays call enlightened self-interest. Richard Grove has pointed out that "States will act to prevent environmental degradation only when their economic interests are threatened".
Advocacy
It is not clear whether environmental history should promote a moral or political agenda. The strong emotions raised by environmentalism, conservation and sustainability can interfere with historical objectivity: polemical tracts and strong advocacy can compromise objectivity and professionalism. Engagement with the political process certainly has its academic perils, although accuracy and commitment to the historical method are not necessarily threatened by environmental involvement: environmental historians have a reasonable expectation that their work will inform policy-makers.
A recent historiographical shift has placed an increased emphasis on inequality as an element of environmental history. Imbalances of power in resources, industry, and politics have resulted in the burden of industrial pollution being shifted to less powerful populations in both the geographic and social spheres. A critical examination of the traditional environmentalist movement from this historical perspective notes the ways in which early advocates of environmentalism sought the aesthetic preservation of middle-class spaces and sheltered their own communities from the worst effects of air and water pollution, while neglecting the plight of the less privileged.
Communities with less economic and sociopolitical power often lack the resources to get involved in environmental advocacy. Environmental history increasingly highlights the ways in which the middle-class environmental movement has fallen short and left behind entire communities. Interdisciplinary research now understands historic inequality as a lens through which to predict future social developments in the environmental sphere, particularly with regard to climate change. The United Nations Department of Economic and Social Affairs cautions that a warming planet will exacerbate environmental and other inequalities, particularly with regard to: "(a) increase in the exposure of the disadvantaged groups to the adverse effects of climate change; (b) increase in their susceptibility to damage caused by climate change; and (c) decrease in their ability to cope and recover from the damage suffered." As an interdisciplinary field that encompasses a new understanding of social justice dynamics in a rapidly changing global climate, environmental history is inherently advocative.
Declensionist narratives
Narratives of environmental history tend to be what scholars call "declensionist": that is, accounts of increasing decline under human activity. In other words, "declensionist" history is a form of the "lost golden age" narrative that has repeatedly appeared in human thought since ancient times.
Presentism and culpability
Under the accusation of "presentism" it is sometimes claimed that, with its genesis in late 20th-century environmentalism and conservation issues, environmental history is simply a reaction to contemporary problems, an "attempt to read late twentieth century developments and concerns back into past historical periods in which they were not operative, and certainly not conscious to human participants during those times". This is strongly related to the idea of culpability. In environmental debate blame can always be apportioned, but it is more constructive for the future to understand the values and imperatives of the period under discussion, so that causes are determined and the context explained.
Environmental determinism
For some environmental historians it is "the general conditions of the environment, the scale and arrangement of land and sea, the availability of resources, and the presence or absence of animals available for domestication, and associated organisms and disease vectors, that makes the development of human cultures possible and even predispose the direction of their development", and "history is inevitably guided by forces that are not of human origin or subject to human choice". This approach has been attributed to the American environmental historians Webb and Turner and, more recently, to Jared Diamond in his book Guns, Germs, and Steel, in which the presence or absence of disease vectors and of resources such as plants and animals amenable to domestication may not only stimulate the development of human culture but even determine, to some extent, its direction. The claim that the path of history has been forged by environmental rather than cultural forces is referred to as environmental determinism while, at the other extreme, is what may be called cultural determinism. An example of cultural determinism would be the view that human influence is so pervasive that the idea of pristine nature has little validity – that there is no way of relating to nature without culture.
Methodology
Useful guidance on the process of doing environmental history has been given by Donald Worster, Carolyn Merchant, William Cronon and Ian Simmons. Worster's three core subject areas (the environment itself, human impacts on the environment, and human thought about the environment) are generally taken as a starting point for the student, as they encompass many of the different skills required. The tools are those of both history and science, with a requirement for fluency in the language of natural science and especially ecology. In fact, methodologies and insights from a range of physical and social sciences are required; there seems to be universal agreement that environmental history is indeed a multidisciplinary subject.
Some key works
Chakrabarti, Ranjan (ed), Does Environmental History Matter: Shikar, Subsistence, Sustenance and the Sciences (Kolkata: Readers Service, 2006)
Chakrabarti, Ranjan (ed.), Situating Environmental History (New Delhi: Manohar, 2007)
Cronon, William (ed), Uncommon Ground: Toward Reinventing Nature (New York: W.W. Norton & Company, 1995)
Dunlap, Thomas R., Nature and the English Diaspora: Environment and History in the United States, Canada, Australia, and New Zealand . (New York/Cambridge: Cambridge University Press, 1999)
Glacken, Clarence, Traces on the Rhodian Shore: Nature and Culture in Western Thought From Ancient Times to the End of the Nineteenth Century (Berkeley: University of California Press, 1967)
Griffiths, Tom and Libby Robin (eds.), Ecology and Empire: The Environmental History of Settler Societies. (Keele: Keele University Press, 1997)
Grove, Richard, Green Imperialism: Colonial Expansion, Tropical Island Edens and the Origins of Environmentalism, 1600–1860. (Cambridge University Press, 1995)
Headrick, Daniel, Humans Versus Nature: A Global Environmental History. (New York: Oxford University Press, 2020)
Hughes, J.D., An Environmental History of the World: Humankind's Changing Role in the Community of Life (Oxford: Routledge, 2001)
Hughes, J.D., "Global Environmental History: The Long View", Globalizations, Vol. 2 No. 3, 2005, 293–208.
LaFreniere, Gilbert F., 2007. The Decline of Nature: Environmental History and the Western Worldview, Academica Press, Bethesda, MD
MacKenzie, John M., Imperialism and the Natural World. (Manchester University Press, 1990)
McCormick, John, Reclaiming Paradise: The Global Environmental Movement. (Bloomington: Indiana University Press, 1989)
Rajan, Ravi S., Modernizing Nature: Forestry and Imperial Eco-Development, 1800–1950 (Oxford: Oxford University Press, 2006)
Redclift, Michael R., Frontier: Histories of Civil Society and Nature (Cambridge, MA.: The MIT Press, 2006).
Stevis, Dimitris, "The Globalizations of the Environment", Globalizations, Vol. 2 No. 3, 2005, 323–334.
Williams, Michael, Deforesting the Earth: From Prehistory to Global Crisis. An Abridgement. (Chicago: University of Chicago Press, 2006)
White, Richard, The Organic Machine: The Remaking of the Columbia River. (Hill and Wang, 1996)
Worster, Donald, Nature's Economy: A Study of Ecological Ideals. (Cambridge University Press, 1977)
Zeilinga de Boer, Jelle and Donald Theodore Sanders, Volcanoes in Human History, The Far-reaching Effects of Major Eruptions. (Princeton: Princeton University Press, 2002)
Germinal works by region
In 2004 a theme issue of Environment and History 10(4) provided an overview of environmental history as practiced in Africa, the Americas, Australia, New Zealand, China and Europe as well as those with global scope. J. Donald Hughes (2006) has also provided a global conspectus of major contributions to the environmental history literature.
George Perkins Marsh, Man and Nature; or, Physical Geography as Modified by Human Action, ed. David Lowenthal (Cambridge, MA: Belknap Press of Harvard University Press, 1965 [1864])
Africa
Adams, Jonathan S. and Thomas McShane, The Myth of Wild Africa: Conservation without Illusion (Berkeley: University of California Press, 1996) 266p; covers 1900 to 1980s
Anderson, David; Grove, Richard. Conservation in Africa: People, Policies & Practice (1988), 355pp
Bolaane, Maitseo. "Chiefs, Hunters & Adventurers: The Foundation of the Okavango/Moremi National Park, Botswana". Journal of Historical Geography. 31.2 (April 2005): 241–259.
Carruthers, Jane. "Africa: Histories, Ecologies, and Societies", Environment and History, 10 (2004), pp. 379–406;
Cock, Jacklyn and Eddie Koch (eds.), Going Green: People, Politics, and the Environment in South Africa (Cape Town: Oxford University Press, 1991)
Dovers, Stephen, Ruth Edgecombe, and Bill Guest (eds.), South Africa's Environmental History: Cases and Comparisons (Athens: Ohio University Press, 2003)
Green Musselman, Elizabeth, "Plant Knowledge at the Cape: A Study in African and European Collaboration", International Journal of African Historical Studies, Vol. 36, 2003, 367–392
Jacobs, Nancy J., Environment, Power and Injustice: A South African History (Cambridge: Cambridge University Press, 2003)
Maathai, Wangari, Green Belt Movement: Sharing the Approach and the Experience (New York: Lantern Books, 2003)
McCann, James, Green Land, Brown Land, Black Land: An Environmental History of Africa, 1800–1990 (Portsmouth: Heinemann, 1999)
Showers, Kate B. Imperial Gullies: Soil Erosion and Conservation in Lesotho (2005) 346pp
Steyn, Phia, "The lingering environmental impact of repressive governance: the environmental legacy of the apartheid-era for the new South Africa", Globalizations, 2#3 (2005), 391–403
Antarctica
Pyne, S.J., The Ice: A Journey to Antarctica. (University of Iowa Press, 1986).
Canada
Dorsey, Kurkpatrick. The Dawn of Conservation Diplomacy: U.S.-Canadian Wildlife Protection Treaties in the Progressive Era. (Washington: University of Washington Press, 1998)
Loo, Tina. States of Nature: Conserving Canada's Wildlife in the Twentieth Century. (Vancouver: UBC Press, 2006)
Wynn, Graeme. Canada and Arctic North America: An Environmental History. (Santa Barbara: ABC-CLIO, 2007)
Parr, Joy. Sensing Changes: Technologies, Environments, and the Everyday, 1953–2003. (Vancouver: UBC Press, 2010)
United States
Allitt, Patrick. A Climate of Crisis: America in the Age of Environmentalism (2014), wide-ranging scholarly history since 1950s excerpt
Andrews, Richard N.L., Managing the Environment, Managing Ourselves: A History of American Environmental Policy (Yale University Press, 1999)
Bates, J. Leonard. "Fulfilling American Democracy: The Conservation Movement, 1907 to 1921", The Mississippi Valley Historical Review, (1957) 44#1 pp. 29–57. in JSTOR
Browning, Judkin and Timothy Silver. An Environmental History of the Civil War (2020) online review
Brinkley, Douglas G. The Wilderness Warrior: Theodore Roosevelt and the Crusade for America, (2009) excerpt and text search
Carson, Rachel, Silent Spring (Cambridge, Mass. : Riverside Press, 1962)
Cawley, R. McGreggor. Federal Land, Western Anger: The Sagebrush Rebellion and Environmental Politics (1993), on conservatives
Cronon, William, Changes in the Land: Indians, Colonists and the Ecology of New England (New York: Hill and Wang, 1983)
Cronon, William, Nature's Metropolis: Chicago and the Great West (New York: W.W. Norton & Company, 1991)
Dant, Sara. Losing Eden: An Environmental History of the American West. (U of Nebraska Press, 2023). online, also see online book review
Flippen, J. Brooks. Nixon and the Environment (2000).
Gottlieb, Robert, Forcing the Spring: The Transformation of the American Environmental Movement (Washington: Island Press, 1993)
Hays, Samuel P. Conservation and the Gospel of Efficiency: The Progressive Conservation Movement, 1890–1920 (Cambridge, MA: Harvard University Press, 1959), on the Progressive Era.
Hays, Samuel P. Beauty, Health, and Permanence: Environmental Politics in the United States, 1955–1985 (1987), the standard scholarly history
Hays, Samuel P. A History of Environmental Politics since 1945 (2000), shorter standard history
King, Judson. The Conservation Fight, From Theodore Roosevelt to the Tennessee Valley Authority (2009)
Merchant, Carolyn. American environmental history: An introduction (Columbia University Press, 2007).
Merchant, Carolyn. The Columbia guide to American environmental history (Columbia University Press, 2012).
Merchant, Carolyn. The Death of Nature: Women, Ecology and the Scientific Revolution (New York: Harper & Row, 1980)
Nash, Roderick. The Rights of Nature: A History of Environmental Ethics (Madison: University of Wisconsin Press, 1989)
Nash, Roderick. Wilderness and the American Mind, (4th ed. 2001), the standard intellectual history
Rice, James D. Nature and History in the Potomac Country: From Hunter-Gatherers to the Age of Jefferson (2009)
Sale, Kirkpatrick. The Green Revolution: The American Environmental Movement, 1962–1999 (New York: Hill & Wang, 1993)
Scheffer, Victor B. The Shaping of Environmentalism in America (1991).
Stradling, David (ed), Conservation in the Progressive Era: Classic Texts (Washington: University of Washington Press, 2004), primary sources
Strong, Douglas H. Dreamers & Defenders: American Conservationists. (1988) online edition, good biographical studies of the major leaders
Turner, James Morton, "The Specter of Environmentalism": Wilderness, Environmental Politics, and the Evolution of the New Right. The Journal of American History 96.1 (2009): 123–47 online at History Cooperative
Unger, Nancy C., Beyond Nature's Housekeepers: American Women in Environmental History. (New York: Oxford University Press, 2012)
Worster, Donald, Under Western Skies: Nature and History in the American West (Oxford University Press, 1992)
Melosi, Martin V., Coping with Abundance: Energy and Environment in Industrial America (Temple University Press, 1985)
Steinberg, Ted, Down to Earth: Nature's Role in American History (Oxford University Press, 2002)
Latin America and the Caribbean
Boyer, Christopher R. Political Landscapes: Forests, Conservation, and Community in Mexico. (Durham: Duke University Press 2015.)
Dean, Warren. With Broadax and Firebrand: The Destruction of the Brazilian Atlantic Forest. (Berkeley: University of California Press, 1995)
Funes Monzote, Reinaldo. From Rainforest to Cane Field in Cuba: An Environmental History since 1492. (2008)
Matthews, Andrew S. Instituting Nature: Authority, Expertise, and Power in Mexican Forests. (Cambridge: Massachusetts Institute of Technology Press, 2011.)
Melville, Elinor. A Plague of Sheep: Environmental Consequences of the Conquest of Mexico. (Cambridge: Cambridge University Press, 1994)
Miller, Shawn William. An Environmental History of Latin America. (2007)
Miller, Shawn William. Fruitless Trees: Portuguese Conservation and Brazil's Colonial Timber. Stanford: Stanford University Press 2000.
Noss, Andrew and Imke Oetting. "Hunter Self-Monitoring by the Izoceño-Guarani in the Bolivian Chaco". Biodiversity & Conservation. 14.11 (2005): 2679–2693.
Raffles, Hugh, et al. "Further Reflections on Amazonian Environmental History: Transformations of Rivers and Streams". Latin American Research Review. Vol. 38, Number 3, 2003: 165–187
Santiago, Myrna I. The Ecology of Oil: Environment, Labor, and the Mexican Revolution, 1900–1938. Cambridge: Cambridge University Press 2006.
Simonian, Lane. Defending the Land of the Jaguar: A History of Conservation in Mexico. (Austin: University of Texas Press, 1995)
Wakild, Emily. Revolutionary Parks: Conservation, Social Justice, and Mexico's National Parks, 1910–1940. Tucson: University of Arizona Press 2012.
South and South East Asia
Boomgaard, Peter, ed. Paper Landscapes: Explorations in the Environment of Indonesia (Leiden: KITLV Press, 1997)
Arnold, D. & Guha, R. (eds) 1995. Nature, Culture, Imperialism: Essays on the Environmental History of South Asia. Delhi, India: Oxford University Press.
Fisher, Michael. An Environmental History of India: From Earliest Times to the Twenty-First Century (Cambridge UP, 2018)
Gadgil, M. and R. Guha. This Fissured Land: An Ecological History of India (University of California Press, 1993)
Grove, Richard, Vinita Damodaran, and Satpal Sangwan (eds.) Nature & the Orient: The Environmental History of South and Southeast Asia (Oxford University Press, 1998)
Hill, Christopher V., South Asia: An Environmental History (Santa Barbara: ABC-Clio, 2008)
Shiva, Vandana, Stolen Harvest: the Hijacking of the Global Food Supply (Cambridge MA: South End Press, 2000),
Yok-shiu Lee and Alvin Y. So, Asia's Environmental Movements: Comparative Perspectives (Armonk: M.E. Sharpe, 1999)
Iqbal, Iftekhar. The Bengal Delta: Ecology, State and Social Change, 1840–1943 (London: Palgrave Macmillan, 2010)
East Asia
Elvin, Mark & Ts'ui-jung Liu (eds.), Sediments of Time: Environment and Society in Chinese History (Cambridge University Press, 1998)
Totman, Conrad D., The Green Archipelago: Forestry in Preindustrial Japan (Berkeley: University of California Press, 1989)
Totman, Conrad D., Pre-industrial Korea and Japan in Environmental Perspective (Leiden: Brill, 2004)
Liu, Ts'ui-jung and James Beattie, eds, Environment, Modernization and Development in East Asia: Perspectives from Environmental History (Basingstoke: Palgrave Studies in World Environmental History, 2016)
Tull, Malcolm, and A. R. Krishnan. "Resource Use and Environmental Management in Japan, 1890–1990", in: J.R. McNeill (ed), Environmental History of the Pacific and the Pacific Rim (Aldershot Hampshire: Ashgate Publishing, 2001)
Menzies, Nicholas, Forest and Land Management in Late Imperial China (London: Macmillan Press, 1994)
Maohong, Bao, "Environmental History in China", Environment and History, Volume 10, Number 4, November 2004, pp. 475–499
Marks, R. B., Tigers, rice, silk and silt. Environment and economy in late imperial South China (Cambridge: Cambridge University Press, 1998)
Perdue, Peter C., "Lakes of Empire: Man and Water in Chinese History", Modern China, 16 (January 1990): 119–29
Shapiro, Judith, Mao's War against Nature: Politics and the Environment in Revolutionary China (New York: Cambridge University Press. 2001)
Middle East and North Africa
McNeill, J. R. "The Eccentricity of the Middle East and North Africa's Environmental History." Water on Sand: Environmental Histories of the Middle East and North Africa (2013): 27–50.
Mikhail, Alan, ed. Water on sand: Environmental histories of the Middle East and North Africa. Oxford University Press, 2013.
Dursun, Selçuk. "A call for an environmental history of the Ottoman Empire and Turkey: Reflections on the fourth ESEH conference." New Perspectives on Turkey 37 (2007): 211–222.
Dursun, Selçuk. "Forest and the state: history of forestry and forest administration in the Ottoman Empire." Unpublished PhD. Sabanci University (2007).
Mikhail, Alan. Nature and empire in Ottoman Egypt: An environmental history. Cambridge University Press, 2011.
White, Sam. "Rethinking disease in Ottoman history." International Journal of Middle East Studies 42, no. 4 (2010): 549–567.
Burke III, Edmund, "The Coming Environmental Crisis in the Middle East: A Historical Perspective, 1750–2000 CE" (April 27, 2005). UC World History Workshop. Essays and Positions from the World History Workshop. Paper 2.
Tal, Alon, Pollution in a Promised Land: An Environmental History of Israel (Berkeley: University of California Press, 2002)
Europe
Brimblecombe, Peter and Christian Pfister, The Silent Countdown: Essays in European Environmental History (Berlin: Springer-Verlag, 1993)
Crosby, Alfred W., Ecological Imperialism: The Biological Expansion of Europe, 900–1900 (Cambridge: Cambridge University Press, 1986)
Christensen, Peter, Decline of Iranshahr: Irrigation and Environments in the History of the Middle East, 500 B.C. to 1500 A.D (Austin: University of Texas Press, 1993)
Ditt, Karl, 'Nature Conservation in England and Germany, 1900–1970: Forerunner of Environmental Protection?', Contemporary European History 5:1–28.
Hughes, J. Donald, Pan's Travail: Environmental Problems of the Ancient Greeks and Romans (Baltimore: Johns Hopkins, 1994)
Hughes, J. Donald, The Mediterranean. An Environmental History (Santa Barbara: ABC-Clio, 2005)
Martí Escayol, Maria Antònia. La construcció del concepte de natura a la Catalunya moderna (Barcelona: Universitat Autonoma de Barcelona, 2004)
Netting, Robert, Balancing on an Alp: Ecological Change and Continuity in a Swiss Mountain Community (Cambridge University Press, 1981)
Parmentier, Isabelle, dir., Ledent, Carole, coll., La recherche en histoire de l'environnement : Belgique, Congo, Rwanda, Burundi, Namur, 2010 (Coll. Autres futurs).
Pyne, Stephen J., Vestal Fire. An Environmental History, Told through Fire, of Europe and Europe's Encounter with the World (Seattle: University of Washington Press, 1997)
Richards, John F., The Unending Frontier: Environmental History of the Early Modern World (Berkeley: University of California Press, 2003)
Whited, Tamara L. (ed.), Northern Europe. An Environmental History (Santa Barbara: ABC-Clio, 2005)
Australia, New Zealand & Oceania
Beattie, James, Empire and Environmental Anxiety: Health, Science, Art and Conservation in South Asia and Australasia, 1800–1920 (Basingstoke: Palgrave Macmillan, 2011)
Beattie, James, Emily O'Gorman and Matt Henry, eds, Climate, Science and Colonization: Histories from Australia and New Zealand (New York: Palgrave Macmillan, 2014).
Bennett, Judith Ann, Natives and Exotics: World War II and Environment in the Southern Pacific (Honolulu: University of Hawai'i Press, 2009)
Bennett, Judith Ann, Pacific Forest: A History of Resource Control and Contest in Solomon Islands, c. 1800–1997 (Cambridge and Leiden: White Horse Press and Brill, 2000)
Bridgman, H. A., "Could climate change have had an influence on the Polynesian migrations?", Palaeogeography, Palaeoclimatology, Palaeoecology, 41(1983) 193–206.
Brooking, Tom and Eric Pawson, Environmental Histories of New Zealand (Oxford: Oxford University Press, 2002).
Carron, L.T., A History of Forestry in Australia (Canberra, 1985).
Cassels, R., "The Role of Prehistoric Man in the Faunal Extinctions of New Zealand and other Pacific Islands", in Martin, P. S. and Klein, R. G. (eds.) Quaternary Extinctions: A Prehistoric Revolution (Tucson, The University of Arizona Press, 1984)
D'Arcy, Paul, The People of the Sea: Environment, Identity, and History in Oceania (Honolulu: University of Hawai'i Press, 2006)
Dargavel, John (ed.), Australia and New Zealand Forest Histories. Short Overviews, Australian Forest History Society Inc. Occasional Publications, No. 1 (Kingston: Australian Forest History Society, 2005)
Dovers, Stephen (ed), Essays in Australian Environmental History: Essays and Cases (Oxford: OUP, 1994).
Dovers, Stephen (ed.), Environmental History and Policy: Still Settling Australia (South Melbourne: Oxford University Press, 2000).
Flannery, Tim, The Future Eaters: An Ecological History of the Australian Lands and People (Sydney: Reed Books, 1994)
Garden, Don, Australia, New Zealand, and the Pacific. An Environmental History (Santa Barbara: ABC-Clio, 2005)
Hughes, J. Donald, "Nature and Culture in the Pacific Islands", Leidschrift, 21 (2006) 1, 129–144.
Hughes, J. Donald, "Tahiti, Hawaii, New Zealand: Polynesian impacts on Island Ecosystems", in: An Environmental History of the World. Humankind"s Changing Role in the Community of Life, (London & New York, Routledge, 2002)
James Beattie, "Environmental Anxiety in New Zealand, 1840–1941: Climate Change, Soil Erosion, Sand Drift, Flooding and Forest Conservation", Environment and History 9(2003): 379–392
Knight, Catherine, New Zealand's Rivers: An Environmental History (Christchurch: Canterbury University Press, 2016).
McNeill, John R., "Of Rats and Men. A Synoptic Environmental History of the Island Pacific", Journal of World History, Vol. 5, no. 2, 299–349
Pyne, Stephen, Burning Bush: A Fire History of Australia (New York, Henry Holt, 1991).
Robin, Libby, Defending the Little Desert: The Rise of Ecological Consciousness in Australia (Melbourne: MUP, 1998)
Robin, Libby, How a Continent Created a Nation (Sydney: University of New South Wales Press, 2007)
Robin, Libby, The Flight of the Emu: A Hundred Years of Australian Ornithology 1901–2001, (Melbourne: Melbourne University Press, 2000)
Smith, Mike, Hesse, Paul (eds.), 23 Degrees S: Archaeology and Environmental History of the Southern Deserts (Canberra: National Museum of Australia Press, 2005)
Star, Paul, "New Zealand Environmental History: A Question of Attitudes", Environment and History 9(2003): 463–475
Young, Ann R.M, Environmental Change in Australia since 1788 (Oxford University Press, 2000)
Young, David, Our Islands, Our Selves: A History of Conservation in New Zealand (Dunedin: Otago University Press, 2004)
United Kingdom
Beinart, William and Lotte Hughes, Environment and Empire (Oxford, 2007).
Clapp, Brian W., An Environmental History of Britain Since the Industrial Revolution (London, 1994). excerpt
Grove, Richard, Green Imperialism: Colonial Expansion, Tropical Island Edens and the Origins of Environmentalism, 1600–1860 (Cambridge, 1994).
Lambert, Robert, Contested Mountains (Cambridge, 2001).
Mosley, Stephen, The Chimney of the World: A History of Smoke Pollution in Victorian and Edwardian Manchester (White Horse, 2001).
Porter, Dale, The Thames Embankment: Environment, Technology, and Society in Victorian London, (University of Akron, 1998).
Simmons, Ian G., An Environmental History of Great Britain from 10,000 Years Ago to the Present (Edinburgh, 2001).
Sheail, John, An Environmental History of Twentieth-Century Britain (Basingstoke, 2002).
Thorsheim, Peter, Inventing Pollution: Coal, Smoke, and Culture in Britain since 1800 (Ohio University, 2006).
Future
Environmental history, like all historical studies, shares the hope that through an examination of past events it may be possible to forge a more considered future. In particular a greater depth of historical knowledge can inform environmental controversies and guide policy decisions.
The subject continues to provide new perspectives, offering cooperation between scholars with different disciplinary backgrounds and providing an improved historical context to resource and environmental problems. There seems little doubt that, with increasing concern for our environmental future, environmental history will continue along the path of environmental advocacy from which it originated, since "human impact on the living systems of the planet bring us no closer to utopia, but instead to a crisis of survival". Key themes will include population growth, climate change, conflict over environmental policy at different levels of human organization, extinction, biological invasions, the environmental consequences of technology (especially biotechnology), and the reduced supply of resources – most notably energy, materials and water. Hughes comments that environmental historians "will find themselves increasingly challenged by the need to explain the background of the world market economy and its effects on the global environment. Supranational instrumentalities threaten to overpower conservation in a drive for what is called sustainable development, but which in fact envisions no limits to economic growth". Hughes also notes that "environmental history is notably absent from nations that most adamantly reject US, or Western influences".
Michael Bess sees the world increasingly permeated by potent technologies in a process he calls "artificialization" which has been accelerating since the 1700s, but at a greatly accelerated rate after 1945. Over the next fifty years, this transformative process stands a good chance of turning our physical world, and our society, upside-down. Environmental historians can "play a vital role in helping humankind to understand the gale-force of artifice that we have unleashed on our planet and on ourselves".
Against this background "environmental history can give an essential perspective, offering knowledge of the historical process that led to the present situation, give examples of past problems and solutions, and an analysis of the historical forces that must be dealt with" or, as expressed by William Cronon, "The viability and success of new human modes of existing within the constraints of the environment and its resources requires both an understanding of the past and an articulation of a new ethic for the future."
Related journals
Key journals in this field include:
Environment and History
Environmental History, co-published by the American Society for Environmental History and the Forest History Society
Global Environment: A Journal of History and Natural and Social Sciences
International Review of Environmental History
See also
2020s in environmental history
American Society for Environmental History
Conservation Movement
Conservation in the United States
Ecosemiotics
Environmental history of Latin America
List of environmental history topics
Network in Canadian History and Environment
Rachel Carson Center for Environment and Society
References
Bibliography
Global
Uekötter, Frank. The Vortex: An Environmental History of the Modern World (University of Pittsburgh Press, 2023) online book review
Asia & Middle East
Scholarly essays on plague and environment in late Ottoman Egypt, the rise and fall of environmentalism in Lebanon, the politics of water in the making of Saudi Arabia, etc.
Europe and Russia
Bonhomme, Brian. Forests, Peasants and Revolutionaries: Forest Conservation & Organization in Soviet Russia, 1917–1929 (2005) 252pp
Campopiano, M., “Evolution of the Landscape and the Social and Political Organisation of Water Management: the Po Valley in the Middle Ages (Fifth to Fourteenth Centuries)”, in Borger, de Kraker, Soens, Thoen and Tys, Landscapes or seascapes? (CORN, 13), 2013, 313–332
Cioc, Mark. The Rhine: An Eco-Biography, 1815–2000 (2002).
Clapp, Brian William. An environmental history of Britain since the industrial revolution (Routledge, 2014).
Dryzek, John S., et al. Green states and social movements: environmentalism in the United States, United Kingdom, Germany, and Norway (Oxford UP, 2003).
Hoffmann, Richard. An Environmental History of Medieval Europe (2014)
Luckin, Bill, and Peter Thorsheim, eds. A Mighty Capital under Threat: The Environmental History of London, 1800–2000 (U of Pittsburgh Press, 2020) online review.
Smout, T. Christopher. Nature contested: environmental history in Scotland and Northern England since 1600 (2000)
Thorsheim, Peter. Inventing Pollution: Coal, Smoke, and Culture in Britain since 1800 (2009)
Uekotter, Frank. The greenest nation?: A new history of German environmentalism (Mit Press, 2014).
Warren, Charles R. Managing Scotland's environment (2002)
Weiner, Douglas R. Models of Nature: Ecology, Conservation and Cultural Revolution in Soviet Russia (2000) 324pp; covers 1917 to 1939
Historiography
Arndt, Melanie: Environmental History, Version: 3, in: Docupedia Zeitgeschichte, 23 August 2016
Beattie, James. "Recent Themes in the Environmental History of the British Empire," History Compass (Feb 2012) 10#2 pp 129–139.
Bess, Michael, Mark Cioc, and James Sievert, "Environmental History Writing in Southern Europe," Environmental History, 5 (2000), pp. 545–56;
Cioc, Mark, Björn-Ola Linnér, and Matt Osborn, "Environmental History Writing in Northern Europe," Environmental History, 5 (2000), pp. 396–406
Coates, Peter. "Emerging from the Wilderness (or, from Redwoods to Bananas): Recent Environmental History in the United States and the Rest of the Americas," Environment and History 10 (2004), pp. 407–38
Conway, Richard. "The Environmental History of Colonial Mexico," History Compass (2017) 15#7 DOI: 10.1111/hic3.12388
Haq, Gary, and Alistair Paul. Environmentalism since 1945 (Routledge, 2013).
Hay, Peter. Main Currents in Western Environmental Thought (2002), standard scholarly history.
Mancall, Peter C. "Pigs for Historians: Changes in the Land and Beyond". William and Mary Quarterly (2010) 67#2 pp. 347–375 in JSTOR
Mosley, Stephen. "Common Ground: Integrating Social and Environmental History," Journal of Social History, Volume 39, Number 3, Spring 2006, pp. 915–933; relation to Social history
Robin, Libby, and Tom Griffiths, "Environmental History in Australasia," Environment and History, 10 (2004), pp. 439–74
Sedrez, Lise. (2011) "Environmental History of Modern Latin America" in A Companion to Latin American History, ed. Thomas H. Holloway. Wiley-Blackwell.
Wakild, Emily (2011) "Environment and Environmentalism" in A Companion to Mexican History and Culture, William H. Beezley, ed. Wiley Blackwell.
Further reading
Biasillo, Roberta, Claudio de Majo, eds. "Storytelling and Environmental History: Experiences from Germany and Italy", RCC Perspectives: Transformations in Environment and Society 2020, no. 2. doi.org/10.5282/rcc/9116.
Hall, Marcus, and Patrick Kupper (eds.), "Crossing Mountains: The Challenges of Doing Environmental History", RCC Perspectives 2014, no. 4. doi.org/10.5282/rcc/6510.
Mauch, Christof, "Notes From the Greenhouse: Making the Case for Environmental History", RCC Perspectives 2013, no. 6. doi.org/10.5282/rcc/5661.
External links
Podcasts
Jan W. Oosthoek podcasts on many aspects of the subject, including interviews with eminent environmental historians
Nature's Past: Canadian Environmental History Podcast features monthly discussions about the environmental history research community in Canada.
EnvirohistNZ Podcast explores the environmental history of New Zealand.
Institutions & resources
International Consortium of Environmental History Organizations (ICE-HO)
Oosthoek, K.J.W. What is environmental history?
Historiographies of different countries
H-Environment web resource for students of environmental history
American Society for Environmental History
European Society for Environmental History
Environmental History Now
Environmental History Resources
Environmental History Timeline
Environmental History on the Internet
Rachel Carson Center for Environment and Society and its Environment & Society Portal
Forest History Society
Australian and New Zealand Environmental History Network
Brazilian Environmental History Network
Centre for Environmental History at the Australian National University
Network in Canadian History and the Environment
Centre for World Environmental History, University of Sussex
Croatian journal for environmental history, in Croatian, English, German, and Slovenian
Environmental History Virtual Library
Environmental History Top News
Environmental History Mobile Application Project
HistoricalClimatology.com explores climate history, a form of environmental history.
Climate History Network, a network of climate historians.
Environment & Society Portal
Turkish Society for Environmental History
Journals
JSTOR: All Volumes and Issues - Browse - Environmental History [1996–2007 (Volumes 1–12)]
JSTOR: All Volumes and Issues - Browse - Forest & Conservation History [1990–1995 (Volumes 34–39)]
JSTOR: All Volumes and Issues - Browse - Environmental Review: ER [1976–1989 (Volumes 1–13)]
JSTOR: All Volumes and Issues - Browse - Environmental History Review [1990–1995 (Volumes 14–19)]
JSTOR: All Volumes and Issues - Browse - Journal of Forest History [1974–1989 (Volumes 18–33)]
JSTOR: All Volumes and Issues - Browse - Forest History [1957–1974 (Volumes 1–17)]
Environment and History, Published by White Horse Press with British-based Editorial collective
Environmental History, Co-published quarterly by the American Society for Environmental History and the (US) Forest History Society
Global Environment: A Journal of History and Natural and Social Sciences, Published in New Zealand with special regard to the modern and contemporary ages
Historia Ambiental Latinoamericana y Caribeña (HALAC)
Journal of the North Atlantic
Economic and Ecohistory: Research Journal for Economic and Environmental History (Croatia)
Pacific Historical Review
Arcadia: Explorations in Environmental History, published by the Rachel Carson Center for Environment and Society and ESEH
Articles
Think About Nature
Videos
Notes from the Field public television episodes on U.S. environmental history subjects
Environmental social science
Landscape design history
Historiography
History
Fields of history
Mesoamerican chronology

Mesoamerican chronology divides the history of prehispanic Mesoamerica into several periods: the Paleo-Indian (first human habitation until 3500 BCE); the Archaic (before 2600 BCE); the Preclassic or Formative (2500 BCE – 250 CE); the Classic (250–900 CE); and the Postclassic (900–1521 CE); as well as the post-European-contact Colonial Period (1521–1821) and the Postcolonial, or the period after independence from Spain (1821–present).
The periodisation of Mesoamerica by researchers is based on archaeological, ethnohistorical, and modern cultural anthropology research dating to the early twentieth century. Archaeologists, ethnohistorians, historians, and cultural anthropologists continue to work to develop cultural histories of the region.
Overview
Paleo-Indian period
18000–8000 BCE
The Paleo-Indian (less frequently, Lithic) period or era spans from the first signs of human presence in the region, which many believe resulted from migration across the Bering Land Bridge, to the establishment of agriculture and other practices (e.g. pottery, permanent settlements) and subsistence techniques characteristic of proto-civilizations. In Mesoamerica, the termination of this phase and its transition into the succeeding Archaic period may generally be reckoned at between 10,000 and 8000 BCE. This dating is approximate only, and different timescales may be used between fields and sub-regions.
Archaic Era
Before 2600 BCE
During the Archaic Era agriculture was developed in the region and permanent villages were established. Late in this era, use of pottery and loom weaving became common, and class divisions began to appear. Many of the basic technologies of Mesoamerica in terms of stone-grinding, drilling, pottery etc. were established during this period.
Preclassic Era or Formative Period
2000 BCE – 250 CE
During the Preclassic Era, or Formative Period, large-scale ceremonial architecture, writing, cities, and states developed. Many of the distinctive elements of Mesoamerican civilization can be traced to this period, including the dominance of corn, the building of pyramids, human sacrifice, jaguar-worship, the complex calendar, and many of the gods. The Olmec civilization developed and flourished at such sites as La Venta and San Lorenzo Tenochtitlán, eventually succeeded by the Epi-Olmec culture between 300–250 BCE. The Zapotec civilization arose in the Valley of Oaxaca, and the Teotihuacan civilization in the Valley of Mexico. The Maya civilization began to develop in the Mirador Basin (in modern-day Guatemala) and the Epi-Olmec culture in the Isthmus of Tehuantepec (in modern-day Chiapas), later expanding into Guatemala and the Yucatan Peninsula.
In Central America there were some Olmec influences; the archaeological sites of Los Naranjos and Yarumela in Honduras, built by ancestors of the Lencas, stand out as reflecting the architectural influence of this culture on Central American soil. Other sites with possible Olmec influence have been reported, such as Puerto Escondido, in the Sula Valley near La Lima, and Hato Viejo in the department of Olancho, where a jadeite statuette has been found that shares many characteristics with those found in Mexico.
Classic Period
250–900 CE
The Classic Period was dominated by numerous independent city-states in the Maya region and also featured the beginnings of political unity in central Mexico and the Yucatán. Regional differences between cultures grew more manifest. The city-state of Teotihuacan dominated the Valley of Mexico until the early 8th century, but little is known of the political structure of the region because the Teotihuacanos left no written records. The city-state of Monte Albán dominated the Valley of Oaxaca until the late Classic, leaving limited records in their script, which is as yet mostly undeciphered. Highly sophisticated arts such as stuccowork, architecture, sculptural reliefs, mural painting, pottery, and lapidary developed and spread during the Classic era.
In the Maya region, under considerable military influence by Teotihuacan after the "arrival" of Siyaj K'ak' in 378 CE, numerous city states such as Tikal, Uaxactun, Calakmul, Copán, Quirigua, Palenque, Cobá, and Caracol reached their zeniths. Each of these polities was generally independent, although they often formed alliances and sometimes became vassal states of each other. The main conflict during this period was between Tikal and Calakmul, which fought a series of wars over the course of more than half a millennium. Each of these states declined during the Terminal Classic and were eventually abandoned.
Postclassic Period
900–1521 CE
In the Postclassic Period, many of the great nations and cities of the Classic Era collapsed, although some continued, such as those in Oaxaca and Cholula, and the Maya of Yucatan at Chichen Itza and Uxmal. This is sometimes thought to have been a period of increased chaos and warfare.
The Postclassic is often viewed as a period of cultural decline. However, it was a time of technological advancement in architecture, engineering, and weaponry. Metallurgy (introduced c. 800) came into use for jewelry and some tools, with new alloys and techniques being developed in a few centuries. The Postclassic was a period of rapid movement and population growth—especially in Central Mexico post-1200—and of experimentation in governance. For instance, in Yucatan, 'dual rulership' apparently replaced the more theocratic governments of Classic times, while oligarchic councils operated in much of central Mexico. Likewise, it appears that the wealthy pochteca (merchant class) and military orders became more powerful than was apparently the case in Classic times. This afforded some Mesoamericans a degree of social mobility.
The Toltec for a time dominated central Mexico in the 9th–10th century, then collapsed. The northern Maya were for a time united under Mayapan. Oaxaca was briefly united by Mixtec rulers in the 11th–12th centuries.
The Aztec Empire arose in the early 15th century and appeared to be on a path to asserting dominance over the Valley of Mexico region not seen since Teotihuacan. By the 15th century, the Mayan 'revival' in Yucatan and southern Guatemala and the flourishing of Aztec imperialism evidently enabled a renaissance of fine arts and science. Examples include the 'Pueblan-Mexica' style in pottery, codex illumination, and goldwork, the flourishing of Nahua poetry, and the botanical institutes established by the Aztec elite.
Spain was the first European power to contact Mesoamerica. Its conquistadors, aided by numerous native allies, conquered the Aztecs.
Colonial Period
1521–1821 CE
The Colonial Period was initiated with the Spanish conquest (1519–1521), which ended the hegemony of the Aztec Empire. The conquest was accomplished through the Spaniards' strategic alliances with enemies of the empire, most especially Tlaxcala, but also Huexotzinco, Xochimilco, and even Texcoco, a former partner in the Aztec Triple Alliance.
Although not all parts of Mesoamerica were brought under control of the Spanish Empire immediately, the defeat of the Aztecs marked the dramatic beginning of an inexorable process of conquest and incorporation that Spain completed in the mid-seventeenth century.
Indigenous peoples did not disappear, although their numbers were greatly reduced in the sixteenth century by new infectious diseases brought by the Spanish invaders; they also suffered high mortality from slave labor and during epidemics. The fall of Tenochtitlan marked the beginning of the three-hundred-year colonial period and the imposition of Spanish rule.
Chronology
Cultural horizons of Mesoamerica
Mesoamerican civilization was a complex network of different cultures. As seen in the time-line below, these did not necessarily occur at the same time. The processes that gave rise to each of the cultural systems of Mesoamerica were very complex and not determined solely by the internal dynamics of each society. External as well as endogenous factors influenced their development. Among these factors, for example, were the relations between human groups and between humans and the environment, human migrations, and natural disasters.
Historians and archaeologists divide pre-Hispanic Mesoamerican history into three periods. The Spanish conquest of the Aztec empire (1519–1521) marks the end of indigenous rule and the incorporation of indigenous peoples as subjects of the Spanish Empire for the 300-year colonial period. The postcolonial period began with Mexican independence in 1821 and continues to the present day. European conquest did not end the existence of Mesoamerica's indigenous peoples, but did subject them to new political regimes. In the chart below of prehispanic cultures, the dates mentioned are approximations, and the transition from one period to another did not occur at the same time nor under the same circumstances in all societies.
Timeline of pre-Hispanic Mesoamerica
Preclassic Era
The Preclassic period ran from 2500 BCE to 200 CE. Its beginnings are marked by the development of the first ceramic traditions in the West, specifically at sites such as Matanchén, Nayarit, and Puerto Marqués, in Guerrero. Some authors hold that the early development of pottery in this area is related to the ties between South America and the coastal peoples of Mexico. The advent of ceramics is taken as an indicator of a sedentary society, and it signals the divergence of Mesoamerica from the hunter-gatherer societies in the desert to the north.
The Preclassic Era (also known as the Formative Period) is divided into three phases: the Early (2500–1200 BCE), Middle (1500–600 BCE), and Late (600 BCE – 200 CE). During the first phase, the manufacture of ceramics was widespread across the entire region, the cultivation of maize and vegetables became well-established, and society started to become socially stratified in a process that concluded with the appearance of the first hierarchical societies along the coast of the Gulf of Mexico. In the early Preclassic period, the Capacha culture acted as a driving force in the process of civilizing Mesoamerica, and its pottery spread widely across the region.
By 2500 BCE, small settlements were developing in Guatemala's Pacific Lowlands, at places such as Tilapa, La Blanca, Ocós, El Mesak, Ujuxte, and others, where the oldest ceramic pottery from Guatemala has been found. From 2000 BCE, a heavy concentration of pottery along the Pacific coastline has been documented. Recent excavations suggest that the Highlands were a geographic and temporal bridge between Early Preclassic villages of the Pacific coast and later Petén lowlands cities. In Monte Alto near La Democracia, Escuintla, in the Pacific lowlands of Guatemala, giant stone heads and potbelly sculptures of the so-named Monte Alto culture have been found.
Around 1500 BCE, the cultures of the West entered a period of decline, accompanied by an assimilation into the other peoples with whom they had maintained connections. As a result, the Tlatilco culture emerged in the Valley of Mexico, and the Olmec culture in the Gulf. Tlatilco was one of the principal Mesoamerican population centers of this period. Its people were adept at harnessing the natural resources of Lake Texcoco and at cultivating maize. Some authors posit that Tlatilco was founded and inhabited by the ancestors of today's Otomi people.
The Olmecs, on the other hand, had entered into an expansionist phase that led them to construct their first works of monumental architecture at San Lorenzo and La Venta. The Olmecs exchanged goods within their own core area and with sites as far away as Guerrero and Morelos, present-day Guatemala, and Costa Rica. San José Mogote, a site that also shows Olmec influences, ceded dominance of the Oaxacan plateau to Monte Albán toward the end of the middle Preclassic Era. During this same time, the Chupícuaro culture flourished in Bajío, while along the Gulf the Olmecs entered a period of decline.
One of the great cultural milestones that marked the Middle Preclassic period is the development of the first writing system, by either the Maya, the Olmec, or the Zapotec. During this period, Mesoamerican societies were highly stratified. The connections between different centers of power permitted the rise of regional elites that controlled natural resources and peasant labor. This social differentiation was based on the possession of certain technical knowledge, such as astronomy, writing, and commerce. Furthermore, the Middle Preclassic period saw the beginnings of the process of urbanization that would come to define the societies of the Classic period. In the Maya area, cities such as Nakbe c. 1000 BCE, El Mirador c. 650 BCE, Cival c. 350 BCE, and San Bartolo show the same monumental architecture as the Classic period. In fact, El Mirador was the largest Maya city. It has been argued that the Maya experienced a first collapse c. 100 CE and resurged c. 250 in the Classic period. Some population centers such as Tlatilco, Monte Albán, and Cuicuilco flourished in the final stages of the Preclassic period. Meanwhile, the Olmec populations shrank and ceased to be major players in the area.
Toward the end of the Preclassic period, political and commercial hegemony shifted to the population centers in the Valley of Mexico. Around Lake Texcoco there existed a number of villages that grew into true cities: Tlatilco and Cuicuilco are examples. The former was found on the northern bank of the lake, while the latter was on the slopes of the mountainous region of Ajusco. Tlatilco maintained strong relationships with the cultures of the West, while Cuicuilco controlled commerce with the Maya area, Oaxaca, and the Gulf coast. The rivalry between the two cities ended with the decline of Tlatilco. Meanwhile, at Monte Albán in the Valley of Oaxaca, the Zapotec had begun developing culturally independently of the Olmec, adopting aspects of that culture but making their own contributions as well. In the southern highlands of Guatemala, Kaminaljuyú advanced in the direction of what would be the Classic Maya culture, even though its links to Central Mexico and the Gulf would initially provide its cultural models. Apart from the West, where the tradition of the Tumbas de tiro had taken root, in all the regions of Mesoamerica the cities grew in wealth, with monumental constructions carried out according to urban plans that were surprisingly complex. The circular pyramid of Cuicuilco dates from this time, as well as the central plaza of Monte Albán and the Pyramid of the Moon in Teotihuacan.
Around the start of the common era, Cuicuilco had disappeared, and the hegemony over the Mexican basin had passed to Teotihuacan. The next two centuries marked the period in which the so-called city of the gods consolidated its power, becoming the premier Mesoamerican city of the first millennium, and the principal political, economic, and cultural center for the next seven centuries.
The Olmec
For many years, the Olmec culture was thought to be the 'mother culture' of Mesoamerica, because of the great influence that it exercised throughout the region. However, more recent perspectives consider this culture to be more of a process to which all the contemporary peoples contributed, and which eventually crystallized on the coasts of Veracruz and Tabasco. The ethnic identity of the Olmecs is still widely debated. Based on linguistic evidence, archaeologists and anthropologists generally believe that they were either speakers of an Oto-Manguean language, or (more likely) the ancestors of the present-day Zoque people who live in the north of Chiapas and Oaxaca. According to this second hypothesis, Zoque tribes emigrated toward the south after the fall of the major population centers of the Gulf plains. Whatever their origin, these bearers of Olmec culture arrived at the leeward shore some eight thousand years BCE, entering like a wedge among the fringe of proto-Maya peoples who lived along the coast, a migration that would explain the separation of the Huastecs of the north of Veracruz from the rest of the Maya peoples based in the Yucatán Peninsula and Guatemala.
The Olmec culture represents a milestone of Mesoamerican history, as various characteristics that define the region first appeared there. Among them are the state organization, the development of the 260-day ritual calendar and the 365-day secular calendar, the first writing system, and urban planning. The development of this culture started between 1600 and 1500 BCE, though it continued to consolidate itself up to the 12th century BCE. Its principal sites were La Venta, San Lorenzo, and Tres Zapotes in the core region. However, throughout Mesoamerica numerous sites show evidence of Olmec occupation, especially in the Balsas river basin, where Teopantecuanitlan is located. This site is quite enigmatic, since it dates from several centuries earlier than the main populations of the Gulf, a fact which has continued to cause controversy and has given rise to the hypothesis that the Olmec culture originated in that region.
Among the best-known expressions of Olmec culture are giant stone heads, sculptured monoliths up to three meters in height and several tons in weight. These feats of Olmec stonecutting are especially impressive when one considers that Mesoamericans lacked iron tools and that the heads are at sites dozens of kilometers from the quarries where their basalt was mined. The function of these monuments is unknown. Some authors propose that they were commemorative monuments for notable players of the ballgame, and others that they were images of the Olmec governing elite.
The Olmec are also known for their small carvings made of jade and other greenstones. So many Olmec figurines and sculptures contain representations of the were-jaguar that, according to José María Covarrubias, they could be forerunners of the worship of the rain god, or perhaps a predecessor of the later Tezcatlipoca in his manifestation as Tepeyolohtli, the "Heart of the Mountain".
The exact causes of the Olmec decline are unknown.
In the Pacific lowlands of the Maya Area, Takalik Abaj c. 800 BCE, Izapa c. 700 BCE, and Chocola c. 600 BCE, along with Kaminaljuyú c. 800 BCE in the central Highlands of Guatemala, advanced in the direction of what would be the Classic Maya culture. La Danta in El Mirador and the San Bartolo murals also date from this time.
In Petén, the great Classic Maya cities of Tikal, Uaxactun, and Seibal began their growth c. 300 BCE.
Cuicuilco's hegemony over the valley declined in the period 100 BCE to 1 CE. As Cuicuilco declined, Teotihuacan began to grow in importance.
Classic period
The Classic period of Mesoamerica includes the years from 250 to 900 CE. The end point of this period varied from region to region: for example, in the center of Mexico it is related to the fall of the regional centers of the late Classic (sometimes called Epiclassic) period, toward the year 900; in the Gulf, with the decline of El Tajín, in the year 800; in the Maya area, with the abandonment of the lowland cities in the 9th century; and in Oaxaca, with the disappearance of Monte Albán around 850. Normally, the Classic period in Mesoamerica is characterized as the stage in which the arts, science, urbanism, architecture, and social organization reached their peak. This period was also dominated by the influence of Teotihuacan throughout the region, and the competition between the different Mesoamerican states led to continuous warfare.
This period of Mesoamerican history can be divided into three phases: Early, from 250 to 550 CE; Middle, from 550 to 700; and Late, from 700 to 900. The early Classic period began with the expansion of Teotihuacan, which led to its control over the principal trade routes of northern Mesoamerica. During this time, the process of urbanization that started in the last centuries of the early Preclassic period was consolidated. The principal centers of this phase were Monte Albán, Kaminaljuyú, Ceibal, Tikal, and Calakmul, and, above all, Teotihuacan, in which 80 per cent of the 200,000 inhabitants of the Lake Texcoco basin were concentrated.
The cities of this era were characterized by their multi-ethnic composition, which entailed the cohabitation in the same population centers of people with different languages, cultural practices, and places of origin. During this period the alliances between the regional political elites were strengthened, especially for those allied with Teotihuacan. Also, social differentiation became more pronounced: a small dominant group ruled over the majority of the population. This majority was forced to pay tribute and to participate in the building of public structures such as irrigation systems, religious edifices, and means of communication. The growth of the cities could not have happened without advances in agricultural methods and the strengthening of trade networks involving not only the peoples of Mesoamerica, but also the distant cultures of Oasisamerica.
The arts of Mesoamerica reached their high point in this era. Especially notable are the Maya stelae (carved pillars), exquisite monuments commemorating the stories of the royal families, the rich corpus of polychrome ceramics, mural painting, and music. In Teotihuacan, architecture made great advances: the Classic style was defined by the construction of pyramidal bases that sloped upward in a step-wise fashion. The Teotihuacan architectural style was reproduced and modified in other cities throughout Mesoamerica, the clearest examples being the Zapotec capital of Monte Albán and Kaminaljuyú in Guatemala. Centuries later, long after Teotihuacan was abandoned c. 700 CE, cities of the Postclassic era followed the style of Teotihuacan construction, especially Tula, Tenochtitlan, and Chichén Itzá.
Many scientific advances were also achieved during this period. The Maya refined their calendar, script, and mathematics to their highest level of development. Writing came to be used throughout the Maya area, although it was still regarded as a noble activity and practiced only by noble scribes, painters, and priests. Using a similar system of writing, other cultures developed their own scripts, the most notable examples being those of the Ñuiñe culture and the Zapotecs of Oaxaca, although the Mayan system was the only fully developed writing system in Precolumbian America. Astronomy remained a matter of vital significance because of its importance for agriculture, the economic basis of Mesoamerican society, and for predicting future events such as lunar and solar eclipses, an important capability for rulers, who thereby demonstrated to the commoners their links with the heavenly world.
The Middle Classic period ended in Northern Mesoamerica with the decline of Teotihuacan. This allowed other regional power centers to flourish and compete for control of trade routes and natural resources. In this way the late Classic era commenced. Political fragmentation during this era meant no city had complete hegemony. Various population movements occurred, caused by the incursion of groups from Aridoamerica and other northern regions, who pushed the older populations of Mesoamerica south. Among these new groups were the Nahua, who would later found the cities of Tula and Tenochtitlan, the two most important capitals of the Postclassic era. In addition, southern peoples established themselves in the center of Mexico, including the Olmec-Xicalanca, who came from the Yucatán Peninsula and founded Cacaxtla and Xochicalco.
In the Maya region, Tikal, an ally of Teotihuacan, experienced a decline, the so-called Tikal Hiatus, after being defeated by Dos Pilas and by Caracol, an ally of Calakmul; the hiatus lasted about 100 years. During it, the cities of Dos Pilas, Piedras Negras, Caracol, Calakmul, Palenque, Copán, and Yaxchilán were consolidated. These and other city-states of the region found themselves involved in bloody wars with changing alliances, until Tikal defeated, in order, Dos Pilas and Caracol (with the help of Yaxha and El Naranjo), Waka (Calakmul's last ally), and finally Calakmul itself, an event that took place in 732 with the sacrifice of Yuknom Cheen's son in Tikal. That victory led to construction of monumental architecture in Tikal from 740 to 810; the last date documented there was 899. The ruin of the Classic Maya civilization in the southern lowlands began with the La Pasión states such as Dos Pilas, Aguateca, Ceibal, and Cancuén, c. 760, followed by the Usumacinta system cities of Yaxchilan, Piedras Negras, and Palenque, following a path from south to north.
Toward the end of the late Classic period, the Maya stopped recording the years using the Long Count calendar, and many of their cities were burned and abandoned to the jungle. Meanwhile, in the Southern Highlands, Kaminaljuyú continued its growth until 1200. In Oaxaca, Monte Albán reached its apex c. 750 and finally succumbed toward the end of the 9th century for reasons that are still unclear. Its fate was not much different from that of other cities such as La Quemada in the north and Teotihuacan in the center: it was burned and abandoned. In the last century of the Classic era, hegemony in the valley of Oaxaca passed to Lambityeco, several kilometers to the east.
Teotihuacan
Teotihuacan ("The City of the Gods" in Nahuatl) originated toward the end of the Preclassic period, c. 100 CE. Very little is known about its founders, but it is believed that the Otomí had an important role in the city's development, as they did in the ancient culture of the Valley of Mexico, represented by Tlatilco. Teotihuacan initially competed with Cuicuilco for hegemony in the area. In this political and economic battle, Teotihuacan was aided by its control of the obsidian deposits in the Navaja mountains in Hidalgo. The decline of Cuicuilco is still a mystery, but it is known that a large part of the former inhabitants resettled in Teotihuacan some years before the eruption of Xitle, which covered the southern town in lava.
Once free of competition in the area of the Lake of Mexico, Teotihuacan experienced an expansion phase that made it one of the largest cities of its time, not just in Mesoamerica but in the entire world. During this period of growth, it attracted the vast majority of those then living in the Valley of Mexico.
Teotihuacan was completely dependent on agricultural activity, primarily the cultivation of maize, beans, and squash, the Mesoamerican agricultural trinity. However, its political and economic hegemony was based on outside goods for which it enjoyed a monopoly: Anaranjado ceramics, produced in the Poblano–Tlaxcalteca valley, and the mineral deposits of the Hidalgan mountains. Both were highly valued throughout Mesoamerica and were exchanged for luxury merchandise of the highest caliber, from places as far away as New Mexico and Guatemala. Because of this, Teotihuacan became the hub of the Mesoamerican trade network. Its partners were Monte Albán and Tikal in the southeast, Matacapan on the Gulf coast, Altavista in the north, and Tingambato in the west.
Teotihuacan refined the Mesoamerican pantheon of deities, whose origins dated from the time of the Olmec. Of special importance were the worship of Quetzalcoatl and Tláloc, agricultural deities. Trade links promoted the spread of these cults to other Mesoamerican societies, who took and transformed them. It was thought that Teotihuacan society had no knowledge of writing, but as Duverger demonstrates, the writing system of Teotihuacan was extremely pictographic, to the point that writing was confused with drawing.
The fall of Teotihuacan is associated with the emergence of city-states within the confines of the central area of Mexico. It is thought that these were able to flourish due to the decline of Teotihuacan, though events may have occurred in the opposite order: the cities of Cacaxtla, Xochicalco, Teotenango, and El Tajín may have first increased in power and then were able to economically strangle Teotihuacan, trapped as it was in the center of the valley without access to trade routes. This occurred around 600 CE, and even though people continued to live there for another century and a half, the city was eventually destroyed and abandoned by its inhabitants, who took refuge in places such as Culhuacán and Azcapotzalco, on the shores of Lake Texcoco.
The Maya in the Classic period
The Maya created one of the most developed and best-known Mesoamerican cultures. Although authors such as Michael D. Coe believe that the Maya culture is completely different from the surrounding cultures, many elements present in Maya culture are shared by the rest of Mesoamerica, including the use of two calendars, the base 20 number system, the cultivation of corn, human sacrifice, and certain myths, such as that of the fifth sun and cultic worship, including that of the Feathered Serpent and the rain god, who in the Yucatec Maya language is called Chaac.
The beginnings of Maya culture date from the development of Kaminaljuyú, in the Highlands of Guatemala, during the middle Preclassic period. According to Richard D. Hansen and other researchers, the first true political states in Mesoamerica consisted of Takalik Abaj, in the Pacific Lowlands, and the cities of El Mirador, Nakbe, Cival, and San Bartolo, among others, in the Mirador Basin and Petén. Archaeologists long believed that this development happened centuries later, around the 1st century BCE, but recent research in the Petén basin and Belize has proven them wrong. The archaeological evidence indicates that the Maya never formed a united empire; they were instead organized into small chiefdoms that were constantly at war. López Austin and López Luján have said that the Preclassic Maya were characterized by their bellicose nature. They probably had a greater mastery of the art of war than Teotihuacan, yet the idea that they were a peaceful society given to religious contemplation, which persists to this day, was particularly promoted by early- and mid-20th-century Mayanists such as Sylvanus G. Morley and J. Eric S. Thompson. Confirmation that the Maya practiced human sacrifice and ritual cannibalism came much later (e.g. from the murals of Bonampak).
Writing and the Maya calendar were quite early developments in the great Maya cities, c. 1000 BCE, and some of the oldest commemorative monuments are from sites in the Maya region. Archaeologists once thought that the Maya sites functioned only as ceremonial centers and that the common people lived in the surrounding villages. However, more recent excavations indicate that the Maya sites enjoyed urban services as extensive as those of Tikal (believed to have had as many as 400,000 inhabitants at its peak, c. 750), Copán, and other major centers. Drainage, aqueducts, and paved causeways, or sakbe (meaning "white road"), united major centers since the Preclassic. The construction of these sites was carried out on the basis of a highly stratified society, dominated by the noble class, who were at the same time the political, military, and religious elite.
The elite controlled agriculture, practiced by means of mixed systems of ground-clearing and intensive platforms around the cities. As in the rest of Mesoamerica, they imposed on the lowest classes taxes, in kind or in labor, that permitted them to concentrate sufficient resources for the construction of public monuments, which legitimized the power of the elites and the social hierarchy. During the Early Classic Period, c. 370, the Maya political elite sustained strong ties to Teotihuacan, and it is possible that Tikal may have been an important ally of Teotihuacan that controlled commerce with the Gulf coast and highlands. Finally, it seems the great drought that ravaged Central America in the 9th century, internal wars, ecological disasters, and famine destroyed the Maya political system, which led to popular uprisings and the defeat of the dominant political groups. Many cities were abandoned, remaining unknown until the 19th century, when descendants of the Maya led a group of European and American archaeologists to these cities, which had been swallowed over the centuries by the jungle.
Postclassic period
The Postclassic period is the time between the year 900 and the conquest of Mesoamerica by the Spaniards, which occurred between 1521 and 1697. It was a period in which military activity became of great importance. The political elites associated with the priestly class were relieved of power by groups of warriors. In turn, at least a half-century before the arrival of the Spaniards, the warrior class was yielding its positions of privilege to a very powerful group that was unconnected to the nobility: the pochtecas, merchants who obtained great political power by virtue of their economic power.
The Postclassic period is divided into two phases. The first is the early Postclassic, which includes the 10th to the 13th century and is characterized by the Toltec hegemony of Tula. The 12th century marks the beginning of the late Postclassic, which opens with the arrival of the Chichimec, linguistically related to the Toltecs; among them were the Mexica, who established themselves in the Valley of Mexico in 1325, following a two-century pilgrimage from Aztlán, the exact location of which is unknown. Many of the social changes of this final period of Mesoamerican civilization are related to the migratory movements of the northern peoples. These peoples came from Oasisamerica, Aridoamerica, and the northern region of Mesoamerica, driven by climate changes that threatened their survival. The migrations from the north caused, in turn, the displacement of peoples who had been rooted in Mesoamerica for centuries; some of them left for Central America.
There were many cultural changes during that time. One of them was the expansion of metallurgy, imported from South America, whose oldest remnants in Mesoamerica come from the West, as is also the case with ceramics. The Mesoamericans did not achieve great facility with metals; in fact, their use was rather limited (a few copper axes, needles, and above all jewelry). The most advanced techniques of Mesoamerican metallurgy were developed by the Mixtec, who produced fine, exquisitely handcrafted articles. Remarkable advances were made in architecture as well: the use of nails was introduced to support the sidings of the temples, mortar was improved, and the use of columns and stone roofs became widespread, something that only the Maya had used during the Classic period. In agriculture, the system of irrigation became more complex; in the Valley of Mexico especially, chinampas were used extensively by the Mexica, who built a city of 200,000 around them.
The political system also underwent important changes. During the early Postclassic period, the warlike political elites legitimized their position by means of their adherence to a complex set of religious beliefs that López Austin called zuyuanidad. According to this system, the ruling classes proclaimed themselves the descendants of Quetzalcoatl, the Plumed Serpent, one of the creative forces, and a cultural hero in Mesoamerican mythology. They likewise declared themselves the heirs of a no less mythical city, called Tollan in Nahuatl, and Zuyuá in Maya (from which López Austin derives the name for the belief system). Many of the important capitals of the time identified themselves with this name (for example, Tollan Xicocotitlan, Tollan Chollollan, Tollan Teotihuacan). The Tollan of myth was for a long time identified with Tula, in Hidalgo state, but Enrique Florescano and López Austin have claimed that this has no basis. Florescano states that the mythical Tollan was Teotihuacan; López Austin argues that Tollan was simply a product of the Mesoamerican religious imagination. Another feature of the zuyuano system was the formation of alliances with other city-states that were controlled by groups having the same ideology; such was the case with the League of Mayapan in Yucatán, and the Mixtec confederation of Lord Eight Deer, based in the mountains of Oaxaca. These early Postclassic societies can be characterized by their military nature and multi-ethnic populations.
However, the fall of Tula checked the power of the zuyuano system, which finally broke down with the dissolution of the League of Mayapán and of the Mixtec state, and the abandonment of Tula. Mesoamerica received new immigrants from the north, and although these groups were related to the ancient Toltecs, they had a completely different ideology than the existing residents. The final arrivals were the Mexica, who established themselves on a small island in Lake Texcoco under the dominion of the Tepanecs of Azcapotzalco. This group would, in the following decades, conquer a large part of Mesoamerica, creating a united and centralized state whose only rival was the Purépecha Empire of Michoacán. Neither one could defeat the other, and it seems that a type of non-aggression pact was established between the two peoples. When the Spaniards arrived, many of the peoples controlled by the Mexica no longer wished to continue under their rule. Therefore, they took advantage of the opportunity presented by the Europeans, agreeing to support them in the belief that in return they would gain their freedom, not knowing that this would lead to the subjugation of all of the Mesoamerican world.
Aztec
Of all prehispanic Mesoamerican cultures, the best-known is the Mexica of the city-state of Tenochtitlan, also known as the Aztecs. The Aztec Empire dominated central Mexico for close to a century before the Spanish conquest of the Aztec empire (1519–1521).
The Mexica people came from the north or the west of Mesoamerica. The Nayaritas believed that the mythic Aztlán was located on the island of Mexcaltitán. Some hypothesize that this mythical island could have been located somewhere in the state of Zacatecas, and it has even been proposed that it was as far north as New Mexico. Whatever the case, the Mexica were probably not far removed from the classic Mesoamerican tradition. In fact, they shared many characteristics with the people of central Mesoamerica. The Mexica spoke Nahuatl, the same language spoken by the Toltecs and the Chichimecs who came before them.
The date of the departure from Aztlán is debated, with suggested dates of 1064, 1111, and 1168. After much wandering, the Mexica arrived at the basin of the Valley of Mexico in the 14th century. They established themselves at various points along the lakeshore (for example, Culhuacán and Tizapán) before settling on the islet of Mexico, protected by Tezozómoc, king of the Tepanecas. The city of Tenochtitlan was founded in 1325 as an ally of Azcapotzalco, but less than a century later, in 1430, the Mexica joined with Texcoco and Tlacopan to wage war against Azcapotzalco and emerged victorious. This gave birth to the Triple Alliance, which replaced the ancient confederation ruled by the Tepanecas (which included Coatlinchan and Culhuacán).
In the earliest days of the Triple Alliance, the Mexica initiated an expansionist phase that led them to control a good part of Mesoamerica. During this time only a few regions retained their independence: Tlaxcala (Nahua), Meztitlán (Otomí), Teotitlán del Camino (Cuicatec), Tututepec (Mixtec), Tehuantepec (Zapotec), and the northwest (ruled at that time by their rivals, the Tarascans). The provinces controlled by the Triple Alliance were forced to pay tribute to Tenochtitlan; these payments are recorded in a codex known as the Matrícula de tributos (Registry of Tribute). This document specifies the quantity and type of every item that each province had to pay to the Mexica.
The Mexica state was conquered by the Spanish forces of Hernán Cortés and their Tlaxcalan and Totonac allies in 1521. The conquest of Mesoamerica was complete when, in 1697, Tayasal was burned and razed by the Spanish.
Postconquest era
Colonial Period, 1521–1821
With the destruction of the superstructure of the Aztec Empire in 1521, central Mexico was brought under the control of the Spanish Empire. Over the course of the succeeding decades, virtually all of Mesoamerica was brought under Spanish control, which resulted in fairly uniform policies toward indigenous populations. The Spaniards established the fallen Aztec capital of Tenochtitlan as Mexico City, the seat of government for the Viceroyalty of New Spain. The great initial project of the Spanish conquerors was converting the indigenous peoples to Christianity, the only permitted religion. This endeavor was undertaken by Franciscan, Dominican, and Augustinian friars immediately after conquest. Dividing up the spoils of the war was of key interest to the Spanish conquerors. The major ongoing benefit to the conquerors, beyond the obvious material plunder, was the appropriation of the existing system of tribute and obligatory labor for the Spanish victors.
This was done through the establishment of the encomienda, which awarded the tribute and labor of individual indigenous polities to particular Spanish conquerors. In that way, the economic and political arrangements at the level of the indigenous community were largely kept intact. The indigenous polity (altepetl in the Nahua area, cah in the Maya region) was the key to the cultural survival of indigenous people under Spanish rule, while at the same time providing the structure for their economic exploitation. Spaniards classified all indigenous peoples as "Indians" (indios), a term that the indigenous peoples never embraced. They were classified legally as being under the jurisdiction of the República de Indios, and were legally separated from the República de Españoles, which comprised Europeans, Africans, and mixed-race castas. In general, indigenous communities in Mesoamerica kept much of their prehispanic social and political structures, with indigenous elites continuing to function as leaders in their communities and acting as intermediaries with the Spanish crown, so long as they remained loyal. There were significant changes in Mesoamerican communities during the colonial era, but throughout the entire colonial period Mesoamericans were the largest single non-Hispanic group in colonial Mexico, far larger than the entire Hispanic sphere. Although the Spanish colonial system imposed many changes on Mesoamerican peoples, it did not force the acquisition of Spanish, and Mesoamerican languages have continued to flourish to the present day.
Postcolonial Period, 1821–present
Mexico and Central America became independent from Spain in 1821, with some participation by indigenous people in the decade-long political struggles, though for their own motivations. With the fall of colonial government, the Mexican state abolished legal distinctions between ethnic groups, that is, the separate governance of indigenous populations under the República de Indios. The new sovereign country made, in theory at least, all Mexicans citizens of the independent nation-state, rather than vassals of the Spanish crown with differing legal standing. A long period of political chaos in the post-independence period among white elites largely did not affect indigenous peoples and their communities. Mexican conservatives were largely in charge of the national government and kept in place practices from the old colonial order.
However, in the 1850s, Mexican liberals gained power and attempted to formulate and implement reforms that did affect indigenous communities, as well as the Catholic Church. The Mexican Constitution of 1857 abolished the ability of corporations to hold land, a measure aimed at taking assets out of the hands of the Catholic Church in Mexico and at forcing indigenous communities to divide their community-held lands. Liberals aimed at turning indigenous community members pursuing subsistence farming into yeoman farmers holding their own land. Mexican conservatives repudiated the liberal reform laws, since they attacked the Catholic Church, and indigenous communities also participated in the resulting three-year civil war.
In the late nineteenth century, the liberal army general Porfirio Díaz, a mestizo, did much to modernize Mexico and integrate it into the world economy, but there were renewed pressures on indigenous communities and their lands. These exploded in certain areas of Mexico during the ten-year civil war, the Mexican Revolution (1910–1920). In the aftermath of the Revolution, the Mexican government attempted simultaneously to shore up indigenous culture and to integrate the indigenous as citizens of the nation, turning them into peasants (campesinos). This has proved more difficult than policy planners imagined, with resilient indigenous communities continuing to struggle for rights within the nation.
See also
Regional communications in ancient Mesoamerica
History of Central America, the colonial to post-colonial period
History of the west coast of North America
List of pre-Columbian civilizations
Spanish conquest of the Aztec Empire
Spanish conquest of Yucatán
References
Bibliography
Duverger, Christian (1999): Mesoamérica, arte y antropología. CONACULTA-Landucci Editores. Paris.
Luján Muñoz, Jorge; Chinchilla Aguilar, Ernesto; Zilbermann de Luján, Maria Cristina; Herrarte, Alberto; Contreras, J. Daniel. (1996) Historia General de Guatemala.
Miller, Mary Ellen. (2001). El arte de mesoamérica. "Colecciones El mundo del arte". Ediciones Destino. Barcelona, España.
Ramírez, Felipe (2009). "La Altiplanicie Central, del Preclásico al Epiclásico"/en El México Antiguo. De Tehuantepec a Baja California/Pablo Escalante Gonzalbo (coordinador)/CIDE-FCE/México.
External links
The Civilizations of Ancient Mesoamerica (archived 12 December 2009)
Mesoamerica
History, Myth, and Migration in Mesoamerica (archived 24 June 2010)
Mesoweb Research Center
FAMSI, Foundation for Mesoamerican Study (archived 4 September 2019)
European Mayanist
The Mirador Basin Project
Guatemala Cradle of the Maya Civilization (archived 18 July 2007)
Archaeological cultures of North America
Archaeology timelines
Timelines of North American history
History of Central America by period
Mexico history-related lists
Chronology
Historical timelines
Ancient timelines
Gender inequality

Gender inequality is the social phenomenon in which people are not treated equally on the basis of gender. This inequality can be caused by gender discrimination or sexism. The treatment may arise from distinctions regarding biology, psychology, or cultural norms prevalent in the society. Some of these distinctions are empirically grounded, while others appear to be social constructs. While current policies around the world cause inequality among individuals, it is women who are most affected. Gender inequality disadvantages women in many areas, such as health, education, and business life. Studies show the different experiences of genders across many domains, including education, life expectancy, personality, interests, family life, careers, and political affiliation. Gender inequality is experienced differently across different cultures.
Sex differences
Biology
Natural differences exist between the sexes based on biological and anatomic factors, mostly differing reproductive roles. These biological differences include chromosomal and hormonal differences. There is a natural difference also in the relative physical strengths (on average) of the sexes, both in the lower body and more pronouncedly in the upper body, though this does not mean that any given man is stronger than any given woman. Men, on average, are taller, which provides both advantages and disadvantages. Women, on average, live significantly longer than men, though it is not clear to what extent this is a biological difference (see Life expectancy). Men have larger lung volumes and more circulating blood cells and clotting factors, while women have more circulating white blood cells and produce antibodies faster. Differences such as these are hypothesized to be an adaptation allowing for sexual specialization.
Psychology
Prenatal hormone exposure influences the extent to which a person exhibits typical masculine or feminine traits. Negligible differences between males and females exist in general intelligence. Women are significantly less likely to take risks than men. Men are also more likely than women to be aggressive, a trait influenced by prenatal and possibly current androgen exposure. It has been theorized that these differences, combined with physical differences, are an adaptation representing the sexual division of labour. A second theory proposes that sex differences in intergroup aggression represent adaptations in male aggression to allow for territory, resource, and mate acquisition. Females are (on average) more empathetic than males, though this does not mean that any given woman is more empathetic than any given man. Men have increased visuospatial and verbal memory relative to women. One study by three psychologists, Daniel Voyer, Susan D. Voyer, and Jean Saint-Aubin, examined sex differences in visuospatial working memory and found that men outperformed women on visuospatial working memory tasks. These differences are influenced by the male sex hormone testosterone, which increases visuospatial memory in both genders when administered. While studies of the increase in these abilities resulting from testosterone production differ across age groups and genders, one study found that high circulating levels of testosterone in men decreased their visuospatial performance, while in females it increased performance.
From birth, males and females are socialized differently, and experience different environments throughout their lives. Due to societal influence, gender often greatly shapes major characteristics in life, such as personality. Males and females are led on different paths by gender role expectations and gender role stereotypes, often before they are able to choose for themselves. For instance, in Western societies, the colour blue is commonly associated with boys and pink with girls (though this was not so before the 20th century). Boys are often given toys associated with traditional masculine roles, such as machines and trucks, while girls are often given toys related to traditional feminine roles, such as dolls, dresses, and dollhouses. These influences by parents or other adult figures in the child's life encourage children to fit into these roles, which tends to affect personality, career paths, and relationships. Throughout life, males and females come to be treated as though they were two very different species with very different personalities who should stay on separate paths.
Researcher Janet Hyde found that, although much research has traditionally focused on differences between the genders, the genders are actually more alike than different, a position known as the gender similarities hypothesis.
In the workplace
Income disparities linked to job stratification
Across the board, a number of industries are stratified by gender. This is the result of a variety of factors, including differences in education choices, preferred job and industry, work experience, number of hours worked, and breaks in employment (such as for bearing and raising children). Men also typically go into higher-paid and higher-risk jobs compared to women. These factors explain 60% to 75% of the difference between men's and women's average aggregate wages or salaries, depending on the source. Various explanations for the remaining 25% to 40% have been suggested, including women's lower willingness and ability to negotiate salary, and sexual discrimination. According to the European Commission, direct discrimination explains only a small part of gender wage differences. According to the International Labour Organisation, women continue to be paid around 20% less than men worldwide.
On average, women represented 37.8% of all agricultural workers in 2021; this share is above 50% in 22 countries, most of them in Africa. Women and men working in agriculture may have different employment statuses: women employed in agriculture are more likely to be engaged as contributing family workers, whereas men are more likely to be engaged as own-account workers generating an income. In addition, women often spend more time than men on activities such as food processing and food preparation for the household, child and elder care, water and fuel collection, and other unpaid household duties.
In the United States, the average female's unadjusted annual salary has been cited as 78% of that of the average male. However, multiple studies from the OECD, AAUW, and the US Department of Labor have found that pay rates between males and females varied by only 5–6.6% (females earning roughly 94 cents for every dollar earned by their male counterparts) when wages were adjusted for the different individual choices made by male and female workers in college major, occupation, working hours, and maternal/parental leave. The remaining 6% of the gap has been speculated to originate from differences in salary negotiating skills and from sexual discrimination.
Human capital theories refer to the education, knowledge, training, experience, or skill of a person which makes them potentially valuable to an employer. This has historically been understood as a cause of the gendered wage gap but is no longer a predominant cause, as women and men in certain occupations tend to have similar education levels or other credentials. Even when such characteristics of jobs and workers are controlled for, the presence of women within a certain occupation leads to lower wages. This earnings discrimination is considered part of pollution theory, which suggests that jobs dominated by women offer lower wages than otherwise comparable jobs simply because of the presence of women within the occupation. Gender discrimination propels the notion that as a field or job becomes more dominated by women, it becomes less prestigious, turning away men and people who are discriminatory towards other genders. Under this view, the entry of women into an occupation is taken to suggest that less competent workers have begun to be hired or that the occupation is becoming deskilled. Men who are discriminatory towards other genders are therefore reluctant to enter female-dominated occupations, and similarly resist the entrance of women into male-dominated occupations. One "Gender Segregation in Occupations" study in Singapore by researcher Jessica Pan "found that men abandoned formerly all-male professions in droves after women's participation reach 'tipping points,' fearing the social stigma and wage penalties associated with belonging to 'feminine' occupations." Femininity in a job or field is thus treated as a threat by individuals who discriminate solely on the basis of gender.
Gender discrimination in the workplace is still present in many places of the world and can be attributed to occupational segregation, which occurs when groups of people are distributed across occupations according to ascribed characteristics, in this case gender. Occupational gender segregation can be understood to contain two components or dimensions: horizontal segregation and vertical segregation. With horizontal segregation, occupational sex segregation occurs because men and women are thought to possess different physical, emotional, and mental capabilities. With vertical segregation, occupational sex segregation occurs because occupations are stratified according to the power, authority, income, and prestige associated with them, and women and other genders are excluded from holding the more powerful jobs. Occupational segregation has been found to slow economic growth and drive down wages, and pushing women and other genders into lower-paid occupations puts family economic security at risk.
As women have entered the workforce in larger numbers since the 1960s, occupations have become segregated based on the amount of femininity or masculinity presupposed to be associated with each occupation. Census data suggests that while some occupations have become more gender-integrated (mail carriers, bartenders, bus drivers, and real estate agents), occupations such as teachers, nurses, secretaries, and librarians have become female-dominated, while occupations such as architects, electrical engineers, and airplane pilots remain predominantly male in composition. Based on the census data, women occupy service-sector jobs at higher rates than men. Women's overrepresentation in service-sector jobs, as opposed to jobs that require managerial work, reinforces traditional gender roles for women and men and thereby perpetuates gender inequality. According to the World Bank's 2021 FINDEX database, there is a $1.7 trillion funding gap for formal, women-owned micro, small, and medium-sized enterprises globally, and more than 68% of small women-owned firms have insufficient or no access to financial services.
"The gender wage gap is an indicator of women's earnings compared with men's. It is figured by dividing the average annual earnings for women by the average annual earnings for men." (Higgins et al., 2014)
Scholars disagree about how much of the male-female wage gap depends on factors such as experience, education, occupation, and other job-relevant characteristics. Sociologist Douglas Massey found that 41% remains unexplained, while CONSAD analysts found that these factors explain between 65.1 and 76.4 percent of the raw wage gap. CONSAD also noted that other factors such as benefits and overtime explain "additional portions of the raw gender wage gap".
The glass ceiling effect is also considered a possible contributor to the gender wage gap or income disparity. The effect suggests that gender provides significant disadvantages towards the top of job hierarchies, disadvantages which become worse as a person's career goes on. The term implies that invisible or artificial barriers prevent women from advancing within their jobs or receiving promotions; these barriers exist in spite of women's achievements or qualifications, and persist even when job-relevant characteristics such as experience, education, and abilities are controlled for. The inequality effects of the glass ceiling are more prevalent within higher-powered or higher-income occupations, with fewer women holding these types of positions. The glass ceiling also limits women's chances of income raises and of promotion or advancement to more prestigious positions or jobs. Because women are prevented by these artificial barriers from receiving promotions or raises, the effects of the glass ceiling accumulate over the course of a woman's career.
Statistical discrimination is also cited as a cause of income disparities and gendered inequality in the workplace. It refers to the tendency of employers to deny women access to certain occupational tracks because women are more likely than men to leave their job or the labor force when they become married or pregnant. Women are instead given dead-end positions or jobs with very little mobility.
In developing countries such as the Dominican Republic, female entrepreneurs are statistically more prone to failure in business. In the event of a business failure women often return to their domestic lifestyle despite the absence of income. On the other hand, men tend to search for other employment as the household is not a priority.
The gender earnings ratio suggests that there has been an increase in women's earnings relative to men's. Men's earnings plateaued after the 1970s, allowing the increase in women's wages to narrow the gap between incomes. Despite the narrower ratio between men's and women's wages, disparity still exists: census data suggests that women's earnings were 71 percent of men's earnings in 1999.
The gendered wage gap varies in its width among different races. Whites comparatively have the greatest wage gap between the genders: white women earn 78% of the wages that white men do, while African American women earn 90% of the wages that African American men do.
There are some exceptions where women earn more than men: According to a survey on gender pay inequality by the International Trade Union Confederation, female workers in the Gulf state of Bahrain earn 40 percent more than male workers.
In 2014, a report by the International Labour Organization (ILO) revealed a US$25 monthly pay difference between Cambodian women factory workers and their male counterparts, suggesting that women hold much less power and are devalued not only at home but also in the workplace.
Women have established firms at a somewhat greater average rate than men in recent years, but female-owned businesses continue to confront financial challenges and have a harder time gaining access to external financing. Women account for 25% of all new business owners and directors. At the same time, women are the major carers for their families, as well as the holders of household purchasing power. Women therefore start many enterprises, yet 68% of them need finance they struggle to obtain.
Professional education and careers
The gender gap has narrowed to various degrees since the mid-1960s. Where some 5% of first-year students in professional programs were female in 1965, by 1985 this number had jumped to 40% in law and medicine, and over 30% in dentistry and business school. Before the highly effective birth control pill was available, women planning professional careers, which required a long-term, expensive commitment, had to "pay the penalty of abstinence or cope with considerable uncertainty regarding pregnancy". Control over their reproductive decisions allowed women to more easily make long-term decisions about their education and professional opportunities. Women remain highly underrepresented on boards of directors and in senior positions in the private sector. Gender inequality in professional education is a global issue: Robert Meyers and Amy Griffin studied the underrepresentation of female international students in higher education and found that in 2019 only 43.6% of international students in the United States were women. The disparity is even greater in STEM fields.
Additionally, with reliable birth control, young men and women had more reason to delay marriage. This meant that the marriage market available to any women who "delay[ed] marriage to pursue a career ... would not be as depleted. Thus the Pill could have influenced women's careers, college majors, professional degrees, and the age at marriage."
Studies on sexism in science and technology fields have produced conflicting results. Moss-Racusin et al. found that science faculty of both sexes rated a male applicant as significantly more competent and hireable than an identical female applicant; these participants also selected a higher starting salary and offered more career mentoring to the male applicant. Williams and Ceci, however, found that science and technology faculty of both sexes "preferred female applicants 2:1 over identically qualified males with matching lifestyles" for tenure-track positions. Studies show parents are more likely to expect their sons, rather than their daughters, to work in a science, technology, engineering or mathematics field, even when their 15-year-old boys and girls perform at the same level in mathematics. More men than women are trained as dentists, though this trend has been changing.
A survey by the U.K. Office for National Statistics in 2016 showed that in the health sector 56% of roles are held by women, while in teaching the figure is 68%. However, equality is less evident in other areas: only 30% of MPs and only 32% of finance and investment analysts are women. In the natural and social sciences 43% of employees are women, and in the environmental sector 42%.
In an article by MacNell et al. (2014), researchers used an online course and falsified the names of teaching assistants to make students believe they had either a female or a male teaching assistant. At the end of the semester, the students completed a course evaluation. Regardless of whether the teaching assistant was actually male or female, assistants who were perceived as female received lower course evaluations overall, with distinctly lower ratings in the areas of promptness, praise, fairness, and professionalism.
In an article titled "Gender Differences in Education, Career Choices and Labor Market Outcomes on a Sample of OECD Countries", the researchers focused on how men and women differ in their studies, their focuses, and their objectives at work. Women are more likely to choose the humanities and health fields and less likely to enter the sciences and social sciences, suggesting that gender strongly shapes decisions about fields of study.
Customer preference studies
A 2010 study conducted by David R. Hekman and colleagues found that customers, who viewed videos featuring a black male, a white female, or a white male actor playing the role of an employee helping a customer, were 19 percent more satisfied with the white male employee's performance.
This discrepancy with race can be found as early as 1947, when Kenneth Clark conducted a study in which black children were asked to choose between white and black dolls. White male dolls were the ones children preferred to play with.
Gender pay differences
Gender inequalities persist as social problems and are still growing in places. In 2008, recently qualified female doctors in New York State had a starting salary $16,819 less than their male counterparts, an increase from the $3,600 difference of 1999. The pay discrepancy could not be explained by specialty choice, practice setting, work hours, or other characteristics, though some potentially significant factors, such as family or marital status, were not evaluated. A case study of Swedish medical doctors showed that the gender wage gap among physicians was greater in 2007 than in 1975.
Wage discrimination occurs when an employer pays different wages to two seemingly similar employees, usually on the basis of gender or race. Kampelmann and Rycx (2016) offer two explanations for the observed differences in wages. First, employer tastes and preferences regarding foreign workers and/or customers can translate into lower demand for those workers as a whole and, as a result, lower wage offers. Second, differences in career dynamics between immigrant and "native" workers can lead to wage discrimination against immigrant workers. Within the discrimination between domestic and foreign workers, there is also discrimination among foreign workers based on gender. Female migrant workers face a "triple discrimination": they are at greater risk of discrimination because they are women, unprotected workers, and migrant workers.
At home
Gender roles in parenting and marriage
Gender roles are heavily influenced by biology, with male-female play styles correlating with sex hormones, sexual orientation, aggressive traits, and pain. Furthermore, females with congenital adrenal hyperplasia demonstrate increased masculinity, and it has been shown that juvenile rhesus macaques exhibit preferences for stereotypically male and female toys.
Gender inequality in relationships
Gender equality in relationships has been growing over the years, but in the majority of relationships the power still lies with the male. Even now, men and women present themselves as divided along gender lines. A study by Szymanowicz and Furnham looked at the cultural stereotypes of intelligence in men and women, showing the gender inequality in self-presentation. The study showed that females thought that revealing their intelligence to a potential partner would diminish their chances with him, whereas men would much more readily discuss their own intelligence with a potential partner. Women are also aware of people's negative reactions to IQ, so they limit its disclosure to trusted friends; females would disclose IQ more often than men, in the expectation that a true friend would respond positively. Intelligence continues to be viewed as a more masculine than feminine trait. The article suggested that men might think women with a high IQ would lack traits desirable in a mate, such as warmth, nurturance, sensitivity, or kindness. Another finding was that females thought that friends should be told about one's IQ more than males did, while males expressed more doubts than women about the test's reliability and the importance of IQ in real life. The inequality is highlighted when a couple starts to decide who is in charge of family issues and who is primarily responsible for earning income. For example, in Londa Schiebinger's book Has Feminism Changed Science?, she claims that "Married men with families on average earn more money, live longer and happier, and progress faster in their careers", while "for a working woman, a family is a liability, extra baggage threatening to drag down her career." Furthermore, statistics show that "only 17 percent of the women who are full professors of engineering have children, while 82 percent of the men do."
Attempts to equalize household work
Despite the increase of women in the labor force since the mid-1900s, traditional gender roles are still prevalent in American society. Many women are expected to put their educational and career goals on hold in order to raise a family, while their husbands become primary breadwinners; other women choose to work and also fulfill a perceived gender role of cleaning the house and caring for children. Although certain households might divide chores more evenly, there is evidence that women remain the primary caregivers in family life even when they work full-time jobs. This evidence suggests that women who work outside the home often put in an extra 18 hours a week doing household- or childcare-related chores, as opposed to men, who average 12 minutes a day in childcare activities. One study by van Hooff showed that modern couples do not necessarily purposefully divide things like household chores along gender lines, but may instead rationalize the division and make excuses, such as that women are more competent at household chores and have more motivation to do them, or that the jobs men have are much more demanding.
In The Unsettling of America: Culture and Agriculture, Wendell Berry wrote in the 1970s that the "home became a place for the husband to go when he was not working ... it was the place where the wife was held in servitude." A study conducted by Sarah F. Berk, called "The Gender Factory", researched this aspect of gender inequality as well. Berk found that "household labor is about power": the spouse performing less housework is typically the spouse in power, and, having more free time than their counterpart, they are able to do more of what they want after the average workday.
Gender roles have changed drastically over the past few decades. In a study assessing changing gender roles, human ecology scholar Robin Douthitt recorded data from 1920 to 1966 showing that women spent most of their time tending the home and family. The same study showed that as women began to spend less time in the house, men began to take on more of the caretaker role and spend more time with children. Douthitt's study, "The Division of Labor Within the Home: Have Gender Roles Changed?", concluded that "(1) men do not spend significantly more time with children when their wives are employed and (2) employed women spend significantly less time in child care than their full-time homemaker counterparts (3) over a 10-year period both mothers and fathers are spending more total time with children."
Women bear a disproportionate burden when it comes to unpaid work. In the Asia and Pacific region, women spend 4.1 times more time in unpaid work than men do. Additionally, 2019 data from OECD (Organisation for Economic Co-operation and Development) countries show that women spent an average of 264 minutes per day in unpaid work, compared to 136 minutes per day for men. Although men spend more time in paid work, women still spend more time overall doing both paid and unpaid work: 482.5 minutes per day for women versus 454.4 minutes per day for men. These statistics indicate a double burden for women.
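Subtracting the unpaid-work minutes from these daily totals gives the implied paid-work split; this is a back-of-the-envelope check derived only from the figures quoted above:

\[
\text{women: } 482.5 - 264 = 218.5 \text{ paid minutes/day},
\qquad
\text{men: } 454.4 - 136 = 318.4 \text{ paid minutes/day}.
\]

Men thus average roughly 100 more minutes of paid work per day, yet women's combined paid and unpaid workday is still about 28 minutes longer.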
Change in gender equality
Gender equality in America started to change drastically when women gained the right to vote in 1920. Women's rights were strengthened after this milestone, including by "the flapper which symbolized the personal freedom trumpeted by the emerging mass culture, including a freer approach to relationships with the opposite sex." After the first International Women's Conference in 1975, women had more avenues to advocate for their rights globally. In 1995, following the fourth International Women's Conference in Beijing, China, "American women no longer needed to stop their activism at the border or limit it to one gender. They were now part of a truly global community." However, gender inequality is still ongoing today. Some examples include women working longer hours than men, women suffering from education inequality throughout the world, being unable to express themselves freely, and being underpaid for performing the same work as men. In a study on how best to achieve gender equality, political scholars Kristen Renwick Monroe, Jenny Choi, Emily Howell, Chloe Lampros-Monroe, Crystal Trejo, and Valentina Perez recommend many policies for reaching equity, including hiring and appointing qualified women to positions of power, providing mentoring programs, promoting salary equity programs, and encouraging and helping implement a shift in knowledge about different cultures, genders, races, and sexualities.
Gender inequalities in relation to technology
One survey showed that men rate their technological skills in activities such as basic computer functions and online participatory communication higher than women. However, this study was a self-reporting study, where men evaluate themselves on their own perceived capabilities. It thus is not data based on actual ability, but merely perceived ability, as participants' ability was not assessed. Additionally, this study is inevitably subject to the significant bias associated with self-reported data.
Contrary to such findings, a carefully controlled study that analyzed data sets from 25 developing countries led to the consistent finding that the reason fewer women access and use digital technology is a direct result of their unfavorable conditions and ongoing discrimination with respect to employment, education and income. When controlling for these variables, women turn out to be more active users of digital tools than men. This turns the alleged digital gender divide into an opportunity: given women's affinity for information and communications technology (ICT), and given that digital technologies are tools that can improve living conditions, ICT represents a concrete and tangible opportunity to tackle longstanding challenges of gender inequality in developing countries, including access to employment, income, education and health services.
Women are often drastically underrepresented within university technology and ICT-focused programs while being overrepresented within social programs and the humanities. Although data show that women in Western societies generally outperform men in higher education, women's labor markets often provide less opportunity and lower wages than men's. Gender stereotypes and expectations may influence the underrepresentation of women within technology and ICT-focused programs and careers. Females are also underrepresented in science, technology, engineering, and mathematics (STEM) at all levels of society: fewer females complete STEM school subjects, graduate with STEM degrees, are employed as STEM professionals, and hold senior leadership and academic positions in STEM. The gender pay gap, family role expectations, the lack of visible role models or mentors, discrimination and harassment, and bias in hiring and promotion practices exacerbate this problem.
Through socialization, women may feel obligated to choose programs with characteristics that emulate gender roles and stereotypes. Studies have shown that domestic expectations may also lead to fewer opportunities for professional progression within the technology and ICT industry. Workplace practices in technology industries often include long, demanding hours which conflict with gendered domestic expectations; this conflict leads to less opportunity, with women opting for less demanding jobs. Gendered roles and expectations may also cause discriminatory tendencies during the hiring process, in which employers are reluctant to hire women as a way to avoid extra costs and benefits. Tech employers' reluctance to hire women results in women being placed in less demanding jobs with fewer opportunities, situating female employees in lower positions from which it is difficult to advance.
The lack of women and the existence of gender stereotypes within the technology industry often lead to discrimination and marginalization of women by colleagues and co-workers. Women often feel as though they are not taken seriously or feel unheard. Discrimination and gendered expectations often prevent or create difficulties for women to obtain higher positions within technology companies.
Property inheritance
Many countries, such as India, Indonesia, and Egypt, have laws that grant women a smaller share of inherited ancestral property than men. According to Equality Now, over 40% of countries currently have at least one constraint on women's property rights. Contrary to the belief that gender discrimination in property rights primarily happens in non-Western societies, a study by gender studies scholars Marco Casari and Maurizio Lisciandra explores the subject in Italy. They found that women in the Alps had more rights to collective properties in the medieval period than in the modern period, an indication that gender discrimination in property inheritance increased over time in some areas, depending on factors including the economic and other systems in place at the time. This discrimination especially affects widowed women and persons of other genders in countries with harmful laws and cultural practices.
Advocating for women's rights to property and property inheritance also helps climate justice. In a study about women's property rights, psychologist Martha Merrow states: "according to The Committee on the Elimination of Discrimination Against Women, 'women's rights to land and natural resources are a fundamental human right.' Emerging evidence from Landesa suggests that 'when women have secure land rights, efforts to tackle climate change are more successful and responsibilities and benefits associated with climate change response programs are more equitably distributed.'" Women and people of other genders play an important role in environmental management and decisions but are often excluded from access to the resources they need to continue their work. Gender equality in property inheritance is essential to preventing further discrimination and advancing universal climate goals throughout the world.
Structural marginalization
Gender inequalities often stem from social structures that have institutionalized conceptions of gender differences.
Marginalization occurs on an individual level when someone feels as if they are on the fringes or margins of their society. This is a social process that shows how current policies can affect people: for example, media advertisements display young girls with Easy-Bake Ovens (promoting the housewife role) as well as with dolls that they can feed and whose diapers they can change (promoting the mother role).
Gender stereotypes
Cultural stereotypes, which can dictate specific roles, are ingrained in both men and women, and these stereotypes are a possible explanation for gender inequality and the resulting gendered wage disparity. Women have traditionally been viewed as caring and nurturing and are designated to occupations which require such skills. While these skills are culturally valued, they have typically been associated with domesticity, so occupations requiring these same skills are not economically valued. Men have traditionally been viewed as the household's main earner, so jobs held by men have historically been economically valued, and occupations predominated by men continue to be economically valued and earn higher wages.
Gender stereotypes are greatly influenced by gender expectations: different expectations about gender shape how people determine their roles, appearance, and behavior. When expectations of gender roles are deeply rooted in people's minds, values and ideas are shaped accordingly, producing stereotypes that translate those ideas into actions and apply different standards to people's behavior. Gender stereotypes limit opportunities when performance or ability is judged against a person's gender at birth, so that women and men may encounter limitations and difficulties when challenging society by behaving in ways their gender is "not supposed" to. For example, men may be judged for staying at home to do housework while supporting their wives in going out to work, because men are expected to work outside the home and earn money for the family. Traditional gender stereotypes are being challenged in different societies today, and improvement can be observed: in some societies men may take responsibility for housework and women may work as construction workers. Change remains a slow process where traditional concepts and values are deep-rooted, but greater acceptance of varied gender roles and characteristics is likely to develop gradually.
Biological fertilization stereotypes
Bonnie Spanier coined the term "hereditary inequality". Her view is that some scientific publications depict human fertilization as if sperm actively compete for a "passive" egg, even though in reality fertilization is more complicated (e.g., the egg has specific active membrane proteins that participate in selecting sperm).
Sexism and discrimination
Gender inequality can further be understood through the mechanisms of sexism. Discrimination takes place due to the prejudiced treatment of men and women based on gender alone. Sexism occurs when men and women are framed within two dimensions of social cognition.
Discrimination also plays out with networking and in preferential treatment within the economic market. Men typically occupy positions of power in society. Due to socially accepted gender roles or preference to other men, males in power are more likely to hire or promote other men, thus discriminating against women.
Due to environmental injustice
Injustices caused by climate and environmental issues are a leading cause of gender inequality throughout the world, and women and people of other genders are the most vulnerable. Sociology and anthropology scholars Timothy W. Collins, Sara E. Grineski, and Danielle X. Morales contributed to a study in Houston, Texas, on unequal air pollution risks to people of different genders and sexual orientations. They state, "Women are often physically and socially relegated to some of the most toxic residential and occupational spaces in communities." Environmental injustices harm women in many ways. Geography and environmental studies scholar Miriam Gay-Antaki elaborates on this claim: "Environmental Injustices are damaging to women's and communities' ability to reproduce their own culture and tradition and thus their ability to be different than the mainstream." Environmental injustices do not only damage the land; they equally damage the bodies on that land, because there is a direct connection between the two. Gay-Antaki also states, "Women, people of color, and queer people are especially vulnerable in areas that have been conquered for economic pursuits." The areas where these people live are often disrupted for economic gain, which promotes gender violence. Industries causing environmental disasters create injustices for the people living in the surrounding areas and also promote gender discrimination and inequality.
The United States has some of the best environmental laws in the world: "on the surface, the history of U.S. environmental policy is one of sweeping success through pivotal regulation." However, it still suffers from flaws in policy which prevent it from achieving environmental justice, and the United States experiences environmental injustice at high rates. Women, people of other genders, people of color, and people of different sexualities are especially affected. One leading cause of environmental hazards and injustices across the U.S. is landfills, which emit greenhouse gases, accumulate toxins in human and natural ecosystems, and contaminate water. Human ecology scholar Clare Cannon conducted a study about the negative effects of landfills on vulnerable communities, highlighting the differences between how they affect men versus women and other genders. Cannon states, "Environmental hazards may be disproportionately located in minority neighborhoods with a concentration of female-headed households since institutionalized racial housing discrimination impedes housing choices, restricts residential movement, and concentrates poor women and racial minorities to neighborhoods with high levels of environmental hazard." This research acknowledges socioeconomic factors that help explain discrimination and inequality on the basis of gender. The health and social impacts of environmental injustice are unevenly distributed across genders: industrial plants and other pollution sources are often sited near communities with a high population of women and other gender identities, who are more likely to suffer from low wages and live in vulnerable communities.
Environmental injustice resulting from gender inequality and discrimination is faced all around the globe. It is not confined to one area, but it particularly affects vulnerable and impoverished parts of the world where women and people of other gender identities live. In the same study from 2020, Cannon also discusses worldwide gender inequality, stating that "The relative poverty of women worldwide also creates greater barriers in the face of environmental hazards, since women tend to experience poorer nutrition, limited health care, and, in the case of single, divorced, and widowed women, fewer sources of social support." Gender studies scholar Kadri Aavik conducted a 2021 study in Estonia comparing men's activities to women's and found that "men's daily activities pollute the environment more and are more harmful to health." Aavik notes that intersectional activism, where social justice and environmentalism combine, is necessary to combat the inequalities seen today.
In Indigenous communities
Throughout history, the colonization of Indigenous peoples and lands has promoted gender inequality. Many Native American communities in Canada and what is now the United States first operated on matriarchal systems of leadership, many of which approached the environment much differently than contemporary patriarchal systems. According to Indigenous scholars Courtney Defriend and Celeta M. Cook, the matriarchal leadership practiced by Native American Nations in the Pacific Northwest "not only welcomes women in leadership roles, but also are rooted in a deeper concept that women are direct reflections of the climate, land, and waters." Defriend and Cook also state, "Many First Nations used matrilineal systems for clan membership, with men marrying into a woman's family." Gender inequality within these Native American communities is intertwined with the negative history of colonialism, which has disrupted cultures, gender roles, and ways of living, and has exacerbated environmental degradation and environmental injustice. However, according to Indigenous scholars Margo Greenwood, Sarah de Leeuw, Roberta Stout, Roseann Larstone, and Julie Sutherland, "Despite violence, injustice, adversity, and inequities, Indigenous women are surviving, thriving, and resurging. They are asserting their matriarchal power and wisdom."
In the criminal justice system
A 2008 study of three US district courts gave some explanations for gender disparity in sentencing: 1) that women are sentenced more leniently than men because they are convicted of less serious crimes and have less serious criminal records, and because judges take personal factors relating to defendants (e.g. family responsibilities) into account; 2) that judges exercise chivalry or paternalism towards women in ways that discriminate against men; and 3) that apparent disparities are caused by the intersection of other factors, such as race (as data shows it is only white women, rather than women of color, who benefit from disparities). The study concluded that the second explanation was not evidenced in their data, but was unable to confirm the other two. Sonja B. Starr conducted a study in the US, published in 2012, that found that the prison sentences that men serve are on average 63% longer than those that women serve, when controlling for arrest offense and criminal history.
Men's rights advocates have argued that men's over-representation both among those who commit murder and among murder victims is evidence that men are being harmed by outmoded cultural attitudes. Supporting evidence was found in a study by Henchel and Grant: of 97 cases of sexual misconduct examined, 72 were against men and 25 against women; less than half of the female cases went to trial, and only 44% of those ended in incarceration, compared to 55% of the male cases. The reasoning for this unequal sentencing based on gender was explained by Martin Horn, director of the New York State Sentencing Commission and a professor at the John Jay College of Criminal Justice: "There's a general societal disposition that does continue to treat women as the gentler sex, so typically the threshold for sending women to prison is higher." Sentencing has historically been shaped by traditional gender biases and perceived notions about race and ethnicity. While sentences should be decided on a case-by-case basis, they often are not, and that is what creates this sentencing disparity.
Men are 63% more likely to be given longer and harsher sentences. Within that disparity, Black males receive sentences around 13.4% longer, and Hispanic males around 11.2% longer, than White men. Similarly, Hispanic females received sentences 27.8% longer than White females, while other non-white females received sentences about 10% shorter. The likelihood of receiving probation is also differentiated by race and gender: according to the United States Sentencing Commission, Black males were 23.4% less likely and Hispanic males 26.6% less likely than White males to receive a probationary sentence. Similar trends were observed for women, with Black and Hispanic females less likely to receive a probation sentence than White females (11.2% and 29.7% less likely, respectively).
The gender inequality that men face in prison carries over to their lives after incarceration. A major reason that people of color are more affected by having a criminal record is the imposition of financial burdens and collateral consequences on people with criminal convictions, and the diversion of public resources from effective interventions that promote public safety. Employment during and after incarceration pays almost nothing, and afterwards it is difficult for former inmates to enter the workforce with their record. The cost of lawyers, court processing, legal fees, and the like, which people in higher tax brackets can absorb, leaves many people with no choice but to go to prison for lack of funds for a fair trial. Moreover, the high cost of mass incarceration comes at the expense of investing in effective crime prevention, drug rehabilitation, and youth programs that could help decrease incarceration rates in low-income areas with high populations of people of color. These burdens make people more likely to fall back into their old lives of crime and lead to them becoming repeat offenders.
In 2022, Vicki Dabrowski and Emma Milne assessed female health care in the prison system in the United Kingdom. They found inconsistency in female and reproductive healthcare across prisons; more specifically, imprisoned women who had given birth reported feeling isolated and without access to healthcare professionals, and they reported a lack of access to feminine hygiene products. The United States prison system is one of the largest in the world, with 8% of facilities privately funded or for-profit, according to The Sentencing Project; yet there is still a lack of proper healthcare and medical attention for many women in need.
In a study done by the U.S. Department of Justice, women's prisons were found to be far more likely to have pseudo-families (inmates role-playing male and female characters in each other's lives), which are shown to be very helpful and supportive to many inmates. Women tend to have much more love-based relationships, sometimes never intending to be physical, whereas men in prison primarily have sexual relationships without romantic attachment. The study found that the deprivation of sustained interpersonal support from family and friends is far more damaging to females.
In this context, it makes sense that women's prisons are much less violent than men's prisons. However, when government-mandated reforms are put in place for correctional facilities, they are usually mandated in both male and female prisons. This is an example of gender inequality because male and female prison subcultures are constantly developing, most of the time in different directions, and they cannot be compared and changed in the same ways.
In a report by the Movement Advancement Project and the Center for American Progress, researchers found that transgender people are overrepresented in the criminal justice system: 21% of transgender women reported having spent time in jail, compared to 5% of all U.S. adults. The reason given for this disproportionate rate is that transgender people are more likely to be put in vulnerable situations due to gender discrimination; they are more likely to face discrimination in housing, employment, healthcare, and identification documents, leading to more interactions with the criminal justice system. The report also found that transgender women are more likely to experience gendered violence while in prison. When transgender women were placed in men's prisons in California, 59% reported that they had been sexually assaulted, compared to 4.4% of all male respondents; put differently, these transgender women were roughly 13 times more likely to be assaulted than other incarcerated men.
In television and film
The New York Film Academy took a closer look at women in Hollywood, gathering statistics from the top 500 films from 2007 to 2012 on their history and achievements, or lack thereof.
There was a 5:1 ratio of men to women working in films. Only 30.8% of speaking characters were women; 28.8% of female characters were written to wear revealing clothing, compared to 7% of male characters, and 26.2% of women wore little to no clothing, as opposed to 9.4% of men. A study analyzing five years of text from over 2,000 news sources found a similar 5:1 ratio of male to female names overall, and 3:1 for names in entertainment.
Hollywood actresses are paid less than actors. Topping the Forbes highest-paid actors list of 2013 was Robert Downey Jr. with $75 million. Angelina Jolie topped the highest-paid actresses list with $33 million, roughly level with Denzel Washington ($33 million) and Liam Neeson ($32 million), the last two on the top-ten highest-paid actors list.
In the 2013 Academy Awards, 140 men were nominated for an award, but only 35 women were nominated. No woman was nominated for directing, cinematography, film editing, writing (original screenplay), or original score that year. Since the Academy Awards began in 1929, only seven women producers have won the Best Picture category (all of whom were co-producers with men), and only eight women have been nominated for Best Original Screenplay. Lina Wertmuller (1976), Jane Campion (1994), Sofia Coppola (2004), and Kathryn Bigelow (2010) were at that point the only four women to have been nominated for Best Director, with Bigelow the first woman to win, for her film The Hurt Locker. The Academy Awards' voters are 77% male.
A group of Hollywood actors launched their own social movement titled #AskMoreOfHim, built on the basis of men speaking out against sexual misconduct towards women. A number of male activists, specifically in the film industry, signed an open letter acknowledging responsibility for their own actions, as well as calling out the actions of others. The letter was signed and supported by Friends actor David Schwimmer, among many others. The Hollywood Reporter published their support, saying, "We applaud the courage and pledge our support to the courageous women — and men, and gender non-conforming individuals — who have come forward to recount their experiences of harassment, abuse and violence at the hands of men in our country. As men, we have a special responsibility to prevent abuse from happening in the first place ... After all, the vast majority of sexual harassment, abuse and violence is perpetrated by men, whether in Hollywood or not." This accountability is intended to change the way women are seen and treated in the film and television industry, ideally closing the gap women experience in pay, promotion, and overall respect. The initiative was created in response to the #MeToo movement, which, popularized by a single tweet, asked women to share their stories of sexual assault by men in professional settings; within one day, 30,000 women had used the hashtag to share their stories. Many women feel they have more power in their voices than ever before and are choosing to make public claims that might have been brushed under the rug before today's internet culture. According to Time magazine, 95% of women in the film and entertainment industry report being sexually harassed by men in their industry. In addition to the #MeToo movement, women in the industry are using #TimesUp, which aims to help prevent sexual harassment in the workplace and to support victims who cannot afford their own resources.
In sports
The media gives more weight to men in sports news: according to a Sports Illustrated study of sports media coverage, women's sports account for only 5.7% of the news aired by ESPN.
Another problem causing increasing controversy is wage inequality: male athletes earn more money than female athletes in almost all branches of sport. The argument most often presented as the reason is that men's sports generate more income. However, more realistic evaluations argue that women and men are not given equal opportunities in sports, and that women start and continue in sport at a disadvantage. Some work has been done recently to address this inequality: countries such as the United States, Spain, Sweden, and Brazil have announced that men's and women's national football team athletes will receive equal pay. These developments can be seen as initial steps toward ending gender inequality in sports.
Impact and counteractions
Gender inequality and discrimination are argued to cause and perpetuate vulnerability in society as a whole. Household and intra-household knowledge and resources are key influences on individuals' abilities to take advantage of external livelihood opportunities or respond appropriately to threats. High education levels and social integration significantly improve the productivity of all members of the household and improve equity throughout society. Gender equity indices seek to provide the tools to demonstrate this feature of poverty.
Poverty has many different factors, one of which is the gender wage gap. Women are more likely to be living in poverty and the wage gap is one of the causes.
There are many difficulties in creating a comprehensive response. It is argued that the Millennium Development Goals (MDGs) fail to acknowledge gender inequality as a cross-cutting issue. Gender is mentioned in MDG3 and MDG5: MDG3 measures gender parity in education, the share of women in wage employment, and the proportion of women in national legislatures, while MDG5 focuses on maternal mortality and on universal access to reproductive health. These targets are significantly off-track.
Addressing gender inequality through social protection programmes designed to increase equity would be an effective way of reducing gender inequality, according to the Overseas Development Institute (ODI). Researchers at the ODI argue for the need to develop the following in social protection in order to reduce gender inequality and increase growth:
Community childcare to give women greater opportunities to seek employment
Support for parents with care costs (e.g. South African child/disability grants)
Education stipends for girls (e.g. Bangladesh's Girls Education Stipend scheme)
Awareness-raising regarding gender-based violence, which has surged globally in recent years, and other preventive measures, such as financial support for women and children escaping abusive environments (e.g. NGO pilot initiatives in Ghana)
Inclusion of programme participants (women and men) in designing and evaluating social protection programmes
Gender-awareness and analysis training for programme staff
Collection and distribution of information on coordinated care and service facilities (e.g. access to micro-credit and micro-entrepreneurial training for women)
Development of monitoring and evaluation systems that include sex-disaggregated data
The ODI maintains that society limits governments' ability to act on economic incentives.
NGOs tend to protect women against gender inequality and structural violence.
During war, combatants primarily target men. Both sexes, however, die from disease, malnutrition, and incidental crime and violence, as well as from battlefield injuries, which predominantly affect men. A 2009 review of papers and data covering war-related deaths disaggregated by gender concluded, "It appears to be difficult to say whether more men or women die from conflict conditions overall." The ratio also depends on the type of war; for example, in the Falklands War 904 of the 907 dead were men, whereas figures for war deaths in 1990, almost all relating to civil war, gave ratios in the order of 1.3 males per female.
Another opportunity to tackle gender inequality is presented by modern information and communication technologies. In a carefully controlled study, it has been shown that women embrace digital technology more than men. Given that digital information and communication technologies have the potential to provide access to employment, education, income, health services, participation, protection, and safety, among others (ICT4D), women's natural affinity with these new communication tools provides them with a tangible bootstrapping opportunity to tackle social discrimination. A target of global initiatives such as United Nations Sustainable Development Goal 5 is to enhance the use of enabling technology to promote the empowerment of women.
Variations by country or culture
Gender inequality is a result of the persistent discrimination of one group of people based upon gender and it manifests itself differently according to race, culture, politics, country, and economic situation. While gender discrimination happens to both men and women in individual situations, discrimination against women is more common.
In the Democratic Republic of the Congo, rape and violence against women and girls is used as a tool of war. In Afghanistan, girls have had acid thrown in their faces for attending school. Considerable focus has been given to the issue of gender inequality at the international level by organizations such as the United Nations (UN), the Organisation for Economic Co-operation and Development (OECD), and the World Bank, particularly in developing countries. The causes and effects of gender inequality vary geographically, as do methods for combating it.
According to the World Economic Forum's Global Gender Gap Report 2023, it will take an estimated 131 years for the gender gap to close.
Asia
One example of the continued existence of gender inequality in Asia is the "missing girls" phenomenon: "Many families desire male children in order to ensure an extra source of income. In China, females are perceived as less valuable for labor and unable to provide sustenance." Gender inequality is also reflected in education in rural China, where gender stereotypes persist; for example, families may consider it useless for girls to acquire knowledge at school, because they will eventually marry and their major responsibility will be to take care of housework.
Furthermore, current formal education in Asia may also be a result of historical tendencies. For instance, insufficient supply of and demand for women's education is reflected in the development of numeracy levels throughout Asia between 1900 and 1960. Regions like South and West Asia had low numeracy levels during the early and mid-20th century and consequently showed no significant trends toward gender equality. East Asia, in turn, was characterized by a high numeracy level and gender equality; the success of this region is related to higher education and hence a higher participation rate of women in the economic life of the region.
China
Gender inequality in China derives from deeply rooted Confucian beliefs about gender roles in society.
Despite that, gender inequality in China was relatively modest before the beginning of the Chinese economic reform in 1978. The transition to an economic system with market elements during the 1980s, though, was characterized by increasing gender inequality, which was further influenced by the son preference underlying the one-child policy. Women still face discrimination in China today, despite the existence of state programs.
According to the United Nations Development Program, China was ranked 39 out of 162 countries on the Gender Inequality Index in 2018, while it was ranked 91 out of 187 in 2014.
According to the World Economic Forum's global gender gap index, China's gap has widened and its rank has dropped to 106 out of 153 countries in 2020. It ranked last in terms of health and survival.
According to Human Rights Watch, job discrimination remains a significant issue, as 11% of postings specify a preference or requirement for men. Chinese women are often asked during interviews whether they expect to have children, as this is considered an obstacle to the job application, and since women generally retire around 40, it is difficult for them to advance.
In addition, Chinese women earned 78.2 cents for every dollar paid to a man in 2019, according to a study conducted by BOSS Zhipin.
South Korea
Gender inequality in South Korea is derived from deeply rooted patriarchal ideologies with specifically defined gender-roles. The gender-based stereotypes are often unchallenged and even encouraged by the government.
South Korea has the lowest rank among OECD countries in The Economist's Glass Ceiling Index, which evaluates women's higher education, number of women in managerial positions and in parliament.
The gap has narrowed in healthcare and education, but it remains wide in the economy and politics. In fact, out of 36 OECD countries, South Korea ranked 30th for women's employment in 2018.
Victims of gender-based discrimination struggle to make a case and obtain justice, as gender discrimination is hard to prove; some do not complain at all because they fear the repercussions.
The existing directives against gender discrimination are not effective because the law is weakly enforced and corporations do not comply.
The inequality is even stronger in politics, with women holding 17% of the seats in the parliament.
Cambodia
A Cambodian saying, "Men are gold, women are white cloth", emphasizes that women hold a lower value and importance than men. In Cambodia, approximately 15% (485,000 hectares) of land is owned by women. In Asian culture, there is a stereotype that women have lower status than men because males carry on the family name and hold the responsibility of providing for the family, while females play a less important role, mainly carrying out domestic chores and taking care of husbands and children. Women are also the main victims of poverty, as they have little or no access to education, low pay, and little chance of owning assets such as land, homes or even basic items.
In Cambodia, the Ministry of Women's Affairs (MoWA) was formed in 1998 with the role of improving women's overall power and status in the country.
India
India's ranking remains low in gender equality measures by the World Economic Forum, although it has been improving in recent years. When broken down into the components that contribute to the rank, India performs well on political empowerment but is scored near the bottom with China on sex-selective abortion. India also scores poorly on overall female-to-male literacy and health rankings. With a 2013 ranking of 101 out of 136 countries, India had an overall score of 0.6551, while Iceland, the nation that topped the list, had an overall score of 0.8731 (no gender gap would yield a score of 1.0). Gender inequalities impact India's sex ratio, women's health over their lifetimes, their educational attainment, and economic conditions. It is a multifaceted issue that concerns men and women alike.
The labor force participation rate of women was 80.7% in 2013. Nancy Lockwood of the Society for Human Resource Management, the world's largest human resources association with members in 140 countries, wrote in a 2009 report that female labor participation is lower than men's but has been rising rapidly since the 1990s. Out of India's 397 million workers in 2001, 124 million were women, states Lockwood.
India was on target to meet its Millennium Development Goal of gender parity in education before 2016. UNICEF's measures of attendance rate and the Gender Equality in Education Index (GEEI) attempt to capture the quality of education. Despite some gains, India needed to triple its rate of improvement to reach a GEEI score of 95% by 2015 under the Millennium Development Goals. A 1998 report stated that girls in rural India continued to be less educated than boys.
In India, integrating women in forest and energy initiatives is linked to a 28% higher likelihood of forest regeneration and a 30% rise in sales of off-grid energy solutions.
Africa
Although African nations have made considerable strides towards improving gender parity, the World Economic Forum's 2018 Global Gender Gap Index reported that sub-Saharan African and North African countries had bridged only 66% and 60% of their gender gaps, respectively. Women face considerable barriers to attaining equal status with men in terms of property ownership, gainful employment, political power, credit, education, and health outcomes. In addition, women are disproportionately affected by poverty and HIV/AIDS because of their lack of access to resources and because of cultural influences. Other key issues are adolescent births, maternal mortality, gender-based violence, child marriage, and female genital mutilation. An estimated 50% of adolescent childbirths and 66% of all maternal deaths occur in sub-Saharan African nations. Women have few rights and legal protections, which has led to higher numbers of child marriages and female genital mutilations than on any other continent. Furthermore, Burkina Faso, Côte d'Ivoire, Egypt, Lesotho, Mali, and Niger do not have any legal protections against gender-based domestic violence. Religion also contributes to the gender inequality experienced by women in Africa; for example, religious norms in Nigeria limit women's ability to hold leadership roles and place blame on women who seek traditionally "male" roles.
Europe
The Global Gender Gap Report put out by the World Economic Forum (WEF) in 2013 ranks nations on a scale of 0 to 1, with a score of 1.0 indicating full gender equality. A nation with 35 women and 65 men in political office would get a score of 0.538 (35/65), as the WEF measures the gap between the two figures and not the actual percentage of women in a given category. While Europe holds the top four spots for gender equality, with Iceland, Finland, Norway and Sweden ranking first through fourth respectively, it also contains two nations ranked in the bottom 30 countries, Albania at 108 and Turkey at 120. The Nordic countries have for several years been at the forefront of bridging the gap in gender inequality: every Nordic country aside from Denmark, which is at 0.778, has scored above 0.800. In contrast to the Nordic nations, Albania and Turkey continue to struggle with gender inequality, failing to break the top 100 nations in two of four and three of four factors, respectively.
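The arithmetic behind such a score is straightforward. A minimal sketch in Python (the capping of the ratio at 1.0 reflects the report's stated practice of measuring the gap rather than rewarding imbalances that favor women; the function name is illustrative, not the WEF's):

    def gap_score(female: float, male: float) -> float:
        # Female-to-male ratio, capped at 1.0 so that imbalances
        # favoring women do not raise the score above parity.
        if male == 0:
            return 1.0
        return min(female / male, 1.0)

    # The parliament example from the paragraph above:
    print(round(gap_score(35, 65), 3))  # prints 0.538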
Acting to promote gender equality might contribute $13 trillion to global GDP by 2030. According to the European Institute for Gender Equality, improving gender equality in the EU could result in a 9.6% rise in EU GDP per capita, or €3.15 trillion, as well as an additional 10.5 million jobs by 2050, which would benefit both genders.
Gender is also an important aspect of economic inequality. Because women continue to hold lower-paying jobs, they earn 13% less than men on average across the European Union. According to European Quality of Life Survey and European Working Conditions Survey data, women in the European Union work more hours overall but for less pay: adult men (including the retired) average 23 hours of paid work per week, compared with 15 hours for women.
The surveys found that while men spend up to 14 hours per week doing unpaid housework and caring for children and other family members, women spend up to 28 hours per week on the same unpaid tasks, so that in total women work up to six hours longer than men. If all unpaid work done by men and women were valued at the EU median wage, it would be worth nearly €6 trillion, or 40% of European gross domestic product.
Western Europe
Western Europe, a region most often described as comprising the non-communist members of post-WWII Europe, has for the most part been doing well in eliminating the gender gap, holding 12 of the top 20 spots on the Global Gender Gap Report for overall score. While remaining mostly in the top 50 nations, four Western European nations fall below that benchmark: Portugal sits just outside the top 50 at number 51 with a score of 0.706, while Italy (71), Greece (81) and Malta (84) received scores of 0.689, 0.678 and 0.676, respectively.
According to the United Nations, 21 of the EU's member states are in the top 30 in the world in terms of gender equality; the European Institute for Gender Equality likewise records a slow improvement in the EU's gender equality score since 2005.
The Council of Europe Commissioner for Human Rights has raised gender inequality as one of the main human rights problems facing European countries and has acknowledged the slow progress in bridging the gender pay gap and addressing discrimination at work.
According to the European Institute for Gender Equality, the EU seems to be the closest to gender equality in the health and money domains but has a more worrying score in the domain of power. As acknowledged by the Council of Europe Commissioner for Human Rights, the EU is only slowly progressing when it comes to tackling women's underrepresentation in political decision-making.
The progress towards gender equality is uneven between member states. In fact, while Sweden and Denmark appear to be the most gender-equal societies, Greece and Hungary are far from it. Italy and Cyprus are the states which improved the most.
Eastern Europe
A large portion of Eastern Europe, a region most often described as the former communist members of post-WWII Europe, resides between 40th and 100th place in the Global Gender Gap Report. A few outlier countries include Lithuania, which jumped nine places (37th to 28th) from 2011 to 2013, Latvia, which has held the 12th spot for two consecutive years, Albania and Turkey.
Russia
According to the United Nations Development Programme, Russia's gender inequality index is 0.255, ranking it 54 out of 162 countries in 2018. Women hold 16.1% of parliamentary seats and 96.3% have reached at least a secondary level of education. Researchers calculate the loss to the annual budget due to gender segregation to be roughly 40–50%. Although women hold prominent positions in Russia's government, traditional gender roles are still prevalent, and there is room for improvement when dealing with gender pay gap, domestic violence and sexual harassment.
Turkey
According to the World Economic Forum's 2020 Global Gender Gap Index, compiled from data on education, participation in the economy, political representation and health, Turkey ranks 130th out of 153 countries. In other words, only 23 countries have a wider gender gap than Turkey, including sharia-governed countries such as Iran, Pakistan and Saudi Arabia and less developed African countries such as Mali, Togo and the Gambia. According to TurkStat data, 57% of women in Turkey report being happy, against 47.6% of men. The labor force participation rate of women, which reflects women's place in working life, is 36.2% in Turkey, against an OECD average of 63.6%; Turkey has one of the lowest rates of female labor force participation not only among the OECD countries of which it is a member, but in the whole world. According to the United Nations Development Programme's 2016 Human Development Report, the labor force participation rate of women averages 49.6% worldwide, significantly higher than in Turkey. Female unemployment in Turkey (14%) is also higher than the OECD average (9.8%), leaving women's position in working life notably precarious. This unequal position is reflected in economic income inequality. Women's share of gross national income is lower than men's in all countries, but gender income inequality in Turkey is higher than the OECD and world averages: gross national income per capita for women in Turkey is 39.3% of that for men, against an OECD average of 59.6% and a world average of 55.5%.
United States
The World Economic Forum measures gender equity through a series of economic, educational, and political benchmarks. It has ranked the United States as 19th (up from 31st in 2009) in terms of achieving gender equity. The US Department of Labor has indicated that in 2009, "the median weekly earnings of women who were full-time wage and salary workers was ... 80 percent of men's". The Department of Justice found that in 2009, "the percentage of female victims (26%) of intimate partner violence was about 5 times that of male victims (5%)". As of 2019, the average number of women killed by an intimate partner each day has gone up from three to around four. The United States ranks 41st in a ranking of 184 countries on maternal deaths during pregnancy and childbirth, below all other industrialized nations and a number of developing countries, and women are just 20% of members of the United States Congress. Economically, women are significantly underrepresented in prestigious and high paying occupations like company ownership and CEO roles, where they account for just 5.5% of the latter. Women are around 15% of self-made millionaires and 11.8% of billionaires.
Political affiliations and behaviors
Existing research on gender/sex and politics has found differences in political affiliation, beliefs, and voting behavior between men and women, although these differences vary across cultures. Gender is omnipresent in every culture, and while there are many factors to consider when labeling people "Democrat" or "Republican", such as race and religion, gender is especially prominent in politics. Studying gender and political behavior poses challenges, as it can be difficult to determine whether men and women actually differ in substantial ways in their political views and voting behavior or whether biases and stereotypes about gender cause people to make assumptions. Nevertheless, trends in voting behavior among men and women have been documented through research.
Research shows that women in postindustrial countries like the United States, Canada, and Germany primarily identified as conservative before the 1960s; however, as time has progressed and new waves of feminism have occurred, women have moved left, reflecting shared beliefs and values between women and parties of the left. Women in these countries typically oppose war and the death penalty, favor gun control, support environmental protection, and are more supportive of programs that help people of lower socioeconomic status. Men's voting behavior has not shifted as drastically over the last fifty years as women's, and it tends to be consistently more conservative. These trends change with every generation, and factors such as culture, race, and religion must also be considered when discussing political affiliation; such factors make the connection between gender and political affiliation complex due to intersectionality.
Candidate gender also plays a role in voting behavior. Women candidates are far more likely than male candidates to be scrutinized and to have their competence questioned by both men and women when voters seek information on candidates in the early stages of election campaigns. Male Democratic voters tend to seek more information about female Democratic candidates than about male Democratic candidates, and female Republican voters tend to seek more information about female Republican candidates. For this reason, female candidates in either party typically need to work harder than their male counterparts to prove themselves competent.
Challenges to women in politics
Overall, politics in the United States is dominated by men, which can pose many challenges to women who decide to enter the political sphere. As the number of women participating in politics continues to increase around the world, the gender of female candidates serves as both a benefit and a hindrance within their campaign themes and advertising practices. The overarching challenge seems to be that, no matter their actions, women are unable to win in the political sphere because different standards are used to judge them than their male counterparts.
One area in particular that exemplifies varying perceptions between male and female candidates is the way female candidates decide to dress and how their choice is evaluated. When women decide to dress more masculine, they are perceived as being "conspicuous". When they decide to dress more feminine, they are perceived as "deficient". At the same time, however, women in politics are generally expected to adhere to the masculine standard, thereby validating the idea that gender is binary and that power is associated with masculinity. As illustrated by the points above, these simultaneous, mixed messages create a "double-bind" for women. Some scholars go on to claim that this masculine standard represents symbolic violence against women in politics.
Political knowledge is a second area where male and female candidates are evaluated differently, and where political science research has consistently measured women as having a lower level of knowledge than their male counterparts. One explanation for this finding is that different groups attend to different areas of political knowledge. Following this line of thought, scholars advocate replacing traditional measures of political knowledge with gender-relevant political knowledge, on the grounds that women are not as politically disadvantaged as they may appear.
A third area that affects women's engagement in politics is their lower level of political interest and their perception of politics as a "men's game". Despite female candidates' political contributions being equal to those of male candidates, research has shown that women perceive more barriers to office in the form of rigorous campaigns, less overall recruitment, inability to balance office and family commitments, hesitancy to enter competitive environments, and a general lack of belief in their own merit and competence. Male candidates are evaluated most heavily on their achievements, while female candidates are evaluated on their appearance, voice, verbal dexterity, and facial features in addition to their achievements.
Steps needed for change
Several forms of action have been taken to combat institutionalized sexism. People are beginning to speak up or "talk back" in a constructive way to expose gender inequality in politics, as well as gender inequality and under-representation in other institutions. Researchers who have delved into the topic of institutionalized sexism in politics have introduced the term "undoing gender". This term focuses on education and an overarching understanding of gender by encouraging "social interactions that reduce gender difference". Some feminists argue that "undoing gender" is problematic because it is context-dependent and may actually reinforce gender. For this reason, researchers suggest "doing gender differently" by dismantling gender norms and expectations in politics, but this can also depend on culture and level of government (e.g. local versus federal).
Another key to combating institutionalized sexism in politics is to diffuse gender norms through "gender-balanced decision-making", particularly at the international level, which "establishes expectations about appropriate levels of women in decision-making positions." In conjunction with this solution, scholars have started placing emphasis on "the value of the individual and the importance of capturing individual experience" throughout a candidate's political career, whether that candidate is male or female, instead of on the collective male or female candidate experience. Five recommended areas of further study for examining the role of gender in U.S. political participation are (1) realizing the "intersection between gender and perceptions"; (2) investigating the influence of "local electoral politics"; (3) examining "gender socialization"; (4) discerning the connection "between gender and political conservatism"; and (5) recognizing the influence of female political role models in recent years. Because gender is intricately entwined in every societal institution, gender in politics can only change once gender norms in other institutions change as well.
Sexual violence
In a national survey conducted in the United States of America, 14.8% of women over 17 years of age reported having been raped in their lifetime (with an additional 2.8% having experienced attempted rape) and 0.3% of the sample reported having been raped in the previous year.
See also
References
Sources
Bibliography
Leila Schneps and Coralie Colmez, Math on Trial: How Numbers Get Used and Abused in the Courtroom, Basic Books, 2013. (Sixth chapter: "Math error number 6: Simpson's paradox. The Berkeley sex bias case: discrimination detection".)
Higgins, M. and Reagan, M. (n.d.). The Gender Wage Gap, 9th ed. North Mankato: Abdo Publishing, pp. 9–11.
Further reading
Sexism
Psychoanalytic terminology
Social inequality
Gender equality
Sex differences in humans
Decadence
The word decadence refers to a late-19th-century movement emphasizing the need for sensationalism, egocentricity, and bizarre, artificial, perverse, and exotic sensations and experiences. By extension, it may refer to a decline in art, literature, science, technology, and work ethics, or (very loosely) to self-indulgent behavior.
Usage of the term sometimes implies moral censure, or an acceptance of the idea, met with throughout the world since ancient times, that such declines are objectively observable and that they inevitably precede the destruction of the society in question; for this reason, modern historians use it with caution. The word originated in Medieval Latin (dēcadentia), appeared in 16th-century French, and entered English soon afterwards. It bore the neutral meaning of decay, decrease, or decline until the late 19th century, when the influence of new theories of social degeneration contributed to its modern meaning.
The idea that a society or institution is declining is called declinism. This may be caused by the predisposition, caused by cognitive biases such as rosy retrospection, to view the past more favourably and future more negatively. Declinism has been described as "a trick of the mind" and as "an emotional strategy, something comforting to snuggle up to when the present day seems intolerably bleak." Other cognitive factors contributing to the popularity of declinism may include the reminiscence bump as well as both the positivity effect and negativity bias.
In literature, the Decadent movement began in France's fin de siècle, intermingling with Symbolism and the Aesthetic movement while spreading throughout Europe and the United States. The Decadent label was originally used as a criticism, but it was soon triumphantly adopted by some of the writers themselves. The Decadents praised artifice over nature and sophistication over simplicity, defying contemporary discourses of decline by embracing subjects and styles that their critics considered morbid and over-refined. Some of these writers were influenced by the tradition of the Gothic novel and by the poetry and fiction of Edgar Allan Poe.
History
Ancient Rome
Decadence is a popular criticism of the culture of the later Roman Empire's elites, seen also in much of its earlier historiography and 19th and early 20th century art depicting Roman life. This criticism describes the later Roman Empire as reveling in luxury, in its extreme characterized by corrupting "extravagance, weakness, and sexual deviance", as well as "orgies and sensual excesses".
Victorian-era Artwork on Roman Decadence
According to Professor Joseph Bristow of UCLA, decadence in Rome and the Victorian-era movement are connected through the idea of "decadent historicism." In particular, decadent historicism refers to the "interest among…1880s and 1890s writers in the enduring authority of perverse personas from the past" including the later Roman era. As such, Bristow's argument references how Heliogabalus, the title subject of Simeon Solomon's painting Heliogabalus, High Priest of the Sun (1866), was "a decadent icon" for the Victorian movement. Bristow also notes that "[t]he image [of the painting] summons many qualities linked with fin-de-siècle decadence [alongside his]…queerness[,]" thus "inspir[ing] late-Victorian writers [as]…they…imagine anew sexual modernity."
Heliogabalus is also the subject of The Roses of Heliogabalus (1888) by Sir Lawrence Alma-Tadema, which, according to Professor Rosemary Barrow, represents "the artist['s]…most glorious revel in Roman Decadence." To Barrow, "[t]he authenticity of the [scene]…perhaps had little importance for the artist[, meaning that] its appeal is the entertaining and extravagant vision it gives of later imperial Rome." Barrow also makes a point to mention "that Alma-Tadema’s Roman-subject paintings [tend to]…make use of historical, literary and archaeological sources" within themselves. Thus, the presence of roses within the painting as opposed to the original "'violets and other flowers'" of the source material emphasizes how "the Roman world…h[eld] extra connotations of revelry and luxuriant excess" about them.
Decadent movement
Decadence was the name given to a number of late nineteenth-century writers who valued artifice over the earlier Romantics' naïve view of nature. Some of them triumphantly adopted the name, referring to themselves as Decadents. For the most part, they were influenced by the tradition of the Gothic novel and by the poetry and fiction of Edgar Allan Poe, and were associated with Symbolism and/or Aestheticism.
This concept of decadence dates from the eighteenth century, especially from Montesquieu and Wilmot. It was taken up by critics as a term of abuse after Désiré Nisard used it against Victor Hugo and Romanticism in general. A later generation of Romantics, such as Théophile Gautier and Charles Baudelaire took the word as a badge of pride, as a sign of their rejection of what they saw as banal "progress." In the 1880s, a group of French writers referred to themselves as Decadents. The classic novel from this group is Joris-Karl Huysmans' Against Nature, often seen as the first great decadent work, though others attribute this honor to Baudelaire's works.
In Britain and Ireland, the leading figure associated with the Decadent movement was the Irish writer Oscar Wilde. Other significant figures include Arthur Symons, Aubrey Beardsley and Ernest Dowson.
The Symbolist movement has frequently been confused with the Decadent movement. Several young writers were derisively referred to in the press as "decadent" in the mid-1880s. Jean Moréas' manifesto was largely a response to this polemic. A few of these writers embraced the term while most avoided it. Although the aesthetics of Symbolism and Decadence can be seen as overlapping in some areas, the two remain distinct.
1920s Berlin
This "fertile culture" of Berlin extended onwards until Adolf Hitler rose to power in early 1933 and stamped out any and all resistance to the Nazi Party. Likewise, the German far-right decried Berlin as a haven of degeneracy. A new culture had developed in and around Berlin throughout the previous decade, including architecture and design (Bauhaus, 1919–33), a variety of literature (Döblin, Berlin Alexanderplatz, 1929), film (Lang, Metropolis, 1927, Dietrich, Der blaue Engel, 1930), painting (Grosz), and music (Brecht and Weill, The Threepenny Opera, 1928), criticism (Benjamin), philosophy/psychology (Jung), and fashion. This culture was considered decadent and disruptive by rightists.
Film was making huge technical and artistic strides during this period in Berlin and gave rise to the influential movement called German Expressionism. "Talkies", the sound films, were also becoming more popular with the general public across Europe, and Berlin was producing many of them.
Berlin in the 1920s also proved to be a haven for English-language writers such as W. H. Auden, Stephen Spender and Christopher Isherwood, who wrote a series of 'Berlin novels' that inspired the play I Am a Camera, later adapted into a musical, Cabaret, and an Academy Award-winning film of the same name. Spender's semi-autobiographical novel The Temple evokes the attitude and atmosphere of the place at the time.
Decadent Nihilistic Art
The philosophy of decadence derives from the work of the German philosopher Arthur Schopenhauer (1788–1860); it was Friedrich Nietzsche (1844–1900), however, a philosopher of decadence in his own right, who conceptualized modern decadence on a more influential scale. Holding decadence to be a condition that ultimately limits what something or someone can be, Nietzsche used his exploration of nihilism to critique the traditional values and morals that he saw as driving the decline of art, literature, and science. Nihilism, generally, is the rejection of moral principles and, ultimately, the belief that life is meaningless. For Nietzsche, nihilism was the ultimate fate of Western civilization: as old values lost their influence, their purpose disappeared from society. Predicting a rise in decadence and aesthetic nihilism, he anticipated that creators would renounce the pursuit of beauty and instead welcome incomprehensible chaos.
In art, movements connected to nihilism, such as Cubism and Surrealism, pushed for the abandonment of traditional viewpoints in order to tap into the potential of the conscious mind. Paintings like Edgar Degas's L'Absinthe (1875–76) and Kazimir Malevich's Black Square (1915) grew out of this impulse. L'Absinthe, first shown in 1876, was mocked and called disgusting by critics; some saw the painting as a blow to morality, with its glass of absinthe, an alcoholic drink, resting on the table in front of a woman. Taken to be in bad faith and quite uncouth, Degas used decadence to portray the ambiguity of subjects who seem to drift between depression and euphoria and, using nihilism almost synonymously, gave his paintings a general mood of despair aimed at existence as a whole. Next to Degas's work, Malevich's Black Square shows abstract nihilistic art in the Western tradition only beginning to take shape as the 20th century arrived. Malevich's conception of the piece embraced a philosophy connected to Suprematism, a new realism in painting that evokes non-objectivity in order to experience the "white emptiness of a liberated nothing," as Malevich himself put it. In nihilism, life has, in a sense, no truth, and therefore no action is objectively preferable to another; Malevich's painting accordingly abandons the depiction of reality altogether and creates its own world of new form. When it was first exhibited, the public was in uproar: society was in its first world war, and Malevich's work reflected a new social revolution, a symbol of a new tomorrow that disregarded the past in order to move forward. Through these paintings, decadence can be read as a physiological foundation for nihilism, giving rise to the term "decadent nihilism": existing beyond the world and its vain virtues. According to Nietzsche, Western metaphysical and nihilistic thought is decadent because it seeks confirmation from 'others' (apart from oneself) on the basis of ideas of a nihilistic God. The extreme position an artist takes is what makes their pieces decadent.
Decadent Aesthetics
Controversy
Vladimir Nabokov's Lolita (1955)
Aesthetics falling under the category of decadence often invite controversy. An example of a controversial style founded on decadent literary influence is the novel Lolita by Vladimir Nabokov, a Russian-American writer. Lolita, narrated by a pedophile, directly engages with decadent literature. According to Will Norman of the University of Kent, the novel makes many references to historical figures associated with decadence, such as Edgar Allan Poe and Charles Baudelaire. Norman states, "... Lolita emerges as the risky reinstatement of a transatlantic decadent tradition, in which the failure of temporal and ethical containment disrupts a dominant narrative of modernism's history in American letters". Lolita purposefully exemplifies a moral decline while simultaneously disregarding the ethics of Nabokov's time, and the emphasis on its temporal standing in history captures an intermediate state of decadent literature itself. Norman writes that "... Nabokov reproduces the tension between American regionalism and modernist cosmopolitanism in his own 'Edgar H. Humbert', as the European aesthete embarks on his road-trip with Dolores...". The text exemplifies Nabokov's desire to replicate the many social disparities of American culture while using his character, H. Humbert, to demonstrate a lack of moral judgement. Norman continues, "Nabokov's text positions itself as the dynamic historical agent, importing Poe wholesale (from caricature through to complex literary intellectual) into the present and facilitating his critique in the hands of the reader". Leaving the judgement in the hands of the reader, Nabokov uses Lolita to work through the complexities that decadence presents for ethical or moral obligations to society. Norman concludes, "Lolita joins American works such as William Faulkner's The Sound and the Fury (1929) and F. Scott Fitzgerald's Tender Is the Night (1934) which assimilate themes of incest and sexual pathology into their decadent aesthetics, with the effect of bringing European temporalities into conflict with American social modernity". Through this controversial method, Nabokov employs decadent aesthetics to document a moment of historical transition.
Women in Decadence
Not only do the stylistic choices of decadent literature cause ethical debate; the presence of women in the literature has also caused political controversy. Viola Parente-Čapková, a lecturer at a university in Prague, Czech Republic, argues that women writers following the decadent literary mode have been overlooked because of their simultaneous influence on the feminist movement. The belief that women could not separate morality from their writing, because their prose purposefully argued for women's rights, suggests a theme of misogyny: men excluded women from being considered Decadent writers because their work carried the possibility of a desire for social change.
Social Change
Decadence offers a world-view, in that "it is an ideological phenomenon originating in the experience of a particular group, and it became the aesthetics of the upper-middle class". Changes in European industrialization and urbanization led to the development of the proletariat, the nuclear family, and the entrepreneurial class. The values of Decadence formed in opposition to "those of an earlier and supposedly more vital bourgeoisie". Aesthetically, progress turns into decay, activity becomes play instead of goal-oriented work, and art becomes a way of life. To individuals observing the changes in social structure after rapid industrialization, the idea of progress becomes something to rebel against, because this real-world progress seems to be leaving them behind.
Modern-day perspectives
Postmodernist Connection to Decadence
Nearly a century after the supposed end of the decadent period itself, its spirit and drive continued into the next end of the century. Unknowingly following in the footsteps of the decadents before them, postmodernists have subscribed to many of the same habits. Both groups found themselves simultaneously exhausted by all the new experiences of society while still putting all their efforts into experiencing them; the postmodernist is aware of a desire for modernist disintegration while still enjoying the products of the dying predecessor. This ravenous eye for the new reflects the lives of practicing decadents, who likewise enjoyed all the new experiences offered by their own time's modernity. Both movements were deeply intertwined with expanding globalization: just as the decadents drew on global connection and experience in their literary and visual art, so has the postmodernist. During the rise of postmodernism, a clear concentration of power and wealth supported globalization, and this restructured the desires of the disintegration-loving postmodernist, who indulged in all the newness of globalized life. This renewed interest in a global view of the world brought along a renewed interest in different forms of artistic representation as well.
Modernism tends to belittle popular culture through its oppressive nature, which can be seen as elitist and controlling, since it privileges certain works of art above others. As a result, postmodern artists and writers developed a contempt for the canon, rejecting tradition and essentialism; this disdain for privilege extended to the fields of philosophy, science, and of course politics. Pierre Bourdieu provides some insight into the attitudes of this new sub-class and its relation to postmodern theorists, embodied by students of bourgeois descent who began to pursue their artistic interests at their schools after being overshadowed academically. In Bourdieu's words, they are "victims of verdicts which, like those of the school, appeal to reason and science to block off the paths leading (back) to power, and are quick to denounce science, power, the power of science, and above all perhaps a power which, like the triumphant technology of the moment, appeals to science to legitimate itself". This postmodern way of thought is guided by an anti-institutional temperament that flees the competitions and hierarchies through which art becomes confined by labels; postmodern work is accordingly difficult to define. "In the name of the fight against 'taboos' and the liquidation of 'complexes'", such artists "adopt the most external and most easily borrowed aspects of the intellectual life-style, liberated manners, cosmetic or sartorial outrages, emancipated poses and postures, and systematically apply the cultivated disposition to not-yet-legitimate culture (cinema, strip cartoons, the underground), to every-day life (street art), the personal sphere (sexuality, cosmetics, child-rearing, leisure) and the existential (the relation to nature, love, death)". Their craft evolves into a way of being that directly criticizes modernist attitudes and gives postmodern artists and writers a newfound sense of freedom through rebellion.
Jacques Barzun
The historian Jacques Barzun (1907–2012) gives a definition of decadence which is independent from moral judgement. In his bestseller From Dawn to Decadence: 500 Years of Western Cultural Life (published 2000) he describes decadent eras as times when "the forms of art as of life seem exhausted, the stages of development have been run through. Institutions function painfully." He emphasizes that "decadent" in his view is "not a slur" but "a technical label".
With reference to Barzun, New York Times columnist Ross Douthat characterizes decadence as a state of "economic stagnation, institutional decay and cultural and intellectual exhaustion at a high level of material prosperity and technological development". Douthat sees the West in the 21st century in an "age of decadence", marked by stalemate and stagnation. He is the author of the book The Decadent Society, published by Simon & Schuster in 2020.
Pria Viswalingam
Pria Viswalingam, an Australian documentary and film maker, sees the western world in decay since the late 1960s. Viswalingam is the author of the six-episode documentary TV series Decadence: The Meaninglessness of Modern Life, broadcast in 2006 and 2007, and the 2011 documentary film Decadence: The Decline of the Western World.
According to Viswalingam, western culture started in 1215 with the Magna Carta, continued to the Renaissance, the Reformation, the founding of America, the Enlightenment and culminated with the social revolutions of the 1960s.
Since 1969, the year of the moon landing, the My Lai massacre, the Woodstock Festival and the Altamont Free Concert, "decadence depicts the west's decline". As symptoms he names increasing suicide rates, addiction to anti-depressants, exaggerated individualism, broken families and a loss of religious faith, as well as "treadmill consumption, growing income-disparity, b-grade leadership" and money as the only benchmark of value.
Use in Marxism
Leninism
According to Vladimir Lenin, capitalism had reached its highest stage and could no longer provide for the general development of society. He expected reduced vigor in economic activity and a growth in unhealthy economic phenomena, reflecting capitalism's gradually decreasing capacity to provide for social needs and preparing the ground for socialist revolution in the West. Politically, World War I proved to Lenin the decadent nature of the advanced capitalist countries: capitalism had reached the stage where it would destroy its own prior achievements more than it would advance.
One who directly opposed the idea of decadence as expressed by Lenin was José Ortega y Gasset in The Revolt of the Masses (1930). He argued that the "mass man" had the notion of material progress and scientific advance so deeply inculcated that it had become an expectation, and that contemporary progress was the opposite of the true decadence of the Roman Empire.
Left communism
Decadence is an important aspect of contemporary left communist theory. Like Lenin, the left communists, who themselves came out of the Communist International, started with a theory of decadence; the communist left, however, sees the theory of decadence at the heart of Marx's method as well, expressed in famous works such as The Communist Manifesto, the Grundrisse and Das Kapital, but most significantly in the Preface to A Contribution to the Critique of Political Economy.
Contemporary left communist theory holds that Lenin was mistaken in his definition of imperialism (although how grave his mistake was, and how much of his work on imperialism remains valid, varies from group to group) and that Rosa Luxemburg was basically correct on this question, thus accepting, like Lenin, that capitalism has entered a world epoch, but one from which no capitalist state can escape or stand apart. The theoretical framework of capitalism's decadence nevertheless varies between groups: while left communist organizations like the International Communist Current hold a basically Luxemburgist analysis that emphasizes the world market and its expansion, others hold views more in line with those of Vladimir Lenin, Nikolai Bukharin and, most importantly, Henryk Grossman and Paul Mattick, with an emphasis on monopolies and the falling rate of profit.
See also
Acedia
Anomie
Behavioral sink
Bread and circuses
Buraiha
Competence (human resources)
Kleptocracy
Late capitalism
Moral relativism
Privilege hazard
Societal collapse
Twilight of the Idols
The Decline of the West
Degenerate art
References
Further reading
Richard Gilman, Decadence: The Strange Life of an Epithet (1979).
Matei Calinescu, Five Faces of Modernity.
Mario Praz, The Romantic Agony (1930).
Jacques Barzun, From Dawn to Decadence (2000).
A. E. Carter, The Idea of Decadence in French Literature (1978).
Michael Murray, Jacques Barzun: Portrait of a Mind (2011).
External links
Collection of the articles of the International Communist Current on the Theory of Decadence
Chronology of Decadence
Decadence, symbolist, and the fin de siècle: a notebook
Art movements
Concepts in ethics
Modern art
Political theories
Paleoclimatology
Paleoclimatology (British spelling, palaeoclimatology) is the scientific study of climates predating the invention of meteorological instruments, when no direct measurement data were available. As instrumental records only span a tiny part of Earth's history, the reconstruction of ancient climate is important to understand natural variation and the evolution of the current climate.
Paleoclimatology uses a variety of proxy methods from Earth and life sciences to obtain data previously preserved within rocks, sediments, boreholes, ice sheets, tree rings, corals, shells, and microfossils. Combined with techniques to date the proxies, the paleoclimate records are used to determine the past states of Earth's atmosphere.
The scientific field of paleoclimatology came to maturity in the 20th century. Notable periods studied by paleoclimatologists include the frequent glaciations that Earth has undergone, rapid cooling events like the Younger Dryas, and the rapid warming during the Paleocene–Eocene Thermal Maximum. Studies of past changes in the environment and biodiversity often reflect on the current situation, specifically the impact of climate on mass extinctions and biotic recovery and current global warming.
History
Notions of a changing climate most likely evolved in ancient Egypt, Mesopotamia, the Indus Valley and China, where prolonged periods of droughts and floods were experienced. In the seventeenth century, Robert Hooke postulated that fossils of giant turtles found in Dorset could only be explained by a once warmer climate, which he thought could be explained by a shift in Earth's axis. Fossils were, at that time, often explained as a consequence of a biblical flood. Systematic observations of sunspots started by amateur astronomer Heinrich Schwabe in the early 19th century, starting a discussion of the Sun's influence on Earth's climate.
The scientific study of paleoclimatology began to take shape in the early 19th century, when discoveries about glaciations and natural changes in Earth's past climate helped to understand the greenhouse effect. It was only in the 20th century that paleoclimatology became a unified scientific field. Before, different aspects of Earth's climate history were studied by a variety of disciplines. At the end of the 20th century, the empirical research into Earth's ancient climates started to be combined with computer models of increasing complexity. A new objective also developed in this period: finding ancient analog climates that could provide information about current climate change.
Reconstructing ancient climates
Paleoclimatologists employ a wide variety of techniques to deduce ancient climates. The techniques used depend on which variable has to be reconstructed (temperature, precipitation, or something else) and how long ago the climate of interest occurred. For instance, the deep marine record, the source of most isotopic data, exists only on oceanic plates, which are eventually subducted; the oldest remaining material is about 200 million years old. Older sediments are also more prone to corruption by diagenesis, owing to the millions of years of disruption experienced by the rock formations, such as pressure, tectonic activity, and fluid flow. These factors often result in a lack of quality or quantity of data, so resolution and confidence in the data decrease with age.
Specific techniques used to make inferences about ancient climate conditions include the analysis of lake sediment cores and speleothems, which examine sediment layers and rock growth formations respectively, together with isotopic dating methods based on oxygen, carbon and uranium.
Proxies for climate
Direct quantitative measurements
Direct quantitative measurement is the most direct approach to understanding change in a climate: comparing recent data with older data gives a researcher a basic understanding of weather and climate changes within an area. The disadvantage of this method is that systematic climate records only began in the mid-1800s, leaving researchers with only about 150 years of data, which is of little help when trying to map the climate of an area 10,000 years ago. This is where more complex methods come in.
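A common way to make such comparisons concrete is to express recent observations as anomalies relative to a reference period. A minimal sketch in Python with invented temperatures (a real series would come from an instrumental archive; the baseline here stands in for a reference-period mean):

    # Hypothetical annual mean temperatures (degrees C).
    baseline = [9.0, 9.2, 8.8, 9.0]      # stand-in for a reference period
    recent = {2021: 10.1, 2022: 10.4}

    ref = sum(baseline) / len(baseline)  # reference-period mean (9.0)
    anomalies = {yr: round(t - ref, 2) for yr, t in recent.items()}
    print(anomalies)                     # {2021: 1.1, 2022: 1.4}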
Ice
Mountain glaciers and the polar ice caps/ice sheets provide much data in paleoclimatology. Ice-coring projects in the ice caps of Greenland and Antarctica have yielded data going back several hundred thousand years, over 800,000 years in the case of the EPICA project.
Air trapped within fallen snow becomes encased in tiny bubbles as the snow is compressed into ice in the glacier under the weight of later years' snow. The trapped air has proven a tremendously valuable source for direct measurement of the composition of air from the time the ice was formed.
Layering can be observed because of seasonal pauses in ice accumulation and can be used to establish chronology, associating specific depths of the core with ranges of time.
Changes in the layering thickness can be used to determine changes in precipitation or temperature.
Changes in the quantity of oxygen-18 in ice layers represent changes in average ocean surface temperature. Water molecules containing the heavier oxygen-18 isotope require more energy to evaporate than those containing the normal oxygen-16 isotope, so the ratio of O-18 to O-16 in precipitation rises with temperature; the ratio also depends on factors such as water salinity and the volume of water locked up in ice sheets. Various cycles in isotope ratios have been detected (the conventional delta notation for reporting such ratios is sketched after this list).
Pollen has been observed in the ice cores and can be used to understand which plants were present as the layer formed. Pollen is produced in abundance and its distribution is typically well understood. A pollen count for a specific layer can be produced by observing the total amount of pollen categorized by type (shape) in a controlled sample of that layer. Changes in plant frequency over time can be plotted through statistical analysis of pollen counts in the core. Knowing which plants were present leads to an understanding of precipitation and temperature, and types of fauna present. Palynology includes the study of pollen for these purposes.
Volcanic ash is contained in some layers, and can be used to establish the time of the layer's formation. Volcanic events distribute ash with a unique set of properties (shape and color of particles, chemical signature). Establishing the ash's source will give a time period to associate with the layer of ice.
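Oxygen-isotope results like those described above are conventionally reported in delta notation: the per-mil deviation of a sample's 18O/16O ratio from a standard. A minimal sketch, assuming the Vienna Standard Mean Ocean Water (VSMOW) reference ratio; the sample ratio is invented for illustration:

    R_VSMOW = 2005.2e-6  # 18O/16O ratio of Vienna Standard Mean Ocean Water

    def delta_18o(r_sample: float, r_standard: float = R_VSMOW) -> float:
        # delta-18O in per mil: 1000 * (R_sample / R_standard - 1)
        return 1000.0 * (r_sample / r_standard - 1.0)

    # A hypothetical polar-ice sample strongly depleted in 18O:
    print(round(delta_18o(1935.0e-6), 1))  # about -35.0 per mil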
A multinational consortium, the European Project for Ice Coring in Antarctica (EPICA), has drilled an ice core in Dome C on the East Antarctic ice sheet and retrieved ice from roughly 800,000 years ago. The international ice core community has, under the auspices of International Partnerships in Ice Core Sciences (IPICS), defined a priority project to obtain the oldest possible ice core record from Antarctica, an ice core record reaching back to or towards 1.5 million years ago.
Dendroclimatology
Climatic information can be obtained from changes in tree growth. Generally, trees respond to changes in climatic variables by speeding up or slowing down growth, which in turn is generally reflected in a greater or lesser thickness of the growth rings; different species, however, respond to changes in climatic variables in different ways. A tree-ring record is established by compiling information from many living trees in a specific area, comparing the number and thickness of growth rings, their boundaries, and matching ring patterns between trees.
The differences in thickness displayed in the growth rings can often indicate the quality of conditions in the environment and the fitness of the tree species evaluated. Because different species display different growth responses to changes in the climate, evaluating multiple trees within the same species, alongside trees of different species, allows for a more accurate analysis of the changing climate variables and of how they affected the surrounding species.
Older intact wood that has escaped decay can extend the time covered by the record by matching the ring depth changes to contemporary specimens. By using that method, some areas have tree-ring records dating back a few thousand years. Older wood not connected to a contemporary record can be dated generally with radiocarbon techniques. A tree-ring record can be used to produce information regarding precipitation, temperature, hydrology, and fire corresponding to a particular area.
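Crossdating, the pattern matching described above, amounts to sliding an undated ("floating") ring-width sequence along a dated master series and finding the offset at which the two correlate best. A minimal sketch with invented ring widths (real crossdating uses standardized ring-width indices and significance testing; statistics.correlation requires Python 3.10+):

    from statistics import correlation

    # Invented ring widths (mm): a dated master series and a floating sample.
    master = [1.2, 0.8, 1.5, 0.6, 1.1, 0.9, 1.4, 0.7, 1.0, 1.3]
    sample = [1.1, 0.9, 1.4, 0.7]  # by construction, a copy of master[4:8]

    def best_offset(master, sample):
        # Slide the sample along the master and return the offset whose
        # overlapping segment correlates best with the sample.
        best_off, best_r = 0, -2.0
        for off in range(len(master) - len(sample) + 1):
            r = correlation(master[off:off + len(sample)], sample)
            if r > best_r:
                best_off, best_r = off, r
        return best_off, best_r

    print(best_offset(master, sample))  # best offset is 4 (r close to 1.0)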
Sedimentary content
On a longer time scale, geologists must refer to the sedimentary record for data.
Sediments, sometimes lithified to form rock, may contain remnants of preserved vegetation, animals, plankton, or pollen, which may be characteristic of certain climatic zones.
Biomarker molecules such as the alkenones may yield information about their temperature of formation.
Chemical signatures, particularly the Mg/Ca ratio of calcite in foraminifera tests, can be used to reconstruct past temperature (a minimal inversion of a typical calibration is sketched after this list).
Isotopic ratios can provide further information. Specifically, the oxygen-18 record responds to changes in temperature and ice volume, while the carbon-13 record reflects a range of factors that are often difficult to disentangle.
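The Mg/Ca signal in this list typically rests on an exponential calibration of the form Mg/Ca = B * exp(A * T), which can be inverted for temperature. A minimal sketch; the constants below (A of about 0.09 and B of about 0.38, of the kind published for planktonic foraminifera) and the sample value are assumptions for illustration, not a specific endorsed calibration:

    import math

    def mg_ca_temperature(mg_ca: float, a: float = 0.09, b: float = 0.38) -> float:
        # Invert Mg/Ca = b * exp(a * T) to estimate calcification
        # temperature T in degrees Celsius.
        return math.log(mg_ca / b) / a

    # Hypothetical foraminiferal Mg/Ca of 3.0 mmol/mol:
    print(round(mg_ca_temperature(3.0), 1))  # about 23.0 degrees C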
Sedimentary facies
On a longer time scale, the rock record may show signs of sea level rise and fall, and features such as "fossilised" sand dunes can be identified. Scientists can get a grasp of long-term climate by studying sedimentary rock going back billions of years. The division of Earth history into separate periods is largely based on visible changes in sedimentary rock layers that demarcate major changes in conditions. Often, they include major shifts in climate.
Sclerochronology
Corals
Coral "rings" share evidence of growth similar to that of tree rings and can therefore be dated in similar ways. A primary difference lies in the environments and conditions that corals respond to, which include water temperature, freshwater influx, changes in pH, and wave disturbances. From there, specialized equipment such as the Advanced Very High Resolution Radiometer (AVHRR) instrument can be used to derive sea surface temperature and water salinity for the past few centuries. The δ18O of coralline red algae provides a useful proxy of combined sea surface temperature and sea surface salinity at high latitudes and in the tropics, where many traditional techniques are limited.
Landscapes and landforms
Within climatic geomorphology, one approach is to study relict landforms to infer ancient climates. Because it is often concerned with past climates, climatic geomorphology is sometimes considered a theme of historical geology. Evidence of these past climates can be found in the landforms they leave behind, such as glacial landforms (moraines, striations), desert features (dunes, desert pavements), and coastal landforms (marine terraces, beach ridges). Climatic geomorphology is of limited use for studying recent (Quaternary, Holocene) large climate changes, since these are seldom discernible in the geomorphological record.
Timing of proxies
Geochronology is the field concerned with determining the age of proxy archives. For recent archives such as tree rings and corals, individual annual rings can be counted and an exact year determined. Radiometric dating uses the properties of radioactive elements in proxies: in older material, more of the radioactive isotope will have decayed, so the proportions of different isotopes differ from those in newer material. One example of radiometric dating is radiocarbon dating. In the atmosphere, cosmic rays constantly convert nitrogen into the radioactive carbon isotope 14C. While a plant is alive, the carbon it takes up matches the atmospheric proportion; once it dies, the 14C is no longer replenished and begins to decay. The ratio of ordinary carbon (12C) to carbon-14 therefore indicates how long the material has been out of exchange with the atmosphere.
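The underlying arithmetic is simple exponential decay. The sketch below, a minimal illustration assuming only a measured fraction of surviving 14C and the conventional half-life of about 5,730 years, converts that fraction into an age; real radiocarbon dating additionally applies calibration curves to correct for past variations in atmospheric 14C production.

    import math

    HALF_LIFE_C14 = 5730.0  # years

    def radiocarbon_age(fraction_remaining: float) -> float:
        """Age in years implied by the fraction of original 14C remaining,
        via N/N0 = exp(-lambda * t) with lambda = ln(2) / half-life."""
        decay_constant = math.log(2) / HALF_LIFE_C14
        return -math.log(fraction_remaining) / decay_constant

    # A sample retaining 25% of its 14C is about two half-lives old (~11,460 years)
    print(radiocarbon_age(0.25))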
Notable climate events in Earth history
Knowledge of precise climatic events decreases as the record goes back in time, but some notable climate events are known:
Faint young Sun paradox (start)
Huronian glaciation (~2400 Mya; Earth completely covered in ice, probably due to the Great Oxygenation Event)
Later Neoproterozoic Snowball Earth (~600 Mya, precursor to the Cambrian Explosion)
Andean-Saharan glaciation (~450 Mya)
Carboniferous Rainforest Collapse (~300 Mya)
Permian–Triassic extinction event (251.9 Mya)
Oceanic anoxic events (~120 Mya, 93 Mya, and others)
Cretaceous–Paleogene extinction event (~66 Mya)
Paleocene–Eocene Thermal Maximum (~55 Mya)
Last Glacial Maximum (~23,000 BCE)
Younger Dryas/Big Freeze (~11,000 BCE)
Holocene climatic optimum (~7000–3000 BCE)
Extreme weather events of 535–536 (535–536 CE)
Medieval Warm Period (900–1300)
Little Ice Age (1300–1800)
Year Without a Summer (1816)
History of the atmosphere
Earliest atmosphere
The first atmosphere would have consisted of gases in the solar nebula, primarily hydrogen. In addition, there would probably have been simple hydrides such as those now found in gas giants like Jupiter and Saturn, notably water vapor, methane, and ammonia. As the solar nebula dissipated, the gases would have escaped, partly driven off by the solar wind.
Second atmosphere
The next atmosphere, consisting largely of nitrogen, carbon dioxide, and inert gases, was produced by outgassing from volcanism, supplemented by gases produced during the late heavy bombardment of Earth by huge asteroids. A major part of the carbon dioxide emitted soon dissolved in water and built up carbonate sediments.
Water-related sediments have been found dating from as early as 3.8 billion years ago. About 3.4 billion years ago, nitrogen formed the major part of the then stable "second atmosphere". The influence of life has to be taken into account rather early in the history of the atmosphere, because hints of early life forms have been dated to as early as 3.5 to 4.3 billion years ago. That early life persisted despite the roughly 30% lower radiance of the early Sun (compared to today) is part of what has been described as the "faint young Sun paradox".
The geological record, however, shows a continually relatively warm surface during the complete early temperature record of Earth, with the exception of one cold glacial phase about 2.4 billion years ago. In the late Archean eon, an oxygen-containing atmosphere began to develop, apparently produced by photosynthesizing cyanobacteria (see Great Oxygenation Event), which have been found as stromatolite fossils from 2.7 billion years ago. The early carbon isotope ratios were very much in line with what is found today, suggesting that the fundamental features of the carbon cycle were established as early as 4 billion years ago.
Third atmosphere
The constant rearrangement of continents by plate tectonics influences the long-term evolution of the atmosphere by transferring carbon dioxide to and from large continental carbonate stores. Free oxygen did not exist in the atmosphere until about 2.4 billion years ago, during the Great Oxygenation Event, and its appearance is indicated by the end of the banded iron formations. Until then, any oxygen produced by photosynthesis was consumed by oxidation of reduced materials, notably iron. Molecules of free oxygen did not start to accumulate in the atmosphere until the rate of production of oxygen began to exceed the availability of reducing materials. That point was a shift from a reducing atmosphere to an oxidizing atmosphere. O2 showed major variations until reaching a steady state of more than 15% by the end of the Precambrian. The following time span was the Phanerozoic eon, during which oxygen-breathing metazoan life forms began to appear.
The amount of oxygen in the atmosphere has fluctuated over the last 600 million years, reaching a peak of 35% during the Carboniferous period, significantly higher than today's 21%. Two main processes govern changes in the atmosphere: plants use carbon dioxide from the atmosphere and release oxygen, while the breakdown of pyrite and volcanic eruptions release sulfur into the atmosphere, which oxidizes and hence reduces the amount of oxygen in the atmosphere. However, volcanic eruptions also release carbon dioxide, which plants can convert to oxygen. The exact cause of the variation in the amount of atmospheric oxygen is not known. Periods with much oxygen in the atmosphere are associated with the rapid development of animals; today's atmosphere contains 21% oxygen, high enough for this rapid development.
Climate during geological ages
The Huronian glaciation is the first known glaciation in Earth's history and lasted from 2400 to 2100 million years ago.
The Cryogenian glaciation lasted from 720 to 635 million years ago.
The Andean-Saharan glaciation lasted from 450 to 420 million years ago.
The Karoo glaciation lasted from 360 to 260 million years ago.
The Quaternary glaciation is the current glaciation period and began 2.58 million years ago.
In 2020 scientists published a continuous, high-fidelity record of variations in Earth's climate during the past 66 million years and identified four climate states, separated by transitions that include changing greenhouse gas levels and polar ice sheet volumes. They integrated data from various sources. The warmest climate state since the time of the dinosaur extinction, "Hothouse", lasted from 56 Mya to 47 Mya and was ~14 °C warmer than average modern temperatures.
Precambrian climate
The Precambrian spans from the formation of Earth about 4.6 billion years (Ga) ago to 542 million years ago. It can be split into two eons, the Archean and the Proterozoic, which can be further subdivided into eras. Reconstructing the Precambrian climate is difficult for various reasons, including the small number of reliable indicators and a generally poorly preserved and sparse fossil record (especially compared with the Phanerozoic eon). Despite these issues, there is evidence for a number of major climate events throughout the Precambrian: the Great Oxygenation Event, which started around 2.3 Ga ago (the beginning of the Proterozoic), is indicated by biomarkers that demonstrate the appearance of photosynthetic organisms. Due to the rising levels of oxygen in the atmosphere from the GOE, CH4 levels fell rapidly, cooling the atmosphere and causing the Huronian glaciation. For about 1 Ga after the glaciation (2–0.8 Ga ago), the Earth likely experienced warmer temperatures, indicated by microfossils of photosynthetic eukaryotes and by oxygen levels between 5 and 18% of the Earth's current level. At the end of the Proterozoic, there is evidence of global glaciation events of varying severity, causing a "Snowball Earth". Snowball Earth is supported by indicators such as glacial deposits, significant continental erosion called the Great Unconformity, and sedimentary rocks called cap carbonates that form after a deglaciation episode.
Phanerozoic climate
Major climate drivers in the preindustrial ages were variations of the Sun, volcanic ash and exhalations, relative movements of the Earth with respect to the Sun, and tectonically induced effects on major sea currents, watersheds, and ocean oscillations. In the early Phanerozoic, increased atmospheric carbon dioxide concentrations have been linked to driving or amplifying increased global temperatures. Royer et al. (2004) calculated a climate sensitivity for the rest of the Phanerozoic similar to today's range of values.
The difference in global mean temperatures between a fully glacial Earth and an ice-free Earth is estimated at 10 °C, though far larger changes would be observed at high latitudes and smaller ones at low latitudes. One requirement for the development of large-scale ice sheets seems to be the arrangement of continental land masses at or near the poles. The constant rearrangement of continents by plate tectonics can also shape long-term climate evolution. However, the presence or absence of land masses at the poles is not sufficient to guarantee glaciations or exclude polar ice caps. Evidence exists of past warm periods in Earth's climate when polar land masses similar to Antarctica were home to deciduous forests rather than ice sheets.
The relatively warm local minimum between Jurassic and Cretaceous goes along with an increase of subduction and mid-ocean ridge volcanism due to the breakup of the Pangea supercontinent.
Superimposed on the long-term evolution between hot and cold climates have been many short-term fluctuations in climate similar to, and sometimes more severe than, the varying glacial and interglacial states of the present ice age. Some of the most severe fluctuations, such as the Paleocene-Eocene Thermal Maximum, may be related to rapid climate changes due to sudden collapses of natural methane clathrate reservoirs in the oceans.
A single event of induced severe climate change following a meteorite impact has similarly been proposed as the reason for the Cretaceous–Paleogene extinction event. Other major thresholds are the Permian–Triassic and Ordovician–Silurian extinction events, for which various causes have been suggested.
Quaternary climate
The Quaternary geological period includes the current climate. There has been a cycle of ice ages for the past 2.2–2.1 million years (starting before the Quaternary in the late Neogene Period).
Records of these glacial cycles show a strong 120,000-year periodicity and a striking asymmetry in the temperature curves. This asymmetry is believed to result from complex interactions of feedback mechanisms: ice ages deepen by progressive steps, while the recovery to interglacial conditions occurs in one big step. Reconstructions of temperature change over the past 12,000 years draw on various sources, typically combined into a single averaged curve.
Climate forcings
Climate forcing is the difference between the radiant energy (sunlight) received by the Earth and the outgoing longwave radiation back to space. Such radiative forcing is quantified at the tropopause, in units of watts per square meter of the Earth's surface. Depending on the radiative balance of incoming and outgoing energy, the Earth either warms up or cools down. Changes in Earth's radiative balance arise from changes in solar insolation and in the concentrations of greenhouse gases and aerosols. Climate change may be due to internal processes in Earth's spheres and/or to external forcings.
One example of how this is applied in climatology is analyzing how varying concentrations of CO2 affect the overall climate. This is done by using various proxies to estimate past greenhouse gas concentrations and comparing them with those of the present day, allowing researchers to assess the role of these gases in the progression of climate change throughout Earth's history.
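As a concrete illustration of the units involved, a widely used simplified logarithmic fit, ΔF ≈ 5.35 ln(C/C0) W/m², relates a change in CO2 concentration to a radiative forcing. The sketch below applies it with an assumed preindustrial reference of 278 ppm; it is an approximation for quick comparisons, not a substitute for full radiative transfer calculations.

    import math

    def co2_radiative_forcing(c_ppm: float, c0_ppm: float = 278.0) -> float:
        """Approximate radiative forcing (W/m^2) from a CO2 concentration
        change, using the simplified fit dF = 5.35 * ln(C / C0)."""
        return 5.35 * math.log(c_ppm / c0_ppm)

    # Doubling CO2 relative to the preindustrial reference gives ~3.7 W/m^2
    print(co2_radiative_forcing(2 * 278.0))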
Internal processes and forcings
The Earth's climate system involves the atmosphere, biosphere, cryosphere, hydrosphere, and lithosphere, and the sum of the processes in these spheres is what affects the climate. Greenhouse gases act as the internal forcing of the climate system. Particular interest in climate science and paleoclimatology focuses on the study of Earth's climate sensitivity in response to the sum of forcings. Analyzing these forcings enables scientists to make broad estimates of the behaviour of the Earth's climate system, including long-term climate variability (eccentricity, obliquity, precession), feedback mechanisms (such as the ice–albedo effect), and anthropogenic influence.
Examples:
Thermohaline circulation (Hydrosphere)
Life (Biosphere)
External forcings
Milankovitch cycles govern the Earth's distance from, and orientation to, the Sun, and hence the solar insolation: the total amount of solar radiation received by Earth (see the energy-balance sketch after this list).
Volcanic eruptions, by contrast, are considered an internal forcing.
Human changes to the composition of the atmosphere or to land use.
Human activities causing anthropogenic greenhouse gas emissions leading to global warming and associated climate changes.
Large asteroid impacts with cataclysmic effects on Earth's climate are also considered external forcings.
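To connect insolation to temperature at the crudest level, a zeroth-order energy balance equates absorbed sunlight with emitted thermal radiation. The sketch below is a minimal illustration under standard textbook assumptions (present-day solar constant of about 1361 W/m², planetary albedo of about 0.3, no greenhouse effect); it yields the familiar effective temperature of roughly 255 K and shows how a 30% fainter early Sun would lower it.

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0        # approximate present-day solar constant, W m^-2
    ALBEDO = 0.3       # approximate planetary albedo

    def equilibrium_temperature(solar_constant: float, albedo: float) -> float:
        """Effective radiating temperature (K) of a planet in energy balance:
        absorbed sunlight, spread over the sphere, equals emitted radiation."""
        absorbed = solar_constant * (1.0 - albedo) / 4.0
        return (absorbed / SIGMA) ** 0.25

    print(equilibrium_temperature(S0, ALBEDO))        # ~255 K today
    print(equilibrium_temperature(0.7 * S0, ALBEDO))  # faint young Sun, ~233 K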
Mechanisms
On timescales of millions of years, the uplift of mountain ranges, the subsequent weathering of rocks and soils, and the subduction of tectonic plates are an important part of the carbon cycle. Weathering sequesters CO2 through the reaction of minerals with chemicals (especially silicate weathering with carbonic acid), thereby removing CO2 from the atmosphere and reducing the radiative forcing. The opposite effect is volcanism, responsible for the natural greenhouse effect by emitting CO2 into the atmosphere, thus affecting glaciation (ice age) cycles. Jim Hansen suggested that humans now emit CO2 10,000 times faster than natural processes have done in the past.
Ice sheet dynamics and continental positions (and linked vegetation changes) have been important factors in the long-term evolution of the Earth's climate. There is also a close correlation between CO2 and temperature, with CO2 exerting a strong control over global temperatures throughout Earth's history.
See also
References
Notes
External links
NOAA Paleoclimatology
Short history of climate
Climate history
Postmodernity

Postmodernity (post-modernity or the postmodern condition) is the economic or cultural state or condition of society which is said to exist after modernity. Some schools of thought hold that modernity ended in the late 20th century, in the 1980s or early 1990s, and was replaced by postmodernity, while others would extend modernity to cover the developments denoted by postmodernity. The postmodern condition is sometimes characterized as a culture stripped of its capacity to function in any linear or autonomous state, a regressive isolationism, as opposed to the progressive mindset of modernism.
Postmodernity can mean a personal response to a postmodern society, the conditions in a society which make it postmodern, or the state of being associated with a postmodern society, as well as a historical epoch. In most contexts it should be distinguished from postmodernism, the adoption of postmodern philosophies or traits in the arts, culture and society. In fact, today's historical perspectives on the developments of postmodern art (postmodernism) and postmodern society (postmodernity) can best be described as two umbrella terms for processes engaged in an ongoing dialectical relationship (as in post-postmodernism), the result of which is the evolving culture of the contemporary world.
Some commentators deny that modernity ended, and consider the post-WWII era to be a continuation of modernity, which they refer to as late modernity.
Uses of the term
Postmodernity is the state or condition of being postmodern – after or in reaction to that which is modern, as in postmodern art (see postmodernism). Modernity is defined as a period or condition loosely identified with the Progressive Era, the Industrial Revolution, or the Enlightenment. In philosophy and critical theory postmodernity refers to the state or condition of society which is said to exist after modernity, a historical condition that marks the reasons for the end of modernity. This usage is ascribed to the philosophers Jean-François Lyotard and Jean Baudrillard.
One "project" of modernity is said by Jürgen Habermas to have been the fostering of progress by incorporating principles of rationality and hierarchy into public and artistic life. (See also post-industrial, Information Age) Lyotard understood modernity as a cultural condition characterized by constant change in the pursuit of progress. Postmodernity then represents the culmination of this process where constant change has become the status quo and the notion of progress obsolete. Following Ludwig Wittgenstein's critique of the possibility of absolute and total knowledge, Lyotard further argued that the various metanarratives of progress such as positivist science, Marxism, and structuralism were defunct as methods of achieving progress.
The literary critic Fredric Jameson and the geographer David Harvey have identified postmodernity with "late capitalism" or "flexible accumulation", a stage of capitalism following finance capitalism, characterised by highly mobile labor and capital and what Harvey called "time and space compression". They suggest that this coincides with the breakdown of the Bretton Woods system which, they believe, defined the economic order following the Second World War. (See also consumerism, critical theory.) Other academics, such as the archaeologist Artur Ribeiro, also identify postmodernity with late capitalism; Ribeiro, however, places the start of modernity at the beginning of the Bretton Woods system.
Those who generally view modernity as obsolete or an outright failure, a flaw in humanity's evolution leading to disasters like Auschwitz and Hiroshima, see postmodernity as a positive development. Other philosophers, particularly those seeing themselves as within The Modern Project, see the state of postmodernity as a negative consequence of holding postmodernist ideas. For example, Jürgen Habermas and others contend that postmodernity represents a resurgence of long running Counter-Enlightenment ideas, that the modern project is not finished and that universality cannot be so lightly dispensed with. Postmodernity, the consequence of holding postmodern ideas, is generally a negative term in this context.
Postmodernism
Postmodernity is a condition or a state of being associated with changes to institutions and creations and with social and political results and innovations, globally but especially in the West since the 1950s, whereas postmodernism is an aesthetic, literary, political or social philosophy, the "cultural and intellectual phenomenon", especially since the 1920s' new movements in the arts. Both of these terms are used by philosophers, social scientists and social critics to refer to aspects of contemporary culture, economics and society that are the result of features of late 20th century and early 21st century life, including the fragmentation of authority and the commoditization of knowledge (see "Modernity").
The relationship between postmodernity and critical theory, sociology and philosophy is fiercely contested. The terms "postmodernity" and "postmodernism" are often hard to distinguish, the former being often the result of the latter. The period has had diverse political ramifications: its "anti-ideological ideas" appear to have been associated with the feminist movement, racial equality movements, LGBT movements, most forms of late 20th century anarchism and even the peace movement as well as various hybrids of these in the current anti-globalization movement. Though none of these institutions entirely embraces all aspects of the postmodern movement in its most concentrated definition they all reflect, or borrow from, some of its core ideas.
History
Some authors, such as Lyotard and Baudrillard, believe that modernity ended in the late 20th century and thus have defined a period subsequent to modernity, namely postmodernity, while others, such as Zygmunt Bauman and Anthony Giddens, would extend modernity to cover the developments denoted by postmodernity. Others still contend that modernity ended with the Victorian Age at the turn of the 20th century.
Postmodernity has gone through two relatively distinct phases: the first beginning in the late 1940s and 1950s and ending with the Cold War (when analog media with limited bandwidth encouraged a few, authoritative media channels), and the second beginning at the end of the Cold War (marked by the spread of cable television and "new media" based on digital means of information dissemination and broadcast).
The first phase of postmodernity overlaps the end of modernity and is part of the modern period (see lumpers/splitters, periodization). Television became the primary news source, manufacturing decreased in importance in the economies of Western Europe and the United States, but trade volumes increased within the developed core. In 1967–1969 a crucial cultural explosion took place within the developed world as the baby boom generation, which had grown up with postmodernity as its fundamental experience of society, demanded entrance into the political, cultural and educational power structure. A series of demonstrations and acts of rebellion, ranging from nonviolent cultural protest to violent acts of terrorism, represented the opposition of the young to the policies and perspectives of the previous age. Opposition to the Algerian War and the Vietnam War, to laws allowing or encouraging racial segregation and to laws which overtly discriminated against women and restricted access to divorce, increased use of marijuana and psychedelics, the emergence of pop cultural styles of music and drama, including rock music and the ubiquity of stereo, television and radio, helped make these changes visible in the broader cultural context. This period is associated with the work of Marshall McLuhan, a philosopher who focused on the results of living in a media culture and argued that participation in a mass media culture both overshadows actual content disseminated and is liberating, because it loosens the authority of local social normative standards.
The second phase of postmodernity is "digitality" – the increasing power of personal and digital means of communication including fax machines, modems, cable and high speed internet, which has altered the condition of postmodernity dramatically: digital production of information allows individuals to manipulate virtually every aspect of the media environment. This has brought producers into conflict with consumers over intellectual capital and intellectual property and led to the creation of a new economy whose supporters argue that the dramatic fall in information costs will alter society fundamentally.
Digitality, or what Esther Dyson referred to as "being digital", emerged as a separate condition from postmodernity. The ability to manipulate items of popular culture, the World Wide Web, the use of search engines to index knowledge, and telecommunications were producing a "convergence" marked by the rise of "participatory culture" in the words of Henry Jenkins.
One demarcation point of this era is the liberalization of China in the early 1980s and the collapse of the Soviet Union in 1991. Francis Fukuyama wrote "The End of History?" in 1989 in anticipation of the Fall of the Berlin Wall. He predicted that the question of political philosophy had been answered, that large-scale wars over fundamental values would no longer arise, since "all prior contradictions are resolved and all human needs satisfied". This is a kind of 'endism' also taken up by Arthur Danto, who in 1964 claimed that Andy Warhol's Brillo boxes asked the right question of art and hence that art had ended.
Descriptions
Distinctions in philosophy and critical theory
The debate on postmodernity has two distinct elements that are often confused; (1) the nature of contemporary society and (2) the nature of the critique of contemporary society. The first of these elements is concerned with the nature of changes that took place during the late 20th century. There are three principal analyses. Theorists such as Alex Callinicos and Craig Calhoun offer a conservative position on the nature of contemporary society, downplaying the significance and extent of socio-economic changes and emphasizing a continuity with the past. Second a range of theorists have tried to analyze the present as a development of the "modern" project into a second, distinct phase that is nevertheless still "modernity": this has been termed the "second" or "risk" society by Ulrich Beck, "late" or "high" modernity by Giddens, "liquid" modernity by Bauman, and the "network" society by Manuel Castells. Third are those who argue that contemporary society has moved into a literally post-modern phase distinct from modernity. The most prominent proponents of this position are Lyotard and Baudrillard.
Another set of issues concerns the nature of critique, often replaying debates over (what can be crudely termed) universalism and relativism, where modernism is seen to represent the former and postmodernity the latter. Seyla Benhabib and Judith Butler pursue this debate in relation to feminist politics, Benhabib arguing that postmodern critique comprises three main elements; an anti-foundationalist concept of the subject and identity, the death of history and of notions of teleology and progress, and the death of metaphysics defined as the search for objective truth. Benhabib argues forcefully against these critical positions, holding that they undermine the bases upon which feminist politics can be founded, removing the possibility of agency, the sense of self-hood and the appropriation of women's history in the name of an emancipated future. The denial of normative ideals removes the possibility for utopia, central for ethical thinking and democratic action.
Butler responds to Benhabib by arguing that her use of postmodernism is an expression of a wider paranoia over anti-foundationalist philosophy, in particular, post-structuralism.
Butler uses the debate over the nature of the post-modernist critique to demonstrate how philosophy is implicated in power relationships and defends poststructuralist critique by arguing that the critique of the subject itself is the beginning of analysis, not the end, because the first task of enquiry is the questioning of accepted "universal" and "objective" norms.
The Benhabib-Butler debate demonstrates that there is no simple definition of a postmodern theorist as the very definition of postmodernity itself is contested. Michel Foucault rejected the label of postmodernism explicitly in interviews yet is seen by many, such as Benhabib, as advocating a form of critique that is "postmodern" in that it breaks with utopian and transcendental "modern" critiques by calling universal norms of the Enlightenment into question. Giddens rejects this characterisation of "modern critique", pointing out that a critique of Enlightenment universals was central to philosophers of the modern period, most notably Nietzsche.
Postmodern society
Jameson views a number of phenomena as distinguishing postmodernity from modernity. He speaks of "a new kind of superficiality" or "depthlessness" in which models that once explained people and things in terms of an "inside" and an "outside" (such as hermeneutics, the dialectic, Freudian repression, the existentialist distinction between authenticity and inauthenticity, and the semiotic distinction of signifier and signified) have been rejected.
Second is a rejection of the modernist "Utopian gesture", evident in Van Gogh, of the transformation through art of misery into beauty, whereas in the postmodernism movement the object world has undergone a "fundamental mutation" so that it has "now become a set of texts or simulacra". Whereas modernist art sought to redeem and sacralize the world, to give life to the world (we might say, following Graff, to give the world back the enchantment that science and the decline of religion had taken away from it), postmodernist art bestows upon the world a "deathly quality… whose glacéd X-ray elegance mortifies the reified eye of the viewer in a way that would seem to have nothing to do with death or the death obsession or the death anxiety on the level of content" (ibid.). Graff sees the origins of this transformative mission of art in an attempted substitution of art for religion in giving meaning to the world that the rise of science and Enlightenment rationality had removed; in the postmodern period, however, this is seen as futile.
The third feature of the postmodern age that Jameson identifies is the "waning of affect" – not that all emotion has disappeared from the postmodern age but that it lacks a particular kind of emotion such as that found in "Rimbaud's magical flowers 'that look back at you'". He notes that "pastiche eclipses parody" as "the increasing unavailability of the personal style" leads to pastiche becoming a universal practice.
Jameson argues that distance "has been abolished" in postmodernity, that we "are submerged in its henceforth filled and suffused volumes to the point where our now postmodern bodies are bereft of spatial co-ordinates". This "new global space" constitutes postmodernity's "moment of truth". The various other features of the postmodern that he identifies "can all now be seen as themselves partial (yet constitutive) aspects of the same general spatial object". The postmodern era has seen a change in the social function of culture. He identifies culture in the modern age as having had a property of "semi-autonomy", with an "existence… above the practical world of the existent" but, in the postmodern age, culture has been deprived of this autonomy, the cultural has expanded to consume the entire social realm so that all becomes "cultural". "Critical distance", the assumption that culture can be positioned outside "the massive Being of capital" upon which left-wing theories of cultural politics are dependent, has become outmoded. The "prodigious new expansion of multinational capital ends up penetrating and colonizing those very pre-capitalist enclaves (Nature and the Unconscious) which offered extraterritorial and Archimedean footholds for critical effectivity".
Social sciences
Postmodern sociology can be said to focus on conditions of life which became increasingly prevalent in the late 20th century in the most industrialized nations, including the ubiquity of mass media and mass production, the rise of a global economy and a shift from manufacturing to service economies. Jameson and Harvey described it as consumerism, where manufacturing, distribution and dissemination have become exceptionally inexpensive but social connectedness and community have become rarer. Other thinkers assert that postmodernity is the natural reaction to mass broadcasting in a society conditioned to mass production and mass politics. The work of Alasdair MacIntyre informs the versions of postmodernism elaborated by such authors as Murphy (2003) and Bielskis (2005), for whom MacIntyre's postmodern revision of Aristotelianism poses a challenge to the kind of consumerist ideology that now promotes capital accumulation.
The sociological view of postmodernity ascribes it to more rapid transportation, wider communication and the ability to abandon standardization of mass production, leading to a system which values a wider range of capital than previously and allows value to be stored in a greater variety of forms. Harvey argues that postmodernity is an escape from "Fordism", a term coined by Antonio Gramsci to describe the mode of industrial regulation and accumulation which prevailed during the Keynesian era of economic policy in OECD countries from the early 1930s to the 1970s. Fordism for Harvey is associated with Keynesianism in that the first concerns methods of production and capital-labor relations while the latter concerns economic policy and regulation. Post-Fordism is therefore one of the basic aspects of postmodernity from Harvey's point of view.
Artifacts of postmodernity include the dominance of television and popular culture, the wide accessibility of information and mass telecommunications. Postmodernity also exhibits a greater resistance to making sacrifices in the name of progress discernible in environmentalism and the growing importance of the anti-war movement. Postmodernity in the industrialised core is marked by increasing focus on civil rights and equal opportunity as well as movements such as feminism and multiculturalism and the backlash against these movements. The postmodern political sphere is marked by multiple arenas and possibilities of citizenship and political action concerning various forms of struggle against oppression or alienation (in collectives defined by sex or ethnicity) while the modernist political arena remains restricted to class struggle.
Theorists such as Michel Maffesoli believe that postmodernity is corroding the circumstances that provide for its subsistence and will eventually result in a decline of individualism and the birth of a new neo-Tribal era.
According to theories of postmodernity, economic and technological conditions of our age have given rise to a decentralized, media-dominated society in which ideas are only simulacra, inter-referential representations and copies of each other with no real, original, stable or objective source of communication and meaning. Globalization, brought on by innovations in communication, manufacturing and transportation is often cited as one force which has driven the decentralized modern life, creating a culturally pluralistic and interconnected global society lacking any single dominant center of political power, communication or intellectual production. The postmodernist view is that intersubjective, not objective, knowledge will be the dominant form of discourse under such conditions and that ubiquity of dissemination fundamentally alters the relationship between reader and that which is read, between observer and the observed, between those who consume and those who produce.
Postmodernity as a shift of epistemology
Another conception of postmodernity is as an epistemological shift. This perspective suggests that the way people communicate and justify knowledge (i.e. epistemology) changes in conjunction with other societal changes, that the cultural and technological changes of the 1960s and 1970s included such a shift, and that this shift should be denoted as from modernity to postmodernity. [See French (2016), French & Ehrman (2016), or Sørensen (2007)].
Criticisms
Criticisms of the postmodern condition can broadly be put into four categories: criticisms of postmodernity from the perspective of those who reject modernism and its offshoots; criticisms from supporters of modernism who believe that postmodernity lacks crucial characteristics of the modern project; criticisms from within postmodernity that seek reform or change based on an understanding of postmodernism; and criticisms from those who believe that postmodernity is a passing, not a growing, phase in social organization.
See also
Notes
References
Works cited
General sources
Anderson, Perry (1998). The Origins of Postmodernity. London: Verso.
Deely, John (2001). Four Ages of Understanding: The First Postmodern Survey of Philosophy from ancient Times to the Turn of the Twenty-first Century. Toronto: University of Toronto Press.
Guénon, René (1927). The Crisis of the Modern World. Hillsdale: Sophia Perennis.
Guénon, René (1945). The Reign of Quantity & the Signs of the Times. Hillsdale: Sophia Perennis.
Harvey, David (1990). The Condition of Postmodernity. An enquiry into the origins of cultural change. Oxford: Blackwell.
Hassan, Ihab (2000). From Postmodernism to Postmodernity: The Local/Global Context. Text available online.
Lyotard, Jean-François (1979). La Condition postmoderne: Rapport sur le savoir (The Postmodern Condition: A Report on Knowledge). Lyotard (1924–1998) was a French philosopher and literary theorist well known for his embrace of postmodernism after the late 1970s.
Willard, Charles Arthur (1996). Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy. Chicago: University of Chicago Press.
Further reading
Ballesteros, Jesús, 1992. Postmodernity: Decadence or Resistance, Pamplona, Emise.
Baudrillard, J. 1984. Simulations. New York: Semiotext(e).
Berman, Marshall. 1982. All That is Solid Melts into Air. The Experience of Modernity. London: Verso.
Bielskis, Andrius. 2005. Towards a Postmodern Understanding of the Political. Houndmills, New York: Palgrave Macmillan.
Chan, Evans. 2001. "Against Postmodernism, etcetera – A Conversation with Susan Sontag" in Postmodern Culture, vol. 12 no. 1, Baltimore: Johns Hopkins University Press.
Docherty, Thomas (ed.). 1993. Postmodernism: A Reader. New York: Harvester Wheatsheaf.
Docker, John. 1994. Postmodernism and Popular Culture: A Cultural History. Cambridge: Cambridge University Press.
Eagleton, Terry. "Capitalism, Modernism and Postmodernism". Against the Grain: Essays 1975–1985. London: Verso, 1986. pp. 131–47.
Foster, H. 1983. The Anti-Aesthetic. USA: Bay Press.
Fuery, Patrick and Mansfield, Nick. 2001. Cultural Studies and Critical Theory. Melbourne: Oxford University Press.
Graff, Gerald. 1973. "The Myth of the Postmodernist Breakthrough" in Triquarterly, no. 26, Winter 1973, pp. 383–417.
Grebowicz, Margret. 2007. Gender After Lyotard. NY: Suny Press.
Grenz, Stanley J. 1996. A Primer on Postmodernism. Grand Rapids: Eerdmans
Habermas, Jürgen "Modernity – An Incomplete Project" (in Docherty ibid)
Habermas, Jürgen. 1981. trans. by Seyla Ben-Habib. "Modernity versus Postmodernity". in V Taylor & C Winquist; originally published in New German Critique, no. 22, Winter 1981, pp. 3–14.
Jencks, Charles. 1986. What is Postmodernism? New York: St. Martin's Press, and London: Academy Editions.
Joyce, James. 1964. Ulysses. London: Bodley Head.
Lipovetsky, Gilles. 2005. Hypermodern Times. Cornwall: Polity Press.
Lyotard, J. 1984. The Postmodern Condition: A report on knowledge. Manchester: Manchester University Press
Mansfield, N. 2000. Subjectivity: Theories of the self from Freud to Harroway. Sydney: Allen & Unwin.
McHale, Brian. 1990. "Constructing (post) modernism: The case of Ulysses" in Style, vol. 24 no. 1, pp. 1–21, DeKalb, Illinois: Northern Illinois University English Department.
Murphy, Mark C. (ed.) 2003. Alasdair MacIntyre. Cambridge: Cambridge University Press.
Palmeri, Frank. 2001. "Other than Postmodern? – Foucault, Pynchon, Hybridity, Ethics" in Postmodern Culture, vol. 12 no. 1, Baltimore: Johns Hopkins University Press.
Pinkney, Tony. 1989. "Modernism and Cultural Theory", editor's introduction to Williams, Raymond. The Politics of Modernism: Against the New Conformists. London: Verso.
Taylor, V & Winquist, (ed). 1998. Postmodernism: Critical concepts (vol. 1–2). London: Routledge.
Wheale, N. 1995. The Postmodern Arts: An introductory reader. New York: Routledge.
External links
Martin Irvine on Postmodernism and Postmodernity in contrast to Modernism and Modernity
Postmodern warfare, by Philip Hammond
Mikhail Epstein on "postmodernism's position in postmodernity"
Extensive list of names related to postmodernism and postmodernity
On the distinction of postmodernity from postmodernism, by the Egyptian-American critic Ihab Hassan
David Harvey, The Condition of postmodernity
Decadeology Wiki, Postmodern article
Postmodern theory
Historiography
Western culture
Historical eras
20th century
21st century
Cynicism
Relativism
Modernity
Design history

Design history is the study of objects of design in their historical and stylistic contexts. Broadly defined, these contexts include the social, the cultural, the economic, the political, the technical and the aesthetic. Design history has as its objects of study all designed objects, including those of architecture, fashion, crafts, interiors, textiles, graphic design, industrial design and product design. Design theorists revisit historical techniques and draw on them to develop more sophisticated techniques of design, so that design history also serves as a tool for improving future design practice.
Design history has had to incorporate criticism of the 'heroic' structure of its discipline in response to the establishment of material culture, much as art history has had to respond to visual culture (although visual culture has been able to broaden the subject area of art history through the incorporation of the televisual, film and new media). Design history has done this by shifting its focus towards the acts of production and consumption. This focus on production and consumption in design history was a result of the modernist approach designers began to take, which advanced in the 19th century. Pre-capitalism and feudalism were the main drivers of modernism; they facilitated stylistic features and aesthetics which were exclusive because of the influence of small wealthy elites.
Design history as a component of British practice-based courses
Design history also exists as a component of many practice-based courses.
The teaching and study of design history within art and design programs in Britain are one of the results of the National Advisory Council on Art Education in the 1960s. Among its aims was making art and design education a legitimate academic activity, to which ends a historical perspective was introduced. This necessitated the employment or 'buying in' of specialists from art history disciplines, leading to a particular style of delivery: "Art historians taught in the only way that art historians knew how to teach; they switched off the lights, turned on the slide projector, showed slides of art and design objects, discussed and evaluated them and asked (art and design) students to write essays – according to the scholarly conventions of academia".
The most obvious effect of the traditional approach is a view of design history as sequential, in which X begat Y and Y begat Z. This has pedagogical implications: the realization that assessment requires fact-based regurgitation of received knowledge leads students to ignore discussion of the situations surrounding a design's creation and reception and to focus instead on simple facts such as who designed what and when.
This 'heroic/aesthetic' view – the idea that there are a few great designers who should be studied and revered unquestioningly – arguably instills an unrealistic view of the design profession. Although the design industry has been complicit in promoting the heroic view of history, the establishment by the UK government of Creative & Cultural Skills has led to calls for design courses to be made less 'academic' and more attuned to the 'needs' of the industry. Design history, as a component of design courses, is under increasing threat in the UK at least, and it has been argued that its survival depends on an increased focus on the study of the processes and effects of design rather than the lives of designers themselves.
Ultimately it appears that design history for practice-based courses is rapidly becoming a branch of social and cultural studies, leaving behind its art historical roots. This has led to a great deal of debate as the two approaches forge distinct pedagogical approaches and philosophies.
Debates over the merits of different approaches to teaching design history on practice-based courses
The debate over the best way to approach the teaching of design history to practice-based students is often heated. Notably, the biggest push to adopt a 'realistic' approach (i.e. non-hero-based, and analysing the production as well as the consumption of design that would otherwise be viewed as ephemeral) comes from teachers delivering these programmes, while critics are predominantly those who teach design history from a more diverse and geographical standpoint.
The biggest criticism of the 'realistic' approach appears to be that it imposes anonymity on designers, while the counter argument is that the vast majority of designers are anonymous and that it is the uses and users of design that are more important.
The research literature suggests that, contrary to critics' predictions of the death of design history, this realistic approach is beneficial. Baldwin and McLean at the University of Brighton (now at the University of Dundee and Edinburgh College of Art respectively) reported attendance figures for courses using this model rising dramatically, and improved interest in the subject, as did Rain at Central St. Martin's. This compares with the often-reported low attendance and low grades of practice-based students facing the 'death by slideshow' model.
Design history from a global perspective
The rise of Western cultural dominance in the 19th century fostered the idea of European civilization as culturally advanced, disregarding non-Western cultures by representing them as cultures without history. A global perspective on design history has meant a growth in understanding design history in a global context: different understandings of design history emerge, acknowledging its processes, production and consumption within different cultural contexts. This has occurred through what is called globalization. One route has been to build on existing modernist knowledge from Europe and adapt its processes, production and consumption to the standards of different cultures. The problem with this approach is that it assumes a single narrative of design history, limited to a specific place and time. Globalizing design history also means recognizing forms of design that may not count as design in Western countries; it means moving beyond modernist approaches and acknowledging forms of design other than those based on the European understanding of production and consumption. Such practices ensure that design history from different cultures is acknowledged and treated equally to that of the West.
Globalization has also meant that design history is no longer viewed only through the lens of production and consumption but also through that of theories, policies, social programs, opinions and organizational systems. This perspective acknowledges that design is not concerned only with material or three-dimensional products but includes a wide range of artifacts, among them the understanding of design history as a record of human ideas about how to live and interact. Aspects such as teamwork, management style and the appreciation of different types of creativity are all examples of design history demonstrating the art of living and interacting with each other. Diversity acts as a design technique used to facilitate creativity: diverse opinions and perspectives allow for a clash of ideas which enhances creativity and helps build new knowledge. Chinese design history and design studies have taken this approach by diversifying their approach to design, taking into consideration Chinese civilization, including its history of arts, crafts and philosophy, while incorporating Western technologies and marketing structures. In Southern Africa, by contrast, design techniques have long served as a form of social communication: rock paintings were used to communicate, and such communication advanced with the development of pictographs and alphabets.
Museums
Design Museum, London, UK.
V&A Museum, London, UK.
Cooper Hewitt National Design Museum, Smithsonian Museum, NY, USA.
See also
Design History Society
References
External links
Design History Society
Design Is History
Graphic design
Fashion design
Product design
Industrial design
Christian culture

Christian culture generally includes all the cultural practices which have developed around the religion of Christianity. There are variations in the application of Christian beliefs in different cultures and traditions.
Christian culture has influenced and assimilated much from Middle Eastern, Zoroastrian, Greco-Roman, Byzantine, Western, Slavic and Caucasian cultures. During the early Roman Empire, Christendom was divided between the pre-existing Greek East and Latin West. Consequently, different versions of Christian culture arose, with their own rites and practices, and Christianity remains culturally diverse in its Western and Eastern branches.
Christianity played a prominent role in the development of Western civilization, in particular, the Catholic Church and Protestantism. Western culture, throughout most of its history, has been nearly equivalent to Christian culture. Outside the Western world, Christianity has had an influence on various cultures, such as in Africa and Asia.
Christians have made noted contributions to human progress in a broad and diverse range of fields, both historically and in modern times, including science and technology, medicine, fine arts and architecture, politics, literature, music, philanthropy, philosophy, ethics, humanism, theatre and business. According to 100 Years of Nobel Prizes, a review of the Nobel Prizes awarded between 1901 and 2000, 65.4% of Nobel Prize laureates identified Christianity in its various forms as their religious preference.
Cultural influence
The Bible has had a profound influence on Western civilization and on cultures around the globe; it has contributed to the formation of Western law, art, texts, and education. With a literary tradition spanning two millennia, the Bible is one of the most influential works ever written. From practices of personal hygiene to philosophy and ethics, the Bible has directly and indirectly influenced politics and law, war and peace, sexual morals, marriage and family life, toilet etiquette, letters and learning, the arts, economics, social justice, medical care and more. The Gutenberg Bible was the first book printed in Europe using movable type.
Since the spread of Christianity from the Levant to Asia Minor, Mesopotamia, Europe, North Africa and the Horn of Africa during the early Roman Empire, Christendom has been divided between the pre-existing Greek East and Latin West. Consequently, different versions of Christian culture arose, with their own rites and practices, centered on cities such as Rome (Western Christianity) and Carthage, whose communities were called Western or Latin Christendom, and Constantinople (Eastern Christianity), Antioch (Syriac Christianity), Kerala (Indian Christianity) and Alexandria, among others, whose communities were called Eastern or Oriental Christendom. The Byzantine Empire was one of the peaks in Christian history and Christian civilization. From the 11th to 13th centuries, Latin Christendom rose to the central role of the Western world and Western culture.
Outside the Western world, Christianity has had an influence on various cultures, such as in Africa, the Near East, Middle East, Central Asia, East Asia, Southeast Asia, and the Indian subcontinent. Scholars and intellectuals agree that Christians in the Middle East have made significant contributions to Arab and Islamic civilization since the introduction of Islam, and they have had a significant impact on the culture of the Mashriq, Turkey, and Iran. Eastern Christian scientists and scholars of the medieval Islamic world (particularly Jacobite and Nestorian Christians) contributed to the Arab Islamic civilization during the reigns of the Umayyads and the Abbasids by translating works of Greek philosophers into Syriac and, afterwards, into Arabic. They also excelled in philosophy, science, theology, and medicine.
Historian Paul Legutko of Stanford University said the Catholic Church is "at the center of the development of the values, ideas, science, laws, and institutions which constitute what we call Western civilization." The Eastern Orthodox Church has played a prominent role in the history and culture of Eastern and Southeastern Europe, the Caucasus, and the Near East. The Oriental Orthodox Churches have played a prominent role in the history and culture of Armenia, Egypt, Turkey, Eritrea, Ethiopia, Sudan and parts of the Middle East and India. Protestants have extensively developed a unique culture that has made major contributions in education, the humanities and sciences, the political and social order, the economy and the arts, and many other fields.
Influence on Western culture
Christianity played a prominent role in the development of Western civilization, in particular the Catholic Church and Protestantism. Western culture, throughout most of its history, has been nearly equivalent to Christian culture, and much of the population of the Western hemisphere could broadly be described as cultural Christians. The notion of Europe and the Western world has been intimately connected with the concept of Christianity and Christendom; many even consider Christianity to be the link that created a unified European identity, although some currents originated elsewhere: the Renaissance and Romanticism began with renewed curiosity about, and passion for, the pagan world of old.
Although Western culture contained several polytheistic religions during its early years under the Greek and Roman Empires, as the centralized Roman power waned, the dominance of the Catholic Church was the only consistent force in Western Europe. Until the Age of Enlightenment, Christian culture guided the course of philosophy, literature, art, music, and science. Christian disciplines of the respective arts have subsequently developed into Christian philosophy, Christian art, Christian music, Christian literature, etc. Art and literature, law, education, and politics were preserved in the teachings of the Church, in an environment that, otherwise, would have probably seen their loss. The Church founded many cathedrals, universities, monasteries and seminaries, some of which continue to exist today. Medieval Christianity created the first modern universities. The Catholic Church established a hospital system in Medieval Europe that vastly improved upon the Roman valetudinaria. These hospitals were established to cater to "particular social groups marginalized by poverty, sickness, and age", according to historian of hospitals, Guenter Risse. Christianity also had a strong impact on all other aspects of life: marriage and family, education, the humanities and sciences, the political and social order, the economy, and the arts.
Christianity had a significant impact on education and science and medicine as the church created the basis of the Western system of education, and was the sponsor of founding universities in the Western world as the university is generally regarded as an institution that has its origin in the Medieval Christian setting. Many clerics throughout history have made significant contributions to science and Jesuits in particular have made numerous significant contributions to the development of science. Some scholars state that Christianity contributed to the rise of the Scientific Revolution. Protestantism also has had an important influence on science. According to the Merton Thesis, there was a positive correlation between the rise of English Puritanism and German Pietism on the one hand, and early experimental science on the other.
The cultural influence of Christianity includes social welfare, founding hospitals, economics (as the Protestant work ethic), natural law (which would later influence the creation of international law), politics, architecture, literature, personal hygiene (ablution), and family life. Historically, extended families were the basic family unit in the Christian culture and countries.
Christianity played a role in ending practices common among pagan societies, such as human sacrifice, slavery, infanticide and polygamy. Scientists such as Newton and Galileo believed that God would be better understood if God's creation was better understood.
Architecture
The architecture of cathedrals, basilicas and abbey churches is characterised by the buildings' large scale and follows one of several branching traditions of form, function and style that all ultimately derive from the Early Christian architectural traditions established in the Constantinian period.
Cathedrals in particular, as well as many abbey churches and basilicas, have certain complex structural forms that are found less often in parish churches. They also tend to display a higher level of contemporary architectural style and the work of accomplished craftsmen, and occupy a status, both ecclesiastical and social, that an ordinary parish church does not have. Such a cathedral or great church is generally one of the finest buildings within its region and is a focus of local pride. Many cathedrals and basilicas, and a number of abbey churches, are among the world's most renowned works of architecture. These include St. Peter's Basilica, Notre Dame de Paris, Cologne Cathedral, Salisbury Cathedral, Prague Cathedral, Lincoln Cathedral, the Basilica of St Denis, the Basilica of Santa Maria Maggiore, the Basilica of San Vitale, St Mark's Basilica, Westminster Abbey, Saint Basil's Cathedral, Washington National Cathedral, the Basilica of the National Shrine of the Immaculate Conception, the Cathedral Basilica of Saint Louis, Gaudí's incomplete Sagrada Familia and the ancient church of Hagia Sophia, now a museum. Hagia Sophia has been described as an architectural and cultural icon of Byzantine and Eastern Orthodox civilization.
The earliest large churches date from Late Antiquity. As Christianity and the construction of churches and cathedrals spread throughout the world, their manner of building was dependent upon local materials and local techniques. Different styles of architecture developed and their fashion spread, carried by the establishment of monastic orders, by the posting of bishops from one region to another and by the travelling of master stonemasons who served as architects. The styles of the great church buildings are successively known as Early Christian, Byzantine, Romanesque, Gothic, Renaissance, Baroque, various Revival styles of the late 18th to early 20th centuries and Modern. Overlaid on each of the academic styles are the regional characteristics. Some of these characteristics are so typical of a particular country or region that they appear, regardless of style, in the architecture of churches designed many centuries apart.
Art
Christian art is sacred art which uses themes and imagery from Christianity. Most Christian groups use or have used art to some extent, although some have had strong objections to some forms of religious image, and there have been major periods of iconoclasm within Christianity.
Images of Jesus and narrative scenes from the Life of Christ are the most common subjects, and scenes from the Old Testament play a part in the art of most denominations. Images of the Virgin Mary and saints are much rarer in Protestant art than in that of Roman Catholicism and Eastern Orthodoxy.
Christianity makes far wider use of images than related religions such as Islam and Judaism, in which figurative representations are forbidden. However, there is also a considerable history of aniconism in Christianity from various periods.
Illumination
An illuminated manuscript is a manuscript in which the text is supplemented by the addition of decoration. The earliest surviving substantive illuminated manuscripts are from the period AD 400 to 600, primarily produced in Ireland, Constantinople and Italy. The majority of surviving manuscripts are from the Middle Ages, although many illuminated manuscripts survive from the 15th-century Renaissance, along with a very limited number from Late Antiquity.
Most illuminated manuscripts were created as codices, which had superseded scrolls; some isolated single sheets survive. A very few illuminated manuscript fragments survive on papyrus. Most medieval manuscripts, illuminated or not, were written on parchment (most commonly of calf, sheep, or goat skin), but most manuscripts important enough to illuminate were written on the best quality of parchment, called vellum, traditionally made of unsplit calfskin, although high-quality parchment from other skins was also sometimes called vellum.
Iconography
Christian art began, about two centuries after Christ, by borrowing motifs from Roman Imperial imagery, classical Greek and Roman religion, and popular art. Religious images are used to some extent by the Christian faith, and often contain highly complex iconography, which reflects centuries of accumulated tradition. In the Late Antique period iconography began to be standardised and to relate more closely to biblical texts, although many gaps in the canonical Gospel narratives were plugged with matter from the apocryphal gospels. Eventually the Church would succeed in weeding most of these out, but some remain, like the ox and ass in the Nativity of Christ.
An icon is a religious work of art, most commonly a painting, from Orthodox Christianity. Christianity has used symbolism from its very beginnings. In both East and West, numerous iconic types of Christ, Mary and saints and other subjects were developed; the number of named types of icons of Mary, with or without the infant Christ, was especially large in the East, whereas Christ Pantocrator was much the commonest image of Christ.
Christian symbolism invests objects or actions with an inner meaning expressing Christian ideas. Christianity has borrowed from the common stock of significant symbols known to most periods and to all regions of the world. Religious symbolism is effective when it appeals to both the intellect and the emotions. Especially important depictions of Mary include the Hodegetria and Panagia types. Traditional models evolved for narrative paintings, including large cycles covering the events of the Life of Christ, the Life of the Virgin, parts of the Old Testament, and, increasingly, the lives of popular saints. Especially in the West, a system of attributes developed for identifying individual figures of saints by a standard appearance and symbolic objects held by them; in the East they were more likely to be identified by text labels.
Each saint has a story and a reason why he or she led an exemplary life. Symbols have been used to tell these stories throughout the history of the Church. A number of Christian saints are traditionally represented by a symbol or iconic motif associated with their life, termed an attribute or emblem, in order to identify them. The study of these forms part of iconography in art history.
Eastern Christian art
The dedication of Constantinople as capital in 330 AD created a great new Christian artistic centre for the Eastern Roman Empire, which soon became a separate political unit. Major Constantinopolitan churches built under Constantine and his son, Constantius II, included the original foundations of Hagia Sophia and the Church of the Holy Apostles. As the Western Roman Empire disintegrated and was taken over by "barbarian" peoples, the art of the Byzantine Empire reached levels of sophistication, power and artistry not previously seen in Christian art, and set the standards for those parts of the West still in touch with Constantinople.
This achievement was checked by the controversy over the use of graven images, and the proper interpretation of the Second Commandment, which led to the crisis of Iconoclasm, the destruction of religious images, which racked the Empire between 726 and 843. The restoration of Orthodoxy resulted in a strict standardization of religious imagery within the Eastern Church. Byzantine art became increasingly conservative, as the form of the images themselves, many accorded divine origin or thought to have been painted by Saint Luke or other figures, was held to have a status not far off that of a scriptural text. They could be copied, but not improved upon. As a concession to Iconoclast sentiment, monumental religious sculpture was effectively banned. Neither of these attitudes was held in Western Europe, but Byzantine art nonetheless had great influence there until the High Middle Ages, and remained very popular long after that, with vast numbers of icons of the Cretan School exported to Europe as late as the Renaissance. Where possible, Byzantine artists were borrowed for projects such as mosaics in Venice and Palermo. The enigmatic frescoes at Castelseprio may be an example of work by a Greek artist working in Italy.
The art of Eastern Catholicism has always been rather closer to the Orthodox art of Greece and Russia, and in countries near the Orthodox world, notably Poland, Catholic art has many Orthodox influences. The Black Madonna of Częstochowa may well have been of Byzantine origin, but it has been repainted and this is hard to tell. Other images that are certainly of Greek origin, like the Salus Populi Romani and Our Lady of Perpetual Help, both icons in Rome, have been subjects of specific veneration for centuries.
Although the influence has often been resisted, especially in Russia, Catholic art has also affected Orthodox depictions in many respects, especially in countries like Romania, and in the post-Byzantine Cretan School, which led Greek Orthodox art under Venetian rule in the 15th and 16th centuries. El Greco left Crete when relatively young, but Michael Damaskinos returned after a brief period in Venice, and was able to switch between Italian and Greek styles. Even the traditionalist Theophanes the Cretan, working mainly on Mount Athos, nevertheless shows unmistakable Western influence.
Many Eastern Orthodox states in Eastern Europe, as well as to some degree the Muslim states of the eastern Mediterranean, preserved many aspects of the empire's culture and art for centuries afterward. A number of states contemporary with the Byzantine Empire were culturally influenced by it, without actually being part of it (the "Byzantine commonwealth"). These included Bulgaria, Serbia, and Kievan Rus', as well as some non-Orthodox states like the Republic of Venice and the Kingdom of Sicily, which had close ties to the Byzantine Empire despite being in other respects part of western European culture. Art produced by Eastern Orthodox Christians living in the Ottoman Empire is often called "post-Byzantine". Certain artistic traditions that originated in the Byzantine Empire, particularly in regard to icon painting and church architecture, are maintained in Greece, Serbia, Bulgaria, Macedonia, Russia and other Eastern Orthodox countries to the present day.
Catholic art
Roman Catholic art consists of all visual works produced in an attempt to illustrate, supplement and portray in tangible form the teachings of the Catholic Church. This includes sculpture, painting, mosaics, metalwork, embroidery and even architecture. Catholic art has played a leading role in the history and development of Western art since at least the 4th century. The principal subject matter of Catholic art has been the life and times of Jesus Christ, along with those of his disciples, the saints, and the events of the Jewish Old Testament.
The earliest surviving art works are the painted frescoes on the walls of the catacombs and meeting houses of the persecuted Christians of the Roman Empire. The Christian Church in Rome was influenced by the Roman style of art and by the religious Christian artists of the time. The stone sarcophagi of Roman Christians exhibit the earliest surviving carved statuary of Jesus, Mary and other biblical figures. The legalisation of Christianity transformed Catholic art, which adopted richer forms such as mosaics and illuminated manuscripts. The iconoclasm controversy briefly divided the eastern and western churches, after which artistic development progressed in separate directions. Romanesque and Gothic art flowered in the Western Church as the style of painting and statuary moved in an increasingly naturalistic direction. The Protestant Reformation produced new waves of image-destruction, to which the Church responded with the dramatic and emotive Baroque and Rococo styles. In the 19th century, leadership in Western art moved away from the Catholic Church, which, after embracing historical revivalism, was increasingly affected by the modernist movement, a movement that in its "rebellion" against nature ran counter to the Church's emphasis on nature as a good creation of God.
Renaissance artists such as Raphael, Michelangelo, Leonardo da Vinci, Bernini, Botticelli, Fra Angelico, Tintoretto, Caravaggio, and Titian were among a multitude of innovative virtuosos sponsored by the Church.
British art historian Kenneth Clark wrote that Western Europe's first "great age of civilisation" was ready to begin around the year 1000. From 1100, he wrote, monumental abbeys and cathedrals were constructed and decorated with sculptures, hangings, mosaics and works belonging to one of the greatest epochs of art, providing stark contrast to the monotonous and cramped conditions of ordinary living during the period. Abbot Suger of the Abbey of St. Denis is considered an influential early patron of Gothic architecture and believed that love of beauty brought people closer to God: "The dull mind rises to truth through that which is material". Clark calls this "the intellectual background of all the sublime works of art of the next century and in fact has remained the basis of our belief of the value of art until today".
Later, during the Renaissance and Counter-Reformation, Catholic artists produced many of the unsurpassed masterpieces of Western art – often inspired by biblical themes: from Michelangelo's Moses, David and Pietà sculptures, to Da Vinci's Last Supper and Raphael's various Madonna paintings. Referring to a "great outburst of creative energy such as took place in Rome between 1620 and 1660", Kenneth Clark wrote:
[W]ith a single exception, the great artists of the time were all sincere, conforming Christians. Guercino spent much of his mornings in prayer; Bernini frequently went into retreats and practised the Spiritual Exercises of Saint Ignatius; Rubens attended Mass every morning before beginning work. The exception was Caravaggio, who was like the hero of a modern play, except that he happened to paint very well.
This conformism was not based on fear of the Inquisition, but on the perfectly simple belief that the faith which had inspired the great saints of the preceding generation was something by which a man should regulate his life.
Protestant art
The Protestant Reformation during the 16th century in Europe almost entirely rejected the existing tradition of Catholic art, and very often destroyed as much of it as it could reach. A new artistic tradition developed, producing far smaller quantities of art that followed Protestant agendas and diverged drastically from the southern European tradition and the humanist art produced during the High Renaissance. In turn, the Catholic Counter-Reformation both reacted against and responded to Protestant criticisms of art in Roman Catholicism to produce a more stringent style of Catholic art. Protestant religious art both embraced Protestant values and assisted in the proliferation of Protestantism, but the amount of religious art produced in Protestant countries was hugely reduced. Artists in Protestant countries diversified into secular forms of art like history painting, landscape painting, portrait painting and still life.
Prominent painters with a Protestant background include, for example, Albrecht Dürer, Hans Holbein the Younger, Lucas Cranach, Rembrandt, and Vincent van Gogh. World literature was enriched by the works of Edmund Spenser, John Milton, John Bunyan, John Donne, John Dryden, Daniel Defoe, William Wordsworth, Jonathan Swift, Johann Wolfgang Goethe, Friedrich Schiller, Samuel Taylor Coleridge, Edgar Allan Poe, Matthew Arnold, Conrad Ferdinand Meyer, Theodor Fontane, Washington Irving, Robert Browning, Emily Dickinson, Emily Brontë, Charles Dickens, Nathaniel Hawthorne, Thomas Stearns Eliot, John Galsworthy, Thomas Mann, William Faulkner, John Updike, and many others.
Education
The university is generally regarded as an institution that has its origin in the Medieval Christian setting. Prior to the establishment of universities, European higher education took place for hundreds of years in Christian cathedral schools or monastic schools (Scholae monasticae), in which monks and nuns taught classes; evidence of these immediate forerunners of the later university at many places dates back to the 6th century AD.
Missionary activity for the Catholic Church has always incorporated education of evangelized peoples as part of its social ministry. History shows that in evangelized lands, the first people to operate schools were Roman Catholics. In some countries, the Church is the main provider of education or significantly supplements government forms of education. Presently, the Church operates the world's largest non-governmental school system. Many of Western Civilization's most influential universities were founded by the Catholic Church.
The Catholic Church founded the West's first universities, which were preceded by the schools attached to monasteries and cathedrals, and generally staffed by monks and friars. Universities began springing up in Italian towns like Salerno, which became a leading medical school, translating the work of Greek and Arabic physicians into Latin. Bologna University became the most influential of the early universities, and first specialised in canon law and civil law. Paris University, specialising in such topics as theology, came to rival Bologna under the supervision of Notre Dame Cathedral. Oxford University in England later came to rival Paris in theology, and Salamanca University was founded in Spain in 1243. According to the historian Geoffrey Blainey, the universities benefitted from the use of Latin, the common language of the Church, and its internationalist reach, and their role was to "teach, argue and reason within a Christian framework". The medieval universities of Western Christendom were well-integrated across all of Western Europe, encouraged freedom of enquiry, and produced a great variety of fine scholars and natural philosophers, including Robert Grosseteste of the University of Oxford, an early expositor of a systematic method of scientific experimentation, and Saint Albert the Great, a pioneer of biological field research. The Catholic Church has always been involved in education, since the founding of the first universities of Europe; it runs and sponsors thousands of primary and secondary schools, colleges and universities throughout the world.
As the Reformers wanted all members of the church to be able to read the Bible, education on all levels received a strong boost. Compulsory education for both boys and girls was introduced. For example, the Puritans who established Massachusetts Bay Colony in 1628 founded Harvard College only eight years later. About a dozen other colleges followed in the 18th century, including Yale University (1701). Pennsylvania also became a centre of learning, while Princeton University was a Presbyterian foundation. Protestantism also initiated translations of the Bible into national languages and thereby supported the development of national literatures.
A large number of mainline Protestants have played leadership roles in many aspects of American life, including politics, business, science, the arts, and education. They founded most of the country's leading institutes of higher education. The Ivy League universities have strong White Anglo-Saxon Protestant historical ties, and their influence continues today. Until about World War II, Ivy League universities were composed largely of WASP students.
Some of the first colleges and universities in America, including Harvard, Yale, Princeton, Columbia, Dartmouth, Williams, Bowdoin, Middlebury, and Amherst, were founded by mainline Protestants, as were, later, Carleton, Duke, Oberlin, Beloit, Pomona, Rollins and Colorado College.
According to a Pew Research Center study, there is a correlation between education and income: about 59% of American Anglicans hold a graduate or post-graduate degree, as do about 56% of Episcopalians, 47% of Presbyterians, and 46% of members of the United Church of Christ.
A 2016 Pew Center study on religion and education around the world found that Christians ranked as the second most educated religious group in the world after Jews, with an average of 9.3 years of schooling; the highest average years of schooling among Christians were found in Germany (13.6), New Zealand (13.5) and Estonia (13.1). Christians were also found to have the second highest number of graduate and post-graduate degrees per capita, while ranking first in absolute numbers (220 million). Among the various Christian communities, Singapore outranks other nations in the share of Christians who obtain a university degree (67%), followed by the Christians of Israel (63%) and the Christians of Georgia (57%).
According to the study, Christians in North America, Europe, the Middle East, North Africa and the Asia-Pacific region are highly educated, since many of the world's universities were built by the historic Christian churches; there is also historical evidence that "Christian monks built libraries and, in the days before printing presses, preserved important earlier writings produced in Latin, Greek and Arabic". According to the same study, Christians have a significant amount of gender equality in educational attainment, and the study suggests that one of the reasons is the encouragement of the Protestant Reformers to promote the education of women, which led to the eradication of illiteracy among females in Protestant communities.
According to the same study "there is a large and pervasive gap in educational attainment between Muslims and Christians in sub-Saharan Africa" as Muslim adults in this region are far less educated than their Christian counterparts, with scholars suggesting that this gap is due to the educational facilities that were created by Christian missionaries during the colonial era for fellow believers.
Literature and poetry
Christian literature is writing that deals with Christian themes and incorporates the Christian world view. It constitutes a huge body of extremely varied writing. Christian poetry is any poetry that contains Christian teachings, themes, or references. The influence of Christianity on poetry has been great in any area where Christianity has taken hold. Christian poems often directly reference the Bible, while others provide allegory.
While falling within the strict definition of literature, the Bible is not generally read as literature; however, it has been treated and appreciated as such, and it is a cornerstone of Western civilization. The King James Version in particular has long been considered a masterpiece of English prose, whatever may be thought of its religious significance. The Authorized Version has been called "the most influential version of the most influential book in the world, in what is now its most influential language", "the most important book in English religion and culture", and "the most celebrated book in the English-speaking world". David Crystal has estimated that it is responsible for 257 idioms in English; examples include feet of clay and reap the whirlwind. Furthermore, prominent atheist figures such as the late Christopher Hitchens and Richard Dawkins have praised the King James Version as being "a giant step in the maturing of English literature" and "a great work of literature", respectively, with Dawkins then adding, "A native speaker of English who has never read a word of the King James Bible is verging on the barbarian". Several retellings of the Bible, or parts of the Bible, have also been made with the aim of emphasising its literary qualities. With estimated sales of over 5 billion copies, the Bible is widely considered to be the best-selling book of all time. It sells approximately 100 million copies annually, and has been a major influence on literature and history, especially in the West, where the Gutenberg Bible was the first book printed using movable type.
In Byzantine literature, four different cultural elements are recognised: the Greek, the Christian, the Roman, and the Oriental. Byzantine literature is often classified in five groups: historians and annalists; encyclopaedists (Patriarch Photios, Michael Psellus, and Michael Choniates are regarded as the greatest encyclopaedists of Byzantium) and essayists; and writers of secular poetry (the only genuine heroic epic of the Byzantines is the Digenis Acritas). The remaining two groups comprise the newer literary species: ecclesiastical and theological literature, and popular poetry. It was in Alexandria that Graeco-Oriental Christianity had its birth: there the Septuagint translation had been made; there the fusion of Greek philosophy and Jewish religion that culminated in Philo took place; there flourished the mystic speculative Neoplatonism associated with Plotinus and Porphyry. At Alexandria the great Greek ecclesiastical writers worked alongside pagan rhetoricians and philosophers; several were born there, e.g. Origen, Athanasius and his opponent Arius, as well as Cyril and Synesius. On Egyptian soil monasticism began and thrived. After Alexandria, Antioch held great prestige; there a school of Christian commentators flourished under St. John Chrysostom, and there later arose the Christian universal chronicles. In surrounding Syria we find the germs of Greek ecclesiastical poetry, while from neighboring Palestine came St. John of Damascus, one of the Greek Fathers.
The list of Catholic authors and literary works is vast. With a literary tradition spanning two millennia, the Bible and papal encyclicals have been constants of the Catholic canon, but countless other historical works may be listed as noteworthy in terms of their influence on Western society. From late Antiquity, St Augustine's Confessions, which outlines his sinful youth and conversion to Christianity, is widely considered to be the first autobiography ever written in the canon of Western literature, and Augustine profoundly influenced the coming medieval worldview. The Summa Theologica, written 1265–1274, is the best-known work of Thomas Aquinas (c. 1225–1274) and, although unfinished, "one of the classics of the history of philosophy and one of the most influential works of Western literature." It is intended as a manual for beginners in theology and a compendium of all of the main theological teachings of the Church, and it presents the reasoning for almost all points of Christian theology in the West. The epic poetry of the Italian Dante and his Divine Comedy of the late Middle Ages is also considered immensely influential. The English statesman and philosopher Thomas More wrote the seminal work Utopia in 1516. St Ignatius Loyola, a key figure in the Catholic Counter-Reformation, is the author of an influential book of meditations known as the Spiritual Exercises.
Catholic authors have also enriched world literature, among them Dante Alighieri, Geoffrey Chaucer, John Dryden, Walker Percy, Jack Kerouac, Evelyn Waugh, Alexander Pope, Honoré de Balzac, Oscar Wilde, Thomas Merton, Toni Morrison, Ernest Hemingway, J.R.R. Tolkien, G. K. Chesterton, Claude McKay, Paul Verlaine, Graham Greene, Sigrid Undset, Tennessee Williams, Francois Mauriac, Flannery O'Connor, Gerard Manley Hopkins, Paul Claudel, F. Scott Fitzgerald, Michel de Montaigne, Siegfried Sassoon, John Henry Newman, Hugo von Hofmannsthal, Arthur Rimbaud, Joseph Conrad, Miguel de Cervantes, Czeslaw Milosz, Hilaire Belloc, John of the Cross, Luis Vaz De Camoes, Edith Sitwell and Thomas More, among others.
Medicine and health care
The administrations of the Eastern and Western Roman Empires split, and the demise of the Western Empire by the 6th century was accompanied by a series of violent invasions that precipitated the collapse of cities and of civic institutions of learning, along with their links to the learning of classical Greece and Rome. For the next thousand years, medical knowledge would change very little. A scholarly medical tradition maintained itself in the more stable East, but in the West scholarship virtually disappeared outside the Church, where monks were aware of a dwindling range of medical texts. Hospitality was considered an obligation of Christian charity, and bishops' houses and the valetudinaria of wealthier Christians were used to tend the sick. The legacy of this early period was, in the words of the medical historian Roy Porter, that "Christianity planted the hospital: the well-endowed establishments of the Levant and the scattered houses of the West shared a common religious ethos of charity."
The Byzantine Empire was one of the first empires to have flourishing medical establishments. Prior to the Byzantine Empire, the Roman Empire had hospitals specifically for soldiers and slaves, but none of these establishments served the public. The hospitals in Byzantium were originally started by the church to act as places where the poor could have access to basic amenities. Hospitals were usually separated between men and women. Although the remains of these hospitals have not been discovered by archaeologists, written accounts of hospitals from the Byzantine Empire describe large buildings whose core feature was an open hearth. The establishments of the Byzantine Empire resembled the beginning of what we now know as modern hospitals. The first hospital was erected by Leontius of Antioch between 344 and 358 and was a place for strangers and migrants to find refuge. Around the same time, a deacon named Marathonius was in charge of hospitals and monasteries in Constantinople. His main objective was to improve urban aesthetics, illustrating that hospitals were a main part of Byzantine cities. These early hospitals were designed for the poor; in fact, most hospitals throughout the Byzantine Empire were almost exclusively utilized by the poor. This may be reflected in descriptions such as that of Gregory Nazianzen, who called the hospital a stairway to heaven, implying that it aimed only to ease death for the chronically or terminally ill rather than promote recovery. There is debate among scholars as to why these institutions were started by the church; many believe that the church founded hospitals in order to receive additional donations. Whatever the motive, these hospitals began to diffuse across the empire. Soon after, St. Basil of Caesarea developed a place for the sick which provided refuge for the sick and the homeless.
Geoffrey Blainey likened the Catholic Church in its activities during the Middle Ages to an early version of a welfare state: "It conducted hospitals for the old and orphanages for the young; hospices for the sick of all ages; places for the lepers; and hostels or inns where pilgrims could buy a cheap bed and meal". It supplied food to the population during famine and distributed food to the poor. The church funded this welfare system by collecting taxes on a large scale and by possessing large farmlands and estates. It was common for monks and clerics to practice medicine, and medical students in northern European universities often took minor Holy orders. Medieval hospitals had a strongly Christian ethos and were, in the words of Porter, "religious foundations through and through", and ecclesiastical regulations were passed to govern medicine, partly to prevent clergymen from profiting from it. During Europe's Age of Discovery, Catholic missionaries, notably the Jesuits, introduced the modern sciences to India, China and Japan. While persecutions continue to limit the spread of Catholic institutions in some Middle Eastern Muslim nations, and in such places as the People's Republic of China and North Korea, elsewhere in Asia the church is a major provider of health care services, especially in Catholic nations like the Philippines.
Today the Roman Catholic Church is the largest non-government provider of health care services in the world. It has around 18,000 clinics, 16,000 homes for the elderly and those with special needs, and 5,500 hospitals, with 65 percent of them located in developing countries. In 2010, the Church's Pontifical Council for the Pastoral Care of Health Care Workers said that the Church manages 26% of the world's health care facilities. The Church's involvement in health care has ancient origins.
Music
Christian music is music that has been written to express either personal or communal belief regarding Christian life and faith. Common themes of Christian music include praise, worship, penitence, and lament, and its forms vary widely across the world.
As with other forms of music, the creation, performance, significance, and even the definition of Christian music vary according to culture and social context. Christian music is composed and performed for many purposes, ranging from aesthetic pleasure and religious or ceremonial use to entertainment for the marketplace.
In music, Catholic monks developed the first forms of modern Western musical notation in order to standardize liturgy throughout the worldwide Church, and an enormous body of religious music has been composed for it through the ages. This led directly to the emergence and development of European classical music, and its many derivatives. The Baroque style, which encompassed music, art, and architecture, was particularly encouraged by the post-Reformation Catholic Church as such forms offered a means of religious expression that was stirring and emotional, intended to stimulate religious fervor.
The list of Catholic composers and Catholic sacred music with a prominent place in Western culture is extensive, and includes Ludwig van Beethoven's Ode to Joy, Wolfgang Amadeus Mozart's Ave verum corpus, Franz Schubert's Ave Maria, César Franck's Panis angelicus, and Antonio Vivaldi's Gloria.
Martin Luther, Paul Gerhardt, George Wither, Isaac Watts, Charles Wesley, William Cowper, and many other authors and composers created well-known church hymns. Musicians like Heinrich Schütz, Johann Sebastian Bach, George Frideric Handel, Henry Purcell, Johannes Brahms, and Felix Mendelssohn-Bartholdy composed great works of music.
Philosophy
Christian philosophy is a term describing the fusion of various fields of philosophy with the theological doctrines of Christianity. Scholasticism, which means "that [which] belongs to the school", was a method of learning taught by the academics (or schoolmen) of medieval universities c. 1100–1500. Scholasticism originally arose to reconcile the philosophy of the ancient classical philosophers with medieval Christian theology. Scholasticism is not a philosophy or theology in itself but a tool and method for learning which places emphasis on dialectical reasoning.
Medieval philosophy is the philosophy of Western Europe and the Middle East during the Middle Ages, roughly extending from the Christianization of the Roman Empire until the Renaissance. Medieval philosophy is defined partly by the rediscovery and further development of classical Greek and Hellenistic philosophy, and partly by the need to address theological problems and to integrate the then widespread sacred doctrines of Abrahamic religion (Islam, Judaism, and Christianity) with secular learning.
The history of western European medieval philosophy is traditionally divided into two main periods: the period in the Latin West following the Early Middle Ages until the 12th century, when the works of Aristotle and Plato were preserved and cultivated; and the "golden age" of the 12th, 13th and 14th centuries in the Latin West, which witnessed the culmination of the recovery of ancient philosophy, and significant developments in the field of philosophy of religion, logic and metaphysics.
The medieval era was disparagingly treated by the Renaissance humanists, who saw it as a barbaric "middle" period between the classical age of Greek and Roman culture, and the "rebirth" or renaissance of classical culture. Yet this period of nearly a thousand years was the longest period of philosophical development in Europe, and possibly the richest. Jorge Gracia has argued that "in intensity, sophistication, and achievement, the philosophical flowering in the thirteenth century could be rightly said to rival the golden age of Greek philosophy in the fourth century B.C."
Some problems discussed throughout this period are the relation of faith to reason, the existence and unity of God, the object of theology and metaphysics, the problems of knowledge, of universals, and of individuation.
Philosophers from the Middle Ages include the Christian philosophers Augustine of Hippo, Boethius, Anselm, Gilbert of Poitiers, Peter Abelard, Roger Bacon, Bonaventure, Thomas Aquinas, Duns Scotus, William of Ockham and Jean Buridan; the Jewish philosophers Maimonides and Gersonides; and the Muslim philosophers Alkindus, Alfarabi, Alhazen, Avicenna, Algazel, Avempace, Abubacer, Ibn Khaldūn, and Averroes. The medieval tradition of Scholasticism continued to flourish as late as the 17th century, in figures such as Francisco Suarez and John of St. Thomas.
Aquinas, the father of Thomism, was immensely influential in Catholic Europe; he placed a great emphasis on reason and argumentation, and was one of the first to use the new translations of Aristotle's metaphysical and epistemological writings. His work was a significant departure from the Neoplatonic and Augustinian thinking that had dominated much of early Scholasticism.
The Renaissance ("rebirth") was a period of transition between the Middle Ages and modern thought, in which the recovery of classical texts helped shift philosophical interests away from technical studies in logic, metaphysics, and theology towards eclectic inquiries into morality, philology, and mysticism. The study of the classics and the humane arts generally, such as history and literature, enjoyed a scholarly interest hitherto unknown in Christendom, a tendency referred to as humanism. Displacing the medieval interest in metaphysics and logic, the humanists followed Petrarch in making man and his virtues the focus of philosophy.
These new movements in philosophy developed contemporaneously with larger religious and political transformations in Europe: the Reformation and the decline of feudalism. Though the theologians of the Protestant Reformation showed little direct interest in philosophy, their destruction of the traditional foundations of theological and intellectual authority harmonized with a revival of fideism and skepticism in thinkers such as Erasmus, Montaigne, and Francisco Sanches. Meanwhile, the gradual centralization of political power in nation-states was echoed by the emergence of secular political philosophies, as in the works of Niccolò Machiavelli (often described as the first modern political thinker, or a key turning point towards modern political thinking), Thomas More, Erasmus, Justus Lipsius, Jean Bodin, and Hugo Grotius.
Science and technology
Earlier attempts at reconciliation of Christianity with Newtonian mechanics appear quite different from later attempts at reconciliation with the newer scientific ideas of evolution or relativity. Many early interpretations of evolution polarized themselves around a struggle for existence. These ideas were significantly countered by later findings of universal patterns of biological cooperation. According to John Habgood, all man really knows here is that the universe seems to be a mix of good and evil, beauty and pain, and that suffering may somehow be part of the process of creation. Habgood holds that Christians should not be surprised that suffering may be used creatively by God, given their faith in the symbol of the Cross.
Robert John Russell has examined consonance and dissonance between modern physics, evolutionary biology, and Christian theology.
Christian philosophers Augustine of Hippo (354–430) and Thomas Aquinas held that scriptures can have multiple interpretations in areas where the matters were far beyond human reach, and that one should therefore leave room for future findings to shed light on their meanings. The "Handmaiden" tradition, which saw secular studies of the universe as a very important and helpful part of arriving at a better understanding of scripture, was adopted throughout Christian history from early on. The sense that God created the world as a self-operating system is also what motivated many Christians throughout the Middle Ages to investigate nature.
Modern historians of science such as J.L. Heilbron, Alistair Cameron Crombie, David Lindberg, Edward Grant, Thomas Goldstein, and Ted Davis have reviewed the popular notion that medieval Christianity was a negative influence in the development of civilization and science. In their view, not only did the monks save and cultivate the remnants of ancient civilization during the barbarian invasions, but the medieval church promoted learning and science through its sponsorship of many universities which, under its leadership, grew rapidly in Europe in the 11th and 12th centuries. St. Thomas Aquinas, the Church's "model theologian", not only argued that reason is in harmony with faith; he even recognized that reason can contribute to understanding revelation, and so encouraged intellectual development. He was not unlike other medieval theologians who sought out reason in the effort to defend the faith. Some of today's scholars, such as Stanley Jaki, have claimed that Christianity, with its particular worldview, was a crucial factor in the emergence of modern science. Some scholars and historians credit Christianity with having contributed to the rise of the Scientific Revolution.
Professor Noah J. Efron says that "Generations of historians and sociologists have discovered many ways in which Christians, Christian beliefs, and Christian institutions played crucial roles in fashioning the tenets, methods, and institutions of what in time became modern science. They found that some forms of Christianity provided the motivation to study nature systematically..." Virtually all modern scholars and historians agree that Christianity moved many early-modern intellectuals to study nature systematically.
Individual scientists' beliefs
Christian scholars and scientists have made noted contributions to science, technology, and medicine. Many well-known historical figures who influenced Western science considered themselves Christian, such as Nicolaus Copernicus, Galileo Galilei, Johannes Kepler, Isaac Newton, Robert Boyle, Francis Bacon, Gottfried Wilhelm Leibniz, Carl Friedrich Gauss, Emanuel Swedenborg, Alessandro Volta, Antoine Lavoisier, André-Marie Ampère, John Dalton, James Clerk Maxwell, William Thomson, 1st Baron Kelvin, Louis Pasteur, Michael Faraday, and J. J. Thomson.
Isaac Newton, for example, believed that gravity caused the planets to revolve about the Sun, and credited God with the design. In the concluding General Scholium to the Philosophiae Naturalis Principia Mathematica, he wrote: "This most beautiful System of the Sun, Planets and Comets, could only proceed from the counsel and dominion of an intelligent and powerful being." Other famous founders of science who adhered to Christian beliefs include Galileo, Johannes Kepler, and Blaise Pascal.
Prominent modern scientists advocating Christian belief include Nobel Prize–winning physicists Charles Townes (United Church of Christ member) and William Daniel Phillips (United Methodist Church member), evangelical Christian and past head of the Human Genome Project Francis Collins, and climatologist John T. Houghton.
According to 100 Years of Nobel Prizes, a review of the Nobel Prizes awarded between 1901 and 2000 reveals that 65.4% of Nobel Prize laureates identified Christianity in its various forms as their religious preference. Overall, Christians won 72.5% of the prizes in Chemistry, 65.3% in Physics, 62% in Medicine, and 54% in Economics between 1901 and 2000.
Eastern Christianity
Byzantine science was essentially classical science, and it played an important and crucial role in the transmission of classical knowledge to the Islamic world and to Renaissance Italy. Despite some opposition to pagan learning, many of the most distinguished classical scholars held high office in the Eastern Orthodox Church, and Byzantine science was in every period closely connected with ancient pagan philosophy and metaphysics. The writings of antiquity never ceased to be cultivated in the Byzantine empire, owing to the impetus given to classical studies by the Academy of Athens in the 4th and 5th centuries, the vigor of the philosophical academy of Alexandria, and the services of the University of Constantinople, which concerned itself entirely with secular subjects, to the exclusion of theology, which was taught in the Patriarchal Academy. Even the latter offered instruction in the ancient classics, and included literary, philosophical, and scientific texts in its curriculum. The monastic schools concentrated upon the Bible, theology, and liturgy; the monastic scriptoria therefore expended most of their efforts upon the transcription of ecclesiastical manuscripts, while ancient pagan literature was transcribed, summarized, excerpted, and annotated by laymen or clergy like Photios, Arethas of Caesarea, Eustathius of Thessalonica, and Basilius Bessarion. Byzantine scientists preserved and continued the legacy of the great ancient Greek mathematicians and put mathematics into practice. In early Byzantium (5th to 7th century) the architects and mathematicians Isidore of Miletus and Anthemius of Tralles used complex mathematical formulas to construct the great Hagia Sophia church, a technological breakthrough for its time and for centuries afterwards owing to its striking geometry, bold design and height. In late Byzantium (9th to 12th century) mathematicians like Michael Psellos considered mathematics a way to interpret the world.
Middle Eastern Christians, especially the adherents of the Church of the East (Nestorians), contributed to Arab Islamic civilization during the Umayyad and Abbasid periods by translating the works of Greek philosophers into Syriac and afterwards into Arabic. During the 4th through 7th centuries, scholarly work in the Syriac and Greek languages was either newly initiated or carried on from the Hellenistic period. Centers of learning and of transmission of classical wisdom included colleges such as the School of Nisibis and later the School of Edessa, and the renowned hospital and medical academy of Jundishapur; libraries included the Library of Alexandria and the Imperial Library of Constantinople; other centers of translation and learning functioned at Merv, Salonika, Nishapur and Ctesiphon, situated just south of what later became Baghdad.
Many scholars of the House of Wisdom, a library, translation institute, and academy established in Abbasid-era Baghdad, Iraq, were of Christian background. Nestorians played a prominent role in the formation of Arab culture, with the Jundishapur school being prominent in the late Sassanid, Umayyad and early Abbasid periods. Notably, eight generations of the Nestorian Bukhtishu family served as private doctors to caliphs and sultans between the 8th and 11th centuries.
The migration waves of Byzantine scholars and émigrés in the period following the Crusader sacking of Constantinople in 1204 and the end of the Byzantine Empire in 1453 are considered by many scholars key to the revival of Greek and Roman studies that led to the development of Renaissance humanism and science. These émigrés brought to Western Europe the relatively well-preserved remnants and accumulated knowledge of their own (Greek) civilization, which had mostly not survived the Early Middle Ages in the West. According to the Encyclopædia Britannica: "Many modern scholars also agree that the exodus of Greeks to Italy as a result of this event marked the end of the Middle Ages and the beginning of the Renaissance".
Catholic Church
While refined and clarified over the centuries, the Roman Catholic position on the relationship between science and religion is one of harmony, and has maintained the teaching of natural law as set forth by Thomas Aquinas. For example, regarding scientific study such as that of evolution, the church's unofficial position is an example of theistic evolution, stating that faith and scientific findings regarding human evolution are not in conflict, though humans are regarded as a special creation, and that the existence of God is required to explain both monogenism and the spiritual component of human origins. Catholic schools have included all manners of scientific study in their curriculum for many centuries.
Galileo once stated "The intention of the Holy Spirit is to teach us how to go to heaven, not how the heavens go." In 1981 John Paul II, then pope of the Roman Catholic Church, spoke of the relationship this way: "The Bible itself speaks to us of the origin of the universe and its make-up, not in order to provide us with a scientific treatise, but in order to state the correct relationships of man with God and with the universe. Sacred Scripture wishes simply to declare that the world was created by God, and in order to teach this truth it expresses itself in the terms of the cosmology in use at the time of the writer".
The influence of the Church on Western letters and learning has been formidable. The ancient texts of the Bible have deeply influenced Western art, literature and culture. For centuries following the collapse of the Western Roman Empire, small monastic communities were practically the only outposts of literacy in Western Europe. In time, the Cathedral schools developed into Europe's earliest universities and the church has established thousands of primary, secondary and tertiary institutions throughout the world in the centuries since. The Church and clergymen have also sought at different times to censor texts and scholars. Thus different schools of opinion exist as to the role and influence of the Church in relation to western letters and learning.
The Catholic Cistercian order used its own numbering system, which could express numbers from 0 to 9999 in a single sign. According to one modern Cistercian, "enterprise and entrepreneurial spirit" have always been a part of the order's identity, and the Cistercians "were catalysts for development of a market economy" in 12th-century Europe. Until the Industrial Revolution, most of the technological advances in Europe were made in the monasteries. According to the medievalist Jean Gimpel, their high level of industrial technology facilitated the diffusion of new techniques: "Every monastery had a model factory, often as large as the church and only several feet away, and waterpower drove the machinery of the various industries located on its floor." Waterpower was used for crushing wheat, sieving flour, fulling cloth and tanning – a "level of technological achievement [that] could have been observed in practically all" of the Cistercian monasteries. The English science historian James Burke examines the impact of Cistercian waterpower, derived from Roman watermill technology such as that of Barbegal aqueduct and mill near Arles in the fourth of his ten-part Connections TV series, called "Faith in Numbers". The Cistercians made major contributions to culture and technology in medieval Europe: Cistercian architecture is considered one of the most beautiful styles of medieval architecture; and the Cistercians were the main force of technological diffusion in fields such as agriculture and hydraulic engineering.
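The Cistercian numbering system mentioned above worked by superimposing four decimal digits on a single vertical stave, one digit per quadrant of the glyph, which is how one sign could cover every value from 0 to 9999. The following minimal sketch illustrates the place-value decomposition behind that capability; it is not drawn from the sources cited in this article, the function name is ours, and the quadrant layout follows common descriptions of the notation:

```python
# A minimal sketch of the place-value decomposition behind a Cistercian
# numeral: one glyph is a vertical stave with up to four digit marks, one
# per quadrant. The quadrant assignment below (units top-right, tens
# top-left, hundreds bottom-right, thousands bottom-left) follows common
# descriptions of the notation and is an assumption, not taken from the
# sources cited in this article.

def cistercian_digits(n: int) -> dict:
    """Decompose n (0-9999) into the four digit positions of one glyph."""
    if not 0 <= n <= 9999:
        raise ValueError("Cistercian numerals encode 0 to 9999 only")
    return {
        "units (top-right)": n % 10,
        "tens (top-left)": (n // 10) % 10,
        "hundreds (bottom-right)": (n // 100) % 10,
        "thousands (bottom-left)": n // 1000,
    }

# Example: 1994 is drawn as one sign combining the marks for 4, 9, 9 and 1.
print(cistercian_digits(1994))
```

In common descriptions of the system, a zero in any position is simply left unmarked on the stave, so the same scheme covers numbers of fewer than four digits.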
One view, first propounded by Enlightenment philosophers, asserts that the Church's doctrines are entirely superstitious and have hindered the progress of civilization. Communist states have made similar arguments in their education in order to inculcate a negative view of Catholicism (and religion in general) in their citizens. The most famous incidents cited by such critics are the Church's condemnations of the teachings of Copernicus, Galileo Galilei and Johannes Kepler.
The Church's priest-scientists, many of whom were Jesuits, have been among the leading lights in astronomy, genetics, geomagnetism, meteorology, seismology, and solar physics, becoming some of the "fathers" of these sciences. Examples include important churchmen such as the Augustinian abbot Gregor Mendel (pioneer in the study of genetics), Roger Bacon (a Franciscan friar who was one of the early advocates of the scientific method), and the Belgian priest Georges Lemaître (the first to propose the Big Bang theory). Other notable priest-scientists have included Albertus Magnus, Robert Grosseteste, Nicholas Steno, Francesco Grimaldi, Giambattista Riccioli, Roger Boscovich, and Athanasius Kircher. Even more numerous are Catholic laity involved in science: Henri Becquerel, who discovered radioactivity; Galvani, Volta, Ampère and Marconi, pioneers in electricity and telecommunications; Lavoisier, "father of modern chemistry"; Vesalius, founder of modern human anatomy; and Cauchy, one of the mathematicians who laid the rigorous foundations of calculus.
Throughout history many of the Roman Catholic clerics have made contributions to science, mostly during periods of Church domination of public life. These cleric-scientists include Nicolaus Copernicus, Gregor Mendel, Georges Lemaître, Albertus Magnus, Roger Bacon, Pierre Gassendi, Roger Joseph Boscovich, Marin Mersenne, Bernard Bolzano, Francesco Maria Grimaldi, Nicole Oresme, Jean Buridan, Robert Grosseteste, Christopher Clavius, Nicolas Steno, Athanasius Kircher, Giovanni Battista Riccioli, William of Ockham, and others. The Catholic Church has also produced many lay scientists and mathematicians, including 20th-century Nobel laureates like chemist Mario J. Molina, chemist John Polanyi, physicist Riccardo Giacconi, among many others.
Jesuit in science
The Jesuits have made numerous significant contributions to the development of science. For example, the Jesuits have dedicated significant study to earthquakes, and seismology has been described as "the Jesuit science". The Jesuits have been described as "the single most important contributor to experimental physics in the seventeenth century". According to Jonathan Wright in his book God's Soldiers, by the 18th century the Jesuits had "contributed to the development of pendulum clocks, pantographs, barometers, reflecting telescopes and microscopes, to scientific fields as various as magnetism, optics and electricity. They observed, in some cases before anyone else, the colored bands on Jupiter's surface, the Andromeda nebula and Saturn's rings. They theorized about the circulation of the blood (independently of Harvey), the theoretical possibility of flight, the way the moon affected the tides, and the wave-like nature of light."
Protestant
Protestantism had an important influence on science. According to the Merton Thesis, there was a positive correlation between the rise of Puritanism and Protestant Pietism on the one hand and early experimental science on the other. The Merton Thesis has two separate parts: firstly, it presents a theory that science changes due to an accumulation of observations and improvement in experimental techniques and methodology; secondly, it puts forward the argument that the popularity of science in 17th-century England and the religious demography of the Royal Society (English scientists of that time were predominantly Puritans or other Protestants) can be explained by a correlation between Protestantism and scientific values. In his theory, Robert K. Merton focused on English Puritanism and German Pietism as having been responsible for the development of the Scientific Revolution of the 17th and 18th centuries. Merton explained that the connection between religious affiliation and interest in science was the result of a significant synergy between ascetic Protestant values and those of modern science. Protestant values encouraged scientific research by allowing science to study God's influence on the world, thus providing a religious justification for scientific research.
According to Scientific Elite: Nobel Laureates in the United States by Harriet Zuckerman, a review of the American Nobel Prize winners awarded between 1901 and 1972 shows that 72% of American Nobel laureates came from a Protestant background. Overall, Protestants won 84.2% of the American Nobel Prizes in Chemistry, 60% in Medicine, and 58.6% in Physics between 1901 and 1972.
Thought and work ethic
The notion of "Christian finance" refers to banking and financial activities which came into existence several centuries ago. Whether through the activities of the Knights Templar (12th century), the Mounts of Piety (which appeared in 1462), or the Apostolic Chamber attached directly to the Vatican, a number of operations of a banking nature (money lending, guarantees, etc.) or a financial nature (issuance of securities, investments) are attested, despite the Church's prohibition of usury and its distrust of exchange activities (as opposed to production activities).
Francisco de Vitoria, a disciple of Thomas Aquinas and a Catholic thinker who studied the issue regarding the human rights of colonized natives, is recognized by the United Nations as a father of international law, and now also by historians of economics and democracy as a leading light for the West's democracy and rapid economic development. Joseph Schumpeter, an economist of the 20th century, referring to the Scholastics, wrote, "it is they who come nearer than does any other group to having been the 'founders' of scientific economics." Other economists and historians, such as Raymond de Roover, Marjorie Grice-Hutchinson, and Alejandro Chafuen, have also made similar statements.
The Protestant concept of God and man allows believers to use all their God-given faculties, including the power of reason. That means that they are allowed to explore God's creation and, according to Genesis 2:15, make use of it in a responsible and sustainable way. Thus a cultural climate was created that greatly enhanced the development of the humanities and the sciences. Another consequence of the Protestant understanding of man is that the believers, in gratitude for their election and redemption in Christ, are to follow God's commandments. Industry, frugality, calling, discipline, and a strong sense of responsibility are at the heart of their moral code. In particular, John Calvin rejected luxury. Therefore, craftsmen, industrialists, and other businessmen were able to reinvest the greater part of their profits in the most efficient machinery and the most modern production methods that were based on progress in the sciences and technology. As a result, productivity grew, which led to increased profits and enabled employers to pay higher wages. In this way, the economy, the sciences, and technology reinforced each other. The chance to participate in the economic success of technological inventions was a strong incentive to both inventors and investors. The Protestant work ethic was an important force behind the unplanned and uncoordinated mass action that influenced the development of capitalism and the Industrial Revolution. This idea is also known as the "Protestant ethic thesis". In the book The Central Liberal Truth: How Politics Can Change a Culture and Save It from Itself, Lawrence E. Harrison argues that Protestantism, along with Confucianism and Judaism, has been more successful in promoting progress in culture and society, owing to virtues such as education, achievement, work ethic, merit, frugality, and honesty.
Some mainline Protestant denominations, such as Episcopalians, Presbyterians, and Congregationalists, tend to be considerably wealthier and better educated (having a high proportion of graduate and post-graduate degrees per capita) than most other religious groups in America, and are disproportionately represented in the upper reaches of American business, law, and politics, especially the Republican Party. Many of the nation's wealthiest and most affluent families, such as the Vanderbilts, Astors, Rockefellers, Du Ponts, Roosevelts, Forbeses, Whitneys, Mellons, Morgans, and Harrimans, are Mainline Protestant families. The Boston Brahmins, who were regarded as the nation's social and cultural elites, were often associated with the American upper class, Harvard University, and the Episcopal Church. The Old Philadelphians were often associated with the American upper class, the Episcopal Church, and Quakerism. These families were influential in the development and leadership of arts, culture, science, medicine, law, politics, industry, and trade in the United States.
The rise of Protestantism in the 16th century contributed to the development of banking in Northern Europe. In the late 18th century, Protestant merchant families began to move into banking to an increasing degree, especially in trading countries such as the United Kingdom (Barings), Germany (Schroders, Berenbergs), and the Netherlands (Hope & Co., Gülcher & Mulder). At the same time, new types of financial activities broadened the scope of banking far beyond its origins. One school of thought credits Calvinism with setting the stage for the later development of capitalism in Northern Europe. The Morgan family is an American Episcopal Church family and banking dynasty, which became prominent in the U.S. and throughout the world in the late 19th and early 20th centuries. Catholic banking families include the House of Medici, the Welser family, the Fugger family, and the Simonetti family.
Some academics have theorized that Lutheranism, the dominant traditional religion of the Nordic countries, had an effect on the development of social democracy there and the Nordic model. Schröder posits that Lutheranism promoted the idea of a nationwide community of believers and led to increased state involvement in economic and social life, allowing for nationwide welfare solidarity and economic co-ordination. Esa Mangeloja says that the revival movements helped to pave the way for the modern Finnish welfare state. During that process, the church lost some of its most important social responsibilities (health care, education, and social work) as these tasks were assumed by the secular Finnish state. Pauli Kettunen presents the Nordic model as the outcome of a sort of mythical "Lutheran peasant enlightenment", portraying it as the result of a "secularized Lutheranism"; however, mainstream academic discourse on the subject focuses on "historical specificity", with the centralized structure of the Lutheran church being but one aspect of the cultural values and state structures that led to the development of the welfare state in Scandinavia.
Festivals
Roman Catholics, Anglicans, Eastern Christians, and traditional Protestant communities frame worship around the liturgical year. The liturgical cycle divides the year into a series of seasons, each with its own theological emphases and modes of prayer, which can be signified by different ways of decorating churches, colours of paraments and vestments for clergy, scriptural readings, themes for preaching, and even different traditions and practices often observed personally or in the home.
Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church, and Eastern Christians use analogous calendars based on the cycle of their respective rites. Calendars set aside holy days, such as solemnities which commemorate an event in the life of Jesus or Mary, the saints, periods of fasting such as Lent, and other pious events such as memoria or lesser festivals commemorating saints. Christian groups that do not follow a liturgical tradition often retain certain celebrations, such as Christmas, Easter and Pentecost: these are the celebrations of Christ's birth, resurrection and the descent of the Holy Spirit upon the Church, respectively. A few denominations make no use of a liturgical calendar.
Christmas (or Feast of the Nativity) is an annual festival commemorating the birth of Jesus Christ, observed as a religious and cultural celebration among billions of people around the world. Christmas Day is a public holiday in many of the world's nations, is celebrated religiously by a majority of Christians, as well as culturally by many non-Christians, and forms an integral part of the holiday season centered around it. Popular modern customs of the holiday include gift giving; completing an Advent calendar or Advent wreath; Christmas music and caroling; viewing a Nativity play; an exchange of Christmas cards; church services; a special meal; and the display of various Christmas decorations, including Christmas trees, Christmas lights, nativity scenes, garlands, wreaths, mistletoe, and holly. In addition, several closely related and often interchangeable figures, known as Santa Claus, Father Christmas, Saint Nicholas, and Christkind, are associated with bringing gifts to children during the Christmas season and have their own body of traditions and lore.
Easter, or Resurrection Sunday, is a festival and holiday commemorating the resurrection of Jesus from the dead, described in the New Testament as having occurred on the third day after his burial following his crucifixion by the Romans at Calvary c. 30 AD. Easter customs vary across the Christian world, and include sunrise services, exclaiming the Paschal greeting, clipping the church, and decorating Easter eggs (symbols of the empty tomb). The Easter lily, a symbol of the resurrection, traditionally decorates the chancel area of churches on this day and for the rest of Eastertide. Additional customs that have become associated with Easter and are observed by both Christians and some non-Christians include egg hunting, the Easter Bunny, and Easter parades. There are also various traditional Easter foods that vary regionally.
Religious life
Roman Catholic theology enumerates seven sacraments: Baptism (Christening), Confirmation (Chrismation), Eucharist (Communion), Penance (Reconciliation), Anointing of the Sick (before the Second Vatican Council generally called Extreme Unction), Matrimony, and Holy Orders.
In Christian belief and practice, a sacrament is a rite, instituted by Christ, that mediates grace, constituting a sacred mystery. The term is derived from the Latin word sacramentum, which was used to translate the Greek word for mystery. Views concerning both what rites are sacramental, and what it means for an act to be a sacrament vary among Christian denominations and traditions.
The most conventional functional definition of a sacrament is that it is an outward sign, instituted by Christ, that conveys an inward, spiritual grace through Christ. The two most widely accepted sacraments are Baptism and the Eucharist (or Holy Communion); however, the majority of Christians also recognize five additional sacraments: Confirmation (Chrismation in the Orthodox tradition), Holy Orders (ordination), Penance (or Confession), Anointing of the Sick, and Matrimony (see Christian views on marriage).
Taken together, these are the Seven Sacraments as recognized by churches in the High Church tradition—notably Roman Catholic, Eastern Orthodox, Oriental Orthodox, Independent Catholic, Old Catholic, many Anglicans, and some Lutherans. Most other denominations and traditions typically affirm only Baptism and Eucharist as sacraments, while some Protestant groups, such as the Quakers, reject sacramental theology. Christian denominations, such as Baptists, which believe these rites do not communicate grace, prefer to call Baptism and Holy Communion ordinances rather than sacraments.
Today, most Christian denominations are neutral about religious male circumcision, neither requiring it nor forbidding it. The practice is customary among the Coptic, Ethiopian, and Eritrean Orthodox Churches, and also some other African churches, which require that their male members undergo circumcision. Even though most Christian denominations do not require male circumcision, the practice is widespread in many predominantly Christian countries and many Christian communities. Christian communities in Africa, the Anglosphere countries, the Philippines, the Middle East, South Korea, and Oceania have high circumcision rates, while Christian communities in Europe and South America have low circumcision rates. The United States and the Philippines are the largest majority-Christian countries in the world to extensively practice circumcision. Scholar Heather L. Armstrong writes that, as of 2021, about half of Christian males worldwide are circumcised, with most of them located in Africa, Anglosphere countries, and the Philippines.
Worship can be varied for special events like baptisms or weddings in the service or significant feast days. In the early church, Christians and those yet to complete initiation would separate for the Eucharistic part of the worship. In many churches today, adults and children will separate for all or some of the service to receive age-appropriate teaching. Such children's worship is often called Sunday school or Sabbath school (Sunday schools are often held before rather than during services).
Family life
Christian culture puts notable emphasis on the family, and according to the work of scholars Max Weber, Alan Macfarlane, Steven Ozment, Jack Goody, and Peter Laslett, the huge transformation that led to modern marriage in Western democracies was "fueled by the religio-cultural value system provided by elements of Judaism, early Christianity, Roman Catholic canon law and the Protestant Reformation". Historically, extended families were the basic family unit in Catholic culture and countries.
Most Christian denominations practice infant baptism to enter children into the faith. Some form of confirmation ritual occurs when the child has reached the age of reason and voluntarily accepts the religion. Ritual circumcision is used to mark Coptic Christian and Ethiopian Orthodox Christian infant males as belonging to the faith. During the early period of capitalism, the rise of a large, commercial middle class, mainly in the Protestant countries of the Netherlands and England, brought about a new family ideology centred around the upbringing of children. Puritanism stressed the importance of individual salvation and concern for the spiritual welfare of children. It became widely recognized that children possess rights on their own behalf. This included the rights of poor children to sustenance, membership in a community, education, and job training. The Poor Relief Acts in Elizabethan England put responsibility on each parish to care for all the poor children in the area. Prior to the 20th century, three major branches of Christianity—Catholicism, Orthodoxy and Protestantism—as well as leading Protestant reformers Martin Luther and John Calvin generally held a critical perspective of birth control.
The Church of Jesus Christ of Latter-day Saints puts notable emphasis on the family, and the distinctive concept of a united family which lives and progresses forever is at the core of Latter-day Saint doctrine. Church members are encouraged to marry and have children, and as a result, Latter-day Saint families tend to be larger than average. All sexual activity outside of marriage is considered a serious sin. All homosexual activity is considered sinful and same-sex marriages are not performed or supported by the LDS Church. Latter-day Saint fathers who hold the priesthood typically name and bless their children shortly after birth to formally give the child a name and generate a church record for them. Mormons tend to be very family-oriented and have strong connections across generations and with extended family, reflective of their belief that families can be sealed together beyond death. Mormons also have a strict law of chastity, requiring abstention from sexual relations outside heterosexual marriage and fidelity within marriage.
A 2019 Pew Research Center study on religion and living arrangements around the world found that Christians live in somewhat smaller households, on average, than non-Christians (4.5 vs. 5.1 members). 34% of the world's Christian population live in two-parent families with minor children, while 29% live in households with extended families, 11% live as couples without other family members, 9% live in households with at least one child over the age of 18 and one or two parents, 7% live alone, and 6% live in single-parent households. Christians in the Asia-Pacific region, Latin America and the Caribbean, the Middle East and North Africa, and Sub-Saharan Africa overwhelmingly live in extended or two-parent families with minor children, whereas Christians in Europe and North America more often live alone or as couples without other family members.
Cuisine
In mainstream Nicene Christianity, there is no restriction on the kinds of animals that can be eaten. This practice stems from Peter's vision of a sheet with animals, in which Saint Peter "sees a sheet containing animals of every description lowered from the sky." Nonetheless, the New Testament does give a few guidelines about the consumption of meat that are practiced by the Christian Church today; one of these is not consuming food knowingly offered to pagan idols, a conviction that the early Church Fathers, such as Clement of Alexandria and Origen, preached. In addition, Christians traditionally bless any food before eating it with a mealtime prayer (grace), as a way of thanking God for the meal.
Slaughtering animals for food is often done without the trinitarian formula, although the Armenian Apostolic Church, among other Orthodox Christians, has rituals that "display obvious links with shechitah, Jewish kosher slaughter." The Bible, states Norman Geisler, stipulates that one "abstain from food sacrificed to idols, from blood, from meat of strangled animals". In the New Testament, Paul of Tarsus notes that some devout Christians may wish to abstain from consuming meat if it causes "my brother to stumble" in his faith with God. As such, some Christian monks, such as the Trappists, have adopted a policy of Christian vegetarianism. In addition, Christians of the Seventh-day Adventist tradition generally "avoid eating meat and highly spiced food". Christians in the Anglican, Catholic, Lutheran, Methodist, and Orthodox denominations traditionally observe a meat-free day and meat-free seasons, especially during the liturgical season of Lent.
Some Christian denominations condone the moderate drinking of alcohol (moderationism), such as Anglicans, Catholics, Lutherans, and the Orthodox, although others, such as Adventists, Baptists, Methodists, and Pentecostals either abstain from or prohibit the consumption of alcohol (abstentionism and prohibitionism). However, all Christian Churches, in view of the biblical position on the issue, universally condemn drunkenness as sinful.
Christian cooking combines the food of many cultures in which Christians have lived. A special Christmas family meal is traditionally an important part of the holiday's celebration, and the food that is served varies greatly from country to country. Some regions, such as Sicily, have special meals for Christmas Eve, when 12 kinds of fish are served. In the United Kingdom and countries influenced by its traditions, a standard Christmas meal includes turkey, goose or other large bird, gravy, potatoes, vegetables, and sometimes bread and cider. Special desserts are also prepared, such as Christmas pudding, mince pies, fruit cake and Yule log.
Cleanliness
The Bible has many rituals of purification relating to menstruation, childbirth, sexual relations, nocturnal emission, unusual bodily fluids, skin disease, death, and animal sacrifices. The Ethiopian Orthodox Tewahedo Church prescribes several kinds of hand washing, for example after leaving the latrine, lavatory, or bathhouse, or before prayer, or after eating a meal. Women in the Ethiopian Orthodox Tewahedo Church are prohibited from entering the church temple during menses, and men do not enter a church the day after they have had intercourse with their wives.
Christianity has always placed a strong emphasis on hygiene. Despite early Christian clergy's denunciation of the mixed bathing style of Roman pools, as well as of the pagan custom of women bathing naked in front of men, this did not stop the Church from urging its followers to go to public baths for bathing, which, according to the Church Fathers Clement of Alexandria and Tertullian, contributed to hygiene and good health. The Church also built public bathing facilities, separate for each sex, near monasteries and pilgrimage sites; the popes also situated baths within church basilicas and monasteries from the early Middle Ages. Pope Gregory the Great urged his followers to value bathing as a bodily need.
Great bath houses were built in Byzantine centers such as Constantinople and Antioch, and the popes provided the Romans with bathing through the diaconia, the private Lateran baths, and a myriad of monastic bath houses functioning in the 8th and 9th centuries. The popes maintained baths in their residences, and bath houses including hot baths were incorporated into Christian church buildings and monasteries; these were known as "charity baths" because they served both the clerics and needy poor people. Public bathing was common in the larger towns and cities of medieval Christendom, such as Paris, Regensburg, and Naples. The rules of Catholic religious orders such as the Augustinians and Benedictines contained provisions for ritual purification, and, inspired by Benedict of Nursia's encouragement of therapeutic bathing, Benedictine monks played a role in the development and promotion of spas. Protestant Christianity also played a prominent role in the development of the British spas.
Contrary to popular belief, bathing and sanitation were not lost in Europe with the collapse of the Roman Empire. Soapmaking first became an established trade during the so-called "Dark Ages". The Romans had used scented oils (mostly from Egypt), among other alternatives. By the 15th century, the manufacture of soap in Christendom had become virtually industrialized, with sources in Antwerp, Castile, Marseille, Naples, and Venice. By the mid-19th century, the English urbanised middle classes had formed an ideology of cleanliness that ranked alongside typical Victorian concepts such as Christianity, respectability, and social progress. The Salvation Army has promoted the practice of personal hygiene, in part by providing personal hygiene products.
The use of water in many Christian countries is due in part to the biblical toilet etiquette which encourages washing after all instances of defecation. The bidet is common in predominantly Catholic countries where water is considered essential for anal cleansing, and in some traditionally Orthodox and Protestant countries such as Greece and Finland respectively, where bidet showers are common.
Christian pop culture
Christian pop culture (or Christian popular culture) is the vernacular Christian culture that prevails in any given society. The content of popular culture is determined by the daily interactions, needs and desires, and cultural 'movements' that make up the everyday lives of Christians. It can include any number of practices, including those pertaining to cooking, clothing, mass media and the many facets of entertainment such as sports and literature.
In modern urban mass societies, Christian pop culture has been crucially shaped by the development of industrial mass production, the introduction of new technologies of sound and image broadcasting and recording, and the growth of mass media industries: film, television, radio, video games, and book publishing, as well as the print and electronic news media.
Items of Christian pop culture most typically appeal to a broad spectrum of Christians. Some argue that broad-appeal items dominate Christian pop culture because profit-making Christian companies that produce and sell items of Christian pop culture attempt to maximize their profits by emphasizing broadly appealing items. And yet the situation is more complex. To take the example of Christian pop music, it is not the case that the music industry can impose any product they wish. In fact, highly popular types of music have often first been elaborated in small, counter-cultural circles such as Christian punk rock or Christian rap.
Because the Christian pop industry is significantly smaller than the secular pop industry, a few organizations and companies dominate the market and have a strong influence over what is dominant within the industry.
Another influence that distinguishes Christian pop culture from secular pop culture is that of megachurches. Christian pop culture reflects the current popularity of megachurches, but also the uniting of smaller community churches. The culture has been led by Hillsong Church in particular, which operates in many countries, including Australia, France, and the United Kingdom.
Film industry
The Christian film industry is an umbrella term for films containing a Christian-themed message or moral, produced by Christian filmmakers for a Christian audience, as well as films produced by non-Christians with Christian audiences in mind. They are often interdenominational films, but can also be films targeting a specific denomination of Christianity. Popular mainstream studio productions of films with strong Christian messages or biblical stories, like Ben-Hur, The Ten Commandments, The Passion of the Christ, The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, The Book of Eli, Machine Gun Preacher, The Star, The Flying House, Superbook, and Silence, are not specifically part of the Christian film industry, being more agnostic about their audiences' religious beliefs. These films generally also have much higher budgets and production values and better-known film stars, and are received more favourably by film critics.
The 2014 film God's Not Dead is one of the most successful independent Christian films of all time, and the 2015 film War Room became a box-office number-one film.
Televangelism
Televangelism (tele- "distance" and "evangelism", meaning "ministry", sometimes called teleministry) is the use of media, specifically radio and television, to communicate Christianity. Televangelists are ministers, whether official or self-proclaimed, who devote a large portion of their ministry to television broadcasting. Some televangelists are also regular pastors or ministers in their own places of worship (often a megachurch), but the majority of their followers come from TV and radio audiences. Others do not have a conventional congregation, and work primarily through television. The term is also used derisively by critics as an insinuation of aggrandizement by such ministers.
Televangelism began as a uniquely American phenomenon, resulting from a largely deregulated media where access to television networks and cable TV is open to virtually anyone who can afford it, combined with a large Christian population that is able to provide the necessary funding. It became especially popular among Evangelical Protestant audiences, whether independent or organized around Christian denominations. However, the increasing globalisation of broadcasting has enabled some American televangelists to reach a wider audience through international broadcast networks, including some that are specifically Christian in nature, such as Trinity Broadcasting Network (the world's largest religious television network), The God Channel, Christian Broadcasting Network, Australian Christian Channel, SAT-7 and Emmanuel TV. Domestically produced televangelism is increasingly present in some other nations such as Brazil. Christian television may include broadcast television or cable television channels whose entire broadcast programming schedule is television programs directly related to Christianity or shows including comedy, action, drama, reality, dramatizations and variety shows, movies, and mini-series; which are part of the overall programming of a general-interest television station.
Some countries have more regulated media with either general restrictions on access or specific rules regarding religious broadcasting. In such countries, religious programming is typically produced by TV companies (sometimes as a regulatory or public service requirement) rather than private interest groups.
Christianophile
A Christianophile is a person who expresses a strong interest in or appreciation for Christianity, Christian culture, Christian history, Christendom or the Christian people. That affinity may include Christianity itself or its history, philosophy, theology, music, literature, art, architecture, festivals etc. The term "Christianophile" can be contrasted with Christianophobe, someone who shows hatred or other forms of negative feelings towards all that is Christian.
Christianity and Christian culture have a generally positive image in a number of non-Christian societies such as Hong Kong, Macau, India, Japan, Lebanon, Singapore, South Korea, and Taiwan. In a number of traditionally Christian societies in Europe, there has been a revival of what some scholars have called "Christianophilia", a sympathy for Christianity and its culture, with politicians increasingly speaking of the "Christian roots and heritage" of their countries; this includes Austria, France, Hungary, Italy, Poland, Russia, Serbia, Slovakia, and the United Kingdom.
G. K. Chesterton has been called a Christianophile; he wrote in the early 20th century about the benefits of Christianity. Famous for his use of paradox, Chesterton explained that while Christianity had the most mysteries, it was the most practical religion. He pointed to the advance of Christian civilizations as proof of its practicality. T. S. Eliot showed a strong affinity to Christian culture; according to him, the common tradition of Christianity and its culture has made Europe what it is, and the culture of Europe is rooted in Christianity. Winston Churchill showed a strong affinity to Protestant culture because he felt it "a step nearer Reason". Historian Geoffrey Blainey, in his book A Short History of Christianity, discussed the role of Christianity in civilization and the extent of Christian influence on the world. Some scholars criticize the concept of Eurocentrism as a "Christianophile myth" because it has favored the components (mainly Christianity) of European civilization and allowed Eurocentrists to brand diverging societies and cultures as "uncivilized".
See also
Aristotelianism
Astrotheology
Assyrian culture
Celtic Christianity
Christianese
Christian influences in Islam
Christian values
Culture of The Church of Jesus Christ of Latter-day Saints
Cynicism (philosophy)
Gnosticism
International law
Judeo-Christian values
Multiculturalism and Christianity
Natural law
Neoplatonism and Christianity
Platonism
Protestant culture
Role of Christianity in civilization
Stoicism
Syriac Christianity
The night of churches
References
Further reading
Eva Baer. Ayyubid metalwork with Christian images. BRILL, 1989
Culture
Slavophilia
Slavophilia was a movement originating in the 19th century that wanted the Russian Empire to be developed on the basis of values and institutions derived from Russia's early history. Slavophiles opposed the influences of Western Europe in Russia. Depending on the historical context, the opposite of Slavophilia could be seen as Slavophobia (a fear of Slavic culture) or also what some Russian intellectuals (such as Ivan Aksakov) called zapadnichestvo (westernism).
History
Slavophilia, as an intellectual movement, was developed in 19th-century Russia. In a sense, there was not one but many Slavophile movements or many branches of the same movement. Some were left-wing and noted that progressive ideas such as democracy were intrinsic to the Russian experience, as proved by what they considered to be the rough democracy of medieval Novgorod. Some were right-wing and pointed to the centuries-old tradition of the autocratic tsar as being the essence of the Russian nature.
The Slavophiles were determined to protect what they believed were unique Russian traditions and culture. In doing so, they rejected individualism. The role of the Russian Orthodox Church was seen by them as more significant than the role of the state. Socialism was opposed by Slavophiles as an alien thought, and Russian mysticism was preferred over "Western rationalism". Rural life was praised by the movement, which opposed industrialization and urban development, and protection of the "mir" was seen as an important measure to prevent the growth of the working class.
The movement originated in Moscow in the 1830s. Drawing on the works of Greek Church Fathers, the philosopher Aleksey Khomyakov (1804–60) and his devoutly Orthodox colleagues elaborated a traditionalistic doctrine that claimed Russia has its own distinct way, which should avoid imitating "Western" institutions. The Russian Slavophiles criticised the modernisation of Peter the Great and Catherine the Great, and some of them even adopted traditional pre-Petrine dress.
Andrei Okara argues that the 19th-century classification of social thought into three groups, the Westernizers, the Slavophiles and the Conservatives, also fits well into the realities of the political and social situation in modern Russia. According to him, examples of modern-day Slavophiles include the Communist Party of the Russian Federation, Dmitry Rogozin and Sergei Glazyev.
Doctrine
The doctrines of Aleksey Khomyakov, Ivan Kireyevsky (1806–56), Konstantin Aksakov (1817–60), and other Slavophiles had a deep impact on Russian culture, including the Russian Revival school of architecture, The Five of Russian composers, the novelist Nikolai Gogol, the poet Fyodor Tyutchev, and the lexicographer Vladimir Dahl. Their struggle for purity of the Russian language had something in common with the ascetic views of Leo Tolstoy. The doctrine of sobornost, a term for organic unity and integration, was coined by Kireyevsky and Khomyakov. It was to underline the need for cooperation between people, at the expense of individualism, on the basis that opposing groups focus on what is common between them. According to Khomyakov, the Orthodox Church organically combines in itself the principles of freedom and unity, while the Catholic Church postulates unity without freedom and, in Protestantism, on the contrary, freedom exists without unity. In the Russian society of their time, the Slavophiles saw the sobornost ideal in the peasant obshchina. The latter recognized the primacy of collectivity but guaranteed the integrity and welfare of the individual within that collective.
In the sphere of practical politics, Slavophilism manifested itself as a pan-Slavic movement for the unification of all Slavic people under the leadership of the Russian tsar and for the independence of the Balkan Slavs from Ottoman rule. The Russo-Turkish War of 1877–78 is usually considered the high point of this militant Slavophilism, as expounded by the charismatic commander Mikhail Skobelev. The attitude towards other nations with Slavic origins varied, depending on the group involved. Classical Slavophiles believed that "Slavdom", the common identity that the Slavophile movement attributed to all people of Slavic origin, was based on the Eastern Orthodox religion.
The Russian Empire, besides containing Russians, ruled over millions of Ukrainians, Poles, and Belarusians, who had their own national identities, traditions, and religions. Towards Ukrainians and Belarusians, the Slavophiles developed the view that they were part of the same "Great Russian" nation, Belarusians being the "White Russians" and Ukrainians the "Little Russians". Slavophile thinkers such as Mikhail Katkov believed that both nations should be ruled under Russian leadership and were an essential part of the Russian state. At the same time, they denied the separate cultural identity of the Ukrainian and Belarusian peoples, believing that their national, linguistic, and literary aspirations were a result of a "Polish intrigue" to separate them from Russians. Other Slavophiles, like Ivan Aksakov, recognized the right of Ukrainians to use the Ukrainian language but saw it as completely unnecessary and harmful.
Aksakov, however, did see some practical use for the "Malorussian" language: it would be beneficial in the struggle against the "Polish civilizational element in the western provinces".
Besides Ukrainians and Belarusians, the Russian Empire also included Poles, whose country had disappeared after being partitioned by three neighboring states, including Russia, which after the decisions of the Congress of Vienna expanded into more Polish-inhabited territories. Poles proved to be a problem for the ideology of Slavophilism. The very name Slavophiles indicated that the characteristics of the Slavs were based on their ethnicity, but at the same time, Slavophiles believed that Orthodoxy equaled Slavdom. This belief was belied by the very existence of Poles within the Russian Empire, who, while having Slavic origins, were also deeply Roman Catholic, the Catholic faith forming one of the core values of Polish national identity. Also, while Slavophiles praised the leadership of Russia over other nations of Slavic origin, the Poles' very identity was based on Western European culture and values, and resistance to Russia was seen by them as resistance to something representing an alien way of life. As a result, Slavophiles were particularly hostile to the Polish nation, often emotionally attacking it in their writings.
When the Polish uprising of 1863 started, Slavophiles used anti-Polish sentiment to create feelings of national unity in the Russian people, and the idea of a cultural union of all Slavs was abandoned. With that, Poland became firmly established in Slavophile eyes as a symbol of the Catholicism and Western Europe they detested, and as Poles were never assimilated within the Russian Empire and constantly resisted the Russian occupation of their country, Slavophiles eventually came to concede that the annexation of Poland had been a mistake, since the Polish nation could not be Russified.
"After the struggle with Poles, Slavophiles expressed their belief, that notwithstanding the goal of conquering Constantinople, the future conflict would be between the "Teutonic race" (Germans), and "Slavs", and the movement turned into Germanophobia.
Most Slavophiles were liberals and ardently supported the emancipation of serfs, which was finally realized in the emancipation reform of 1861. Press censorship, serfdom and capital punishment were viewed as baneful influences of Western Europe. Their political ideal was a parliamentary monarchy, as represented by the medieval Zemsky Sobors.
After serfdom
After serfdom was abolished in Russia and the end of the uprising in Poland, new Slavophile thinkers appeared in the 1870s and 1880s, represented by scholars such as Nikolay Danilevsky, who expounded a view of history as circular, and Konstantin Leontiev.
Danilevsky promoted autocracy and imperialistic expansion as part of Russian national interest. Leontiev believed in a police state to prevent European influences from reaching Russia.
Pochvennichestvo
Later writers Fyodor Dostoyevsky, Konstantin Leontiev, and Nikolay Danilevsky developed a peculiar conservative version of Slavophilism, Pochvennichestvo (from the Russian word for soil). The teaching, as articulated by Konstantin Pobedonostsev (Ober-Procurator of the Russian Orthodox Church), was adopted as the official tsarist ideology during the reigns of Alexander III and Nicholas II. Even after the Russian Revolution of 1917, it was further developed by émigré religious philosophers such as Ivan Ilyin (1883–1954).
Many Slavophiles influenced prominent Cold War thinkers such as George F. Kennan, instilling in them a love for the Russian Empire as opposed to the Soviet Union. That, in turn, influenced their foreign policy ideas, such as Kennan's belief that the revival of the Russian Orthodox Patriarchate, in 1943, would lead to the reform or overthrow of Joseph Stalin's rule.
See also
Pan-Slavism
List of 19th-century Russian Slavophiles
Slavophobia
Russian philosophy
Russification
Romantic Nationalism
Sarmatianism
References
External links
An Interpretation of Slavophilism
Anti-Catholicism
Slavic culture
Culture of Russia
Russian philosophy
Political theories
Admiration of foreign cultures
Russian Revival architecture
Russian nationalism
Culture of Serbia
Culture of Poland
Hegemonic masculinity
In gender studies, hegemonic masculinity is part of R. W. Connell's gender order theory, which recognizes multiple masculinities that vary across time, society, culture, and the individual. Hegemonic masculinity is defined as a practice that legitimizes men's dominant position in society and justifies the subordination of the common male population, of women, and of other marginalized ways of being a man. Conceptually, hegemonic masculinity proposes to explain how and why men maintain dominant social roles over women and other gender identities which are perceived as "feminine" in a given society.
The conceptual beginnings of hegemonic masculinity represented the culturally idealized form of manhood that was socially and hierarchically exclusive and concerned with bread-winning; that was anxiety-provoking and differentiated (internally and hierarchically); that was brutal and violent, pseudo-natural and tough, psychologically contradictory, and thus crisis-prone; economically rich and socially sustained. However, many sociologists criticized that definition of hegemonic masculinity as a fixed character-type, which is analytically limited, because it excludes the complexity of different, and competing, forms of masculinity. Consequently, hegemonic masculinity was reformulated to include gender hierarchy, the geography of masculine configurations, the processes of social embodiment, and the psycho-social dynamics of the varieties of masculinity.
Proponents of the concept of hegemonic masculinity argue that it is conceptually useful for understanding gender relations, and is applicable to life-span development, education, criminology, the representations of masculinity in the mass communications media, the health of men and women, and the functional structure of organizations. Critics argue that hegemonic masculinity is heteronormative, is not self-reproducing, ignores positive aspects of masculinity, relies on a flawed underlying concept of masculinity, or is too ambiguous to have practical application.
Description
Terry Kupers of The Wright Institute describes the concept of hegemonic masculinity in these terms:
History
"Due to social inequalities in Australian high schools, Sociologist Connell introduced the Hegemonic masculinity idea, that takes a look at male roles and their characteristics." These beginnings were organized into an article which critiqued the "male sex role" literature and proposed a model of multiple masculinities and power relations. This model was integrated into a systematic sociological theory of gender. The resulting six pages in Gender and Power by R. W. Connell on "hegemonic masculinity and emphasized femininity" became the most cited source for the concept of hegemonic masculinity. This concept draws its theoretical roots from the Gramscian term hegemony as it was used to understand the stabilization of class relations. The idea was then transferred to the problem of gender relations.
Hegemonic masculinity draws some of its historical roots from both the fields of social psychology and sociology which contributed to the literature about the male sex role that had begun to recognize the social nature of masculinity and the possibilities of change in men's conduct. This literature preceded the Women's Liberation Movement and feminist theories of patriarchy which also played a strong role in shaping the concept of hegemonic masculinity. The core concepts of power and difference were found in the gay liberation movement which had not only sought to analyse the oppression of men but also oppression by men. This idea of a hierarchy of masculinities has since persisted and strongly influenced the reformulation of the concept.
Empirical social research also played an important role as a growing body of field studies documented local gender hierarchies and local cultures of masculinities in schools, male-dominated workplaces, and village communities. Finally, the concept was influenced by psychoanalysis. Sigmund Freud produced the first analytic biographies of men and showed how adult personality was a system under tension and the psychoanalyst Robert J. Stoller popularized the concept of gender identity and mapped its variation in boys' development.
Original framework
The particular normative form of masculinity that is the most honoured way of being a man, which requires all other men to position themselves in relation to it, is known as hegemonic masculinity. In Western society, the dominant form of masculinity or the cultural ideal of manhood was primarily reflective of white, heterosexual, largely middle-class males. The ideals of manhood espoused by the dominant masculinity suggested a number of characteristics that men are encouraged to internalize into their own personal codes and which form the basis for masculine scripts of behaviour. These characteristics include: violence and aggression, stoicism (emotional restraint), courage, toughness, physical strength, athleticism, risk-taking, adventure and thrill-seeking, competitiveness, and achievement and success. Hegemonic masculinity is not completely dominant, however, as it only exists in relation to non-hegemonic, subordinated forms of masculinity. The most salient example of this approach in contemporary European and American society is the dominance of heterosexual men and the subordination of homosexual men. This was manifested in political and cultural exclusion, legal violence, street violence, and economic discrimination. Gay masculinity was the most conspicuous subordinated masculinity during this period of time, but not the only one. Heterosexual men and boys with effeminate characteristics ran the risk of being scorned as well.
Hegemonic masculinity is neither normative in the numerical sense, as only a small minority of men may enact it, nor in an actual sense, as the cultural ideal of masculinity is often a fantasy figure, such as John Wayne or John Rambo. It also affects the construct and perception of the idealised male body from an exclusively Western perspective. Hegemonic masculinity may not even be the commonest pattern in the everyday lives of men. Rather, hegemony can operate through the formation of exemplars of masculinity, symbols that have cultural authority despite the fact that most men and boys cannot fully live up to them. Hegemonic masculinity imposes an ideal set of traits which stipulate that a man can never be unfeminine enough. Thus, fully achieving hegemonic masculinity becomes an unattainable ideal.
Complicity with the aforementioned masculine characteristics was another key feature of the original framework of hegemonic masculinity. Since men benefit from the patriarchal dividend, they generally gain from the overall subordination of women. However, complicity is not so easily defined as pure subordination, since marriage, fatherhood, and community life often involve extensive compromises with women rather than simple domination over them. In this way, hegemony is not necessarily gained through violent or forceful means, but is achieved through culture, institutions, and persuasion.
The interplay of gender with class and race creates more extensive relationships among masculinities. For example, new information technology has redefined middle-class masculinities and working-class masculinities in different ways. In a racial context, hegemonic masculinity among whites sustains the institutional oppression and physical terror that have framed the making of masculinities in black communities. It has been suggested that historically suppressed groups like inner city African-American males exhibit the more violent standards of hegemonic masculinity in response to their own subordination and lack of control. This idea of marginalization is always relative to what is allowed by the dominant group, therefore creating subsets of hegemonic masculinity based on existing social hierarchies.
Criticisms
As the earliest model of this concept grew, so did the scrutiny and criticisms surrounding it. The following principal criticisms have been identified since debate about the concept began in the early 1990s.
Underlying concept of masculinity
Some have asserted that the fundamental idea of masculinity is a flawed concept. Jeff Hearn and Alan Peterson have each criticized it: Hearn suggested that the concept of masculinity minimizes the underlying issue of male dominance, while Peterson argued that the concept creates a false idea of men and furthers the separation of the genders. The concept of masculinity is said to rest logically on a dichotomization of sex (biological) and gender (cultural) and thus marginalizes or naturalizes the body. Harry Brod observes that there is a tendency in the field of men's studies to proceed as if women were not a relevant part of the analysis and therefore to analyse masculinities by looking only at men and relations among men. Therefore, a consistently relational approach to gender is being called upon.
Ambiguity and overlap
Early criticisms of the concept raised the question of who actually represents hegemonic masculinity. Many men who hold great social power do not embody other aspects of ideal masculinity. Patricia Yancey Martin criticizes the concept for leading to inconsistent applications, sometimes referring to a fixed type and other times to whatever the dominant form is. Margaret Wetherell and Nigel Edley contend this concept fails to specify what conformity to hegemonic masculinity actually looks like in practice. Similarly, Stephen M. Whitehead suggests there is confusion over who actually is a hegemonically masculine man. Inspired by Gramsci's differentiation between hegemony as a form of ideological consent and dominance as an expression of conflict, Christian Groes-Green has argued that when hegemonic masculinities are challenged in a society, dominant masculinities emerge based on bodily powers, such as violence and sexuality, rather than on economic and social powers. Through examples from his fieldwork among youth in Maputo, Mozambique, he shows that this change is related to social polarization, new class identities, and the undermining of breadwinner roles and ideologies in a neoliberal economy.
The problem of realness
It has also been argued that the concept of hegemonic masculinity does not adequately describe a realness of power. Øystein Gullvåg Holter argues that the concept constructs power from the direct experience of women rather than from the structural basis of women's subordination. Holter believes in distinguishing between patriarchy and gender and argues further that it is a mistake to treat a hierarchy of masculinities constructed within gender relations as logically continuous with the patriarchal subordination of women. In response to the adverse connotations surrounding the concept, Richard Collier remarks that hegemonic masculinity is solely associated with negative characteristics that depict men as unemotional (see affect display), aggressive, independent, and non-nurturing without recognizing positive behaviours such as bringing home a wage or being a father.
The masculine subject
Several authors have argued that the concept of hegemonic masculinity is based on an unsatisfactory theory of the subject because it does not rely enough upon discourses of masculinity. Wetherell and Edley argue that hegemonic masculinity cannot be understood as the characteristics that constitute any group of men. To Whitehead the concept fails to specify how and why some heterosexual men legitimate, reproduce, and generate their dominance and do so as a social minority since they are outnumbered by women and other men they dominate. A related criticism also derives from psychoanalysis which has criticized the lack of attention given to how men actually psychologically relate to hegemonic masculinity. For example, Timothy Laurie argues that the hegemonic masculinity framework lends itself to a modified essentialism, wherein the "achievement of masculine goals is frequently attributed to a way of thinking understood as inherent to the male psyche, and in relation to an innate disposition for homosocial bonding".
The pattern of gender relations
There is considerable evidence that hegemonic masculinity is not a self-reproducing form. Demetrakis Z. Demetriou suggests this is because a kind of simplification has occurred. He identifies two forms of hegemony, internal and external. External hegemony relates to the institutionalization of men's dominance over women and internal hegemony refers to the position of one group of men over all other men. Scholars commonly do not clarify or acknowledge the relationship between the two. This suggests that subordinated and marginalized masculinities do not impact the construction of hegemonic masculinity as much as critics suggest it should.
Reformulation
In one of the most widely cited works analysing the concept, R. W. Connell and James Messerschmidt sought to reformulate their theory of hegemonic masculinity in light of certain criticisms. They readjusted their framework to address four main areas: the nature of gender hierarchy, the geography of masculine configurations, the process of social embodiment, and the dynamics of masculinities.
Gender hierarchy
Gender hierarchy seeks to explain not only why men hold a superior position to women but how each group influences the other. Studies indicate that forms of masculinity outside the mainstream are strong, even under conditions of marginalization due to race, economic status, physical ability, or sexual orientation. The dominant system of gender norms maintains its authority more through the incorporation of these non-traditional masculinities into its overall narrative. An example would include the mainstream adoption of black hip hop culture, which was created in response to urban structural inequalities. Another example is that of "protest masculinity", in which local working-class settings, sometimes involving ethnically marginalized men, embody the claim to power typical of regional hegemonic masculinities in Western countries, but lack the economic resources and institutional authority that underpin the regional and global patterns.
This new emphasis on gender hierarchy seeks to take a more relational approach to women as well. Women are central in many of the processes constructing masculinities, as mothers, schoolmates, girlfriends, sexual partners, wives, and workers in the gender division of labour. Gender hierarchies are affected by new configurations of women's identity and practice so more attention has been given to the historical interplay of femininities and masculinities.
Geography of masculinities
Masculinity studies consistently highlight shifts in dominant forms of masculinity shaped by local contexts, yet with the rise of globalization, the impact of global spaces on the construction of masculinity has gained focus. Charlotte Hooper explains how masculinities play out in international relations, while Connell introduced the concept of "transnational business masculinity", characterizing the lifestyle of corporate leaders. Because of this, Connell and Messerschmidt have proposed that hegemonic masculinities be analysed at three levels: local, regional, and global. The links between these levels are critical to gender politics, since interventions at any level giving women more power and representation can influence from the top down or from the bottom up. Additionally, adopting a framework that distinguishes between the three levels allows one to recognize the importance of place without making generalizations about independent cultures or discourses.
Social embodiment
Social embodiment calls for a more rigid definition of what a hegemonically masculine man is and how the idea is actually carried out in real life. The pattern of embodiment involved in hegemony was recognized in the earliest formulations of the concept but called for more theoretical attention. This notion continues to manifest itself in many different health and sexual practices, such as eating meat or having multiple sexual partners. Marios Kostas writes in Gender and Education that "hegemonic masculinity is also related to professional success in the labour market, which describes the social definition of tasks as either 'men's work' or 'women's work' and the definition of some kinds of work as more masculine than others". The emergence of transgender issues has made it particularly clear that embodiment should be given more focus in reconceptualizations. The circuits of social embodiment may be very direct and simple, or may be long and complex, passing through institutions, economic relations, cultural symbols, and so forth without ceasing to involve material bodies.
Dynamics of masculinities
New theory has recognized the layering and potential internal contradictions within all practices that construct masculinities. This marks a departure from a unitary masculinity and a shift in focus toward compromised formations between contradictory desires and emotions. While these practices may adhere to conventional Western ideas of hegemonic masculinity, this may not necessarily translate into a satisfying life experience. As gender relations evolve and women's movements grow stronger, the dynamics of masculinities may see a complete abolition of power differentials and a more equitable relationship between men and women and between men and other men. This positive hegemony remains a key strategy for contemporary efforts at reforming gender relations. Groes-Green has argued that Connell's theory of masculinities risks excluding the possibility of more gender-equitable or "philogynous" forms of masculinity, such as those he has identified in Mozambique. He urges social researchers to develop theories and concepts that can improve understanding of how more positive, alternative, and less dominant masculinities may develop, even if these are always embedded in local gender power relations.
Lifespan development
Early childhood
Children learn at an early age, mostly through educational and peer interactions, what it means to be a boy and what it means to be a girl, and are quick to demonstrate that they understand these roles. This notion of "doing" gender involves differentiating between boys and girls from the day they are born and perpetuating the discourses of gender difference. The idea of a dualism of the genders is misconstrued by dominant ideology and feeds into social norms of masculinity. Children learn and show development of gender identity as an ongoing process, based on social situations. Gendered toys can play a large role in demonstrating the preferred actions and behaviour of young boys in early childhood. The male role is also reinforced by observing older boys and the reactions of authority figures, including parents. The promotion of idealized masculine roles emphasizing toughness, dominance, self-reliance, and the restriction of emotion can begin as early as infancy. Such norms are transmitted by parents, other male relatives, and members of the community. Media representations of masculinity on websites such as YouTube often promote similar stereotypical gender roles.
Although gender socialization is well underway before children reach preschool, stereotypical differences between boys and girls are typically reinforced, rather than diminished, by their early childhood educational experiences. Teachers play a large role in reinforcing gender stereotypes by limiting children's choices at this young age, thus not allowing boys to explore their feelings or their understandings about gender freely. This is done through the endorsement of hegemonic masculinity, embodying physical domination, strength, competitiveness, sport, courage, and aggression. These gendered performances are based on society's construction of femininity and masculinity in relation to heterosexuality. Heteronormativity is the standard for children; despite their obvious sexual innocence, heterosexuality is ingrained in children in their acting of gender from an early age.
Another factor that contributes to gendered behaviour and roles is the greater visibility, importance, and presence of males than females in literature, and in the language that teachers use for communication and instruction. Male-generic pronouns are a special problem in early childhood settings. A recommended method for dismantling these gender barriers is specific training for teachers and more education on the topic for parents. However, one author concludes that young children know, feel, and think gender despite the wishes of adults to make gender disappear from their lives.
Middle childhood
A lifespan perspective must be considered when discussing gender normalization, but cultural hegemony must also be considered at this stage of the lifespan, as a child develops more of an understanding of their culture and begins to display original ideas of cultural and social norms. According to the constructivist emphasis, the man/woman dichotomy is not the "natural" state, but rather a potent metaphor in Western cultures. Building social relationships and developing individuality are essential benchmarks for middle childhood, which ranges from eight years old to puberty. A young boy is trying to navigate falling within the social structure that has been laid out for him, which includes interacting with both sexes and with a dominant notion of maleness. Gender environmentalism, which emphasizes the role of societal practices in generating and maintaining gender differentiation, still plays a part at this stage of life, but is possibly more influenced by immediate and close interactions with boys close to their age. The boys organize themselves in a hierarchical structure in which the high-status boys decide what is acceptable and valued – that which is hegemonically masculine – and what is not. A boy's rank in the hierarchy is chiefly determined by his athletic ability.
One site where gender is performed and socialized is in sport. Violent sports such as football are fundamental in naturalizing the equation of maleness with violence. Displays of strength and violence, through sports like football, help to naturalize elements of competition and hierarchy as inherently male behaviour. There is considerable evidence that males are hormonally predisposed to higher levels of aggression on average than females, due to the effects of testosterone. However, the violent and competitive nature of sports like football can only be an exclusively masculine domain if girls and women are excluded from participating altogether. The only means through which women are permitted to participate in football is as the passive spectator or cheerleader, although women do sometimes participate in other violent contact sports, such as boxing.
When a child engages in behaviour or uses something more often associated with the opposite sex, this is referred to as crossing gender borders. When gender borders are crossed in adolescence, children police themselves. Conflicts and disagreements between boys are resolved by name-calling and teasing, physical aggression, and exclusion from the group. This brings confusion to the natural order of building their individualism, and stifles their creativity and free play, which are critical to developing lifelong skills in problem solving and decision making. A further complication for youth is the notion of "multiple masculinities", in which variables such as social class, race, ethnicity, generation, and family status determine how these young men must perform their masculinity. Boys who fail to fit the social norm are forced to enter adolescence having experienced alienation from their social group, marginalized from the social order they strive to achieve at this stage of life.
Adolescence
The last stage of childhood, adolescence, marks the onset of puberty and the eventual beginning of adulthood. Hegemonic masculinity then positions some boys, and all girls, as subordinate or inferior to others. Bullying is another avenue by which young men assert their dominance over less "masculine" boys. In this bullying schema, adolescent boys are also motivated to reach the top of the scale by engaging in more risk-taking activities. Bullying is often motivated by social constructs and generalized ideas of what a young man should be. Gendered sexuality in adolescence refers to the role gender takes in the adolescent's life and how it is informed by and impacts others' perceptions of their sexuality. This can lead to gay bashing and other forms of discrimination if young men seem not to perform the appropriate masculinity.
The male gender role is not biologically fixed; rather, it results from the internalization of culturally defined gender norms and ideologies. This is an important point at this stage, as developmental psychologists recognize changes in adolescents' relations with parents and peers, and even in their self-identity. It is a time of confusion and disturbance; adolescents feel the pressure of asserted hegemonic masculinity as well as social factors that lead them to become more self-conscious. Studies show that although men need not engage in every masculine behaviour to be considered masculine, enacting more masculine behaviours increases the likelihood that they will be considered more masculine, otherwise known as building "masculine capital". It has been suggested that boys' emotional stoicism leaves them unable to recognize their own and others' emotions, putting them at risk of developing psychological distress and impoverished interpersonal skills. Boys in their adolescence are pressured to act masculine in order to fit hegemonic ideals, yet the possibility of long-term psychological damage as a result looms overhead.
Media representations
The 1995 documentary The Celluloid Closet discusses depictions of homosexuals throughout film history. Jackson Katz's film Tough Guise: Violence, Media & the Crisis in Masculinity likewise examines how media imagery shapes ideas of masculinity.
Applications
Education
The concept of hegemonic masculinity can be helpful in education as well. It can help to uncover the social systems that form among male students, and to explain why male teachers educate the way they do. The concept has also been helpful in structuring violence-prevention programs for youth and emotional-education programs for boys.
Criminology
Hegemonic masculinity's impact on criminology is clear: males are more likely than females to engage in a wide range of crimes, from minor offenses to grave ones, and have a larger presence in white-collar crime. The concept has facilitated the examination of the relationships between masculinities and various crimes. It has been utilized in specific studies of crimes committed by males, including rape in Switzerland, murder in Australia, football hooliganism and white-collar crime in England, and assaultive violence in the United States. Regarding costs and consequences, research in criminology has shown how particular patterns of aggression are linked with hegemonic masculinity, not because criminals already held dominant positions, but because they were pursuing them.
Media and sports
Hegemonic masculinity has also been employed in studying media representations of men. Because the concept of hegemony helps to make sense of both the diversity and the selectiveness of images in mass media, media researchers have begun mapping the relations between different masculinities. Portrayals of masculinity in men's lifestyle magazines have been studied, and researchers found elements of hegemonic masculinity woven throughout them. Commercial sports are a focus of media representations of masculinity, and the developing field of sports sociology has found significant use for the concept of hegemonic masculinity. It has been deployed in understanding the popularity of body-contact confrontational sports, which function as an endlessly renewed symbol of masculinity, and in understanding the violence and homophobia frequently found in sporting environments. Rugby union, rugby league, American football, and ice hockey, with their prevalence of injuries and concussions, provide a particularly salient example of the impacts of hegemonic masculinity. With the dominant mode of hegemonic masculinity valuing emotionlessness, invulnerability, toughness, and risk-taking, concussions have become normalized; players have accepted them as simply "part of the game". If a man does not play through a concussion, he risks being blamed for the team's loss or labelled as effeminate. It is noble to play in pain, nobler to play in agony, and noblest if one never exhibits any sign of pain at all. Coaches buy into this unwritten code of masculinity as well, invoking euphemisms such as "he needs to learn the difference between injury and pain" while also questioning a player's masculinity to get him back on the field quickly. Players, coaches, and trainers subscribe to the hegemonic model, creating a culture of dismissiveness around concussions, which can lead to brain diseases such as CTE.
Health
Hegemonic masculinity has been increasingly used to understand men's health practices and determinants. Practices such as playing through physical injuries and risk-taking sexual behaviours, such as unprotected sex with multiple partners, have been studied. The concept has also been used to understand men's exposure to risk and their difficulty in responding to disability and injury. Hegemonic masculine ideals, especially stoicism, emotionlessness, and invulnerability, alongside shame and fear of judgement, can help explain an aversion to seeking mental health care. Men are less likely than women to seek professional services from psychiatrists or counsellors or informal help from friends, and are more likely to report that they would never seek psychotherapy for depression. In fact, men who adhere to the masculine norm of stoicism have difficulty identifying grief, sadness, or a depressed mood, some of the conventional diagnostic symptoms of depression. Recognition of weakness would be a recognition of femininity, and as such, men distract themselves, avoid the problem, or get angry – one of the few emotions permissible under hegemonic masculine norms – when depressive symptoms surface. On a global scale, the impact of hegemonic masculinity has been considered in determining unequal social and political relations which are deleterious to the health of both men and women.
Organizations
Hegemonic masculinity has proved significant in organizational studies as the gendered character of workplaces and bureaucracies has been increasingly recognized. A particular focus has been placed on the military, where specific patterns of hegemonic masculinity have been entrenched but have become increasingly problematic. These studies found that negative hegemonically masculine characteristics related to violence and aggression were required to thrive in the military at all ranks and in all branches. Additionally, homophobic ideals were commonplace and further subordinated men in these positions. Studies have also traced the institutionalization of hegemonic masculinities in specific organizations and their role in organizational decision making. This can be related to the glass ceiling and the gender pay gap that women experience.
"Tough guy" attributes like unwillingness to admit ignorance, admit mistakes, or ask for help can undermine safety culture and productivity, by interfering with exchange of useful information. A Harvard Business School study found an intervention to improve the culture at Shell Oil during the construction of the Ursa tension leg platform contributed to increased productivity and an 84% lower accident rate.
War, international relations, and militarism
Hegemonic masculinity has impacted both conflict and international relations, serving as a foundation for militarism. Charlotte Hooper discusses how U.S. foreign policy following the Vietnam War was seen as a way of bolstering America's manhood. It was believed that the Vietcong, often categorized "as a bunch of women and children", had humiliated and emasculated America. In order to regain its manhood – both domestically and internationally – America needed to develop a hyper-masculinized and aggressive breed of foreign policy. Hooper also discusses the idea that since the international sphere is largely composed of men, it may greatly shape both "the production and maintenance of masculinities." War, then, exists in a unique feedback loop whereby it is not only perpetuated by hegemonic masculinity but also legitimates masculinity. Post-conflict Cyprus presents one such example: as Stratis Andreas Efthymiou discusses, Greek Cypriot hegemonic masculinity is constructed into the post-conflict culture. Embodying bravery and determination, subordinating women, and displaying a taste for guns were key aspects of achieving Greek Cypriot masculinity. In addition, proudly serving conscription in a difficult unit and showing attachment to nationalist ideals were the pinnacle attributes of the post-war male. In turn, hegemonic masculinity's shaping of, and being shaped by, nationalism and militarism places Greek Cypriot men who appeal to peace politics, cross the divide, or interact with the 'other' at risk of failing the hegemonic model of masculinity. In other words, it is challenging for Greek Cypriot men to find a way to relate respectfully to their sense of self if they attempt to come closer to Turkish Cypriots, because of the nationalist, militarist way that masculinity is shaped in Cyprus. Masculinity is therefore reproduced and adapted through a co-constitutive relationship with militarism and nationalism.
Hooper discusses how military combat has been fundamental to the very composition of masculinity "symbolically, institutionally", and culturally through body shape. Moreover, Hooper discusses how women are seen as life givers, while men are believed to be life takers. As a result, men can only exist as men if they are willing to charge into war, thereby expressing their "enduring 'natural aggression'." Furthermore, this perception also explains the traditional "exclusion of women from combat", while furthering the myth "that military service is the fullest expression of masculinity." This has troubling implications for the continuation of war, and for the enshrinement of masculine norms. Hooper also ideates about the instillation of militarized masculinity in boys, discussing how military service is a "rite of passage" for young men. As such, "war and the military represent one of the major sites where hegemonic masculinities" are formed and enshrined.
Militarized hegemonic masculinity has also impacted perceptions of citizenship as well as the LGBT community. Conscription is fairly common throughout the world and has also been utilized in America during key conflicts. The majority of men expect conscription to be the price of adult citizenship, but religious objectors and homosexuals have been largely excluded from it. These restrictions have led to the perceived subordinate status of these groups and their subsequent exclusion from full citizenship, in the same fashion that women have been excluded. This reflects the notion that men unable or unwilling to fight for their country are more effeminate, as they are breaking with hegemonic norms. The perceptions that homosexuals are unfit for service and that women have a responsibility at home are reflective of the heteronormative nature of the military. The institutional composition of the military itself reinforces this hegemony through the armed branch's subordination to a "dominating and organizationally competent" branch. Essentially, there is an armed wing, which is masculinized through conflict, and there is a dominating branch, which is masculinized through power. The hierarchical nature of the military is used to enforce, replicate, and enhance hegemonic masculinity.
Male rape is especially prevalent in male-dominant environments, such as the military and prison. In a 2014 GQ article titled "'Son, Men Don't Get Raped'", nearly 30 sexual assault survivors came forward to discuss rape in the military. According to the Pentagon, 38 military men are sexually assaulted every day. The majority of the victims' stories involve a highly ranked perpetrator, such as a senior aide, recruiter, or sergeant, positions that young soldiers look up to. Some victims describe being weaker than the attacker and physically unable to stop the rape, while others felt too mentally dominated to speak up. Either way, the men were met with defeat and emasculation. In the article, the psychologist James Asbrand, who specializes in post-traumatic stress disorder, explains: "The rape of a male soldier has a particular symbolism. 'In a hyper masculine culture, what's the worst thing you can do to another man?' Force him into what the culture perceives as a feminine role. Completely dominate and rape him." Asbrand refers to the military as a hypermasculine environment, which is consistent with its media portrayal. Joining the army is considered a noble act for men, which military movies, advertisements, and video games reinforce. It is therefore no surprise that recruits would embody stereotypical masculine personas and contribute to an environment of competition.
Toxic masculinity
Connell argues that an important feature of hegemonic masculinity is the use of "toxic" practices such as physical violence, which may serve to reinforce men's dominance over women in Western societies. Other scholars have used the term toxic masculinity to refer to stereotypically masculine gender roles that restrict the kinds of emotions that can be expressed (see affect display) by boys and men, including social expectations that men seek to be dominant (the "alpha male").
According to Terry Kupers, toxic masculinity serves to outline aspects of hegemonic masculinity that are socially destructive, "such as misogyny, homophobia, greed, and violent domination". These traits are contrasted with more positive aspects of hegemonic masculinity such as "pride in [one's] ability to win at sports, to maintain solidarity with a friend, to succeed at work, or to provide for [one's] family".
Hybrid masculinity
Hybrid masculinity is the use of aspects of marginalized gender expressions in the gender performance or identity of privileged men. Scholarship on hybrid masculinities suggests that they simultaneously distance themselves from traditional norms of masculinity while reproducing and reinforcing hegemonic masculinity. Hybrid masculinities allow men to negotiate masculinity in ways that mirror more inclusive behavior and attitudes, but leave larger institutional systems sustaining gender inequality undisturbed. Scholars note that "although 'softer' and more 'sensitive' styles of masculinity are developing among some privileged groups of men, this does not necessarily contribute to the emancipation of women; in fact, quite the contrary may be true." The term was introduced to describe the contemporary trend of men taking on politics and perspectives historically understood as "emasculating."
Hybrid masculinity has been studied in relation to the manosphere, particularly beta males and incels as well as in research on gay male culture, teen behavioral issues, and contraception.
History of feminism | The history of feminism comprises the narratives (chronological or thematic) of the movements and ideologies which have aimed at equal rights for women. While feminists around the world have differed in causes, goals, and intentions depending on time, culture, and country, most Western feminist historians assert that all movements that work to obtain women's rights should be considered feminist movements, even when they did not (or do not) apply the term to themselves. Some other historians limit the term "feminist" to the modern feminist movement and its progeny, and use the label "protofeminist" to describe earlier movements.
Modern Western feminist history is conventionally split into time periods, or "waves", each with slightly different aims based on prior progress:
First-wave feminism of the 19th and early 20th centuries focused on overturning legal inequalities, particularly addressing issues of women's suffrage
Second-wave feminism (1960s–1980s) broadened debate to include cultural inequalities, gender norms, and the role of women in society
Third-wave feminism (1990s–2000s) refers to diverse strains of feminist activity, seen by third-wavers themselves both as a continuation of the second wave and as a response to its perceived failures
Fourth-wave feminism (early 2010s–present) expands on the third wave's focus on intersectionality, emphasizing body positivity, trans-inclusivity, and an open discourse about rape culture in the social media era
Although the "waves" construct has been commonly used to describe the history of feminism, the concept has also been criticized by non-White feminists for ignoring and erasing the history between the "waves", by choosing to focus solely on a few famous figures, on the perspective of a white bourgeois woman and on popular events, and for being racist and colonialist.
Early feminism
People and activists who discussed or advanced women's equality prior to the existence of the feminist movement are sometimes labeled protofeminist. Some scholars criticize this term because they believe it diminishes the importance of earlier contributions, or because feminism does not have a single origin or a linear history, as implied by terms such as protofeminist or postfeminist.
Around 24 centuries ago, Plato, according to Elaine Hoffman Baruch, "[argued] for the total political and sexual equality of women, advocating that they be members of his highest class, ... those who rule and fight".
Andal, a female Tamil saint, lived around the 7th or 8th century. She is well known for writing the Tiruppavai. Andal has inspired women's groups such as Goda Mandali. Her divine marriage to Vishnu is viewed by some as a feminist act, as it allowed her to avoid the regular duties of being a wife and gain autonomy. In the 12th century, the Waldensian Christian sect espoused some feminist ideas.
Renaissance feminism
Italian-French writer Christine de Pizan (1364 – c. 1430), the author of The Book of the City of Ladies and Epître au Dieu d'Amour (Epistle to the God of Love), is cited by Simone de Beauvoir as the first woman to denounce misogyny and write about the relation of the sexes. Christine de Pizan also wrote one of the early fictional accounts of gender transition in Le Livre de la mutation de fortune. Other early feminist writers include the 16th-century writers Heinrich Cornelius Agrippa, Modesta di Pozzo di Forzi, and Jane Anger, and the 17th-century writers Hannah Woolley in England, Juana Inés de la Cruz in Mexico, Marie Le Jars de Gournay, Anne Bradstreet, Anna Maria van Schurman, and François Poullain de la Barre. The emergence of women as true intellectuals also effected change in Italian humanism. Cassandra Fedele was the first woman to join a humanist group and achieved much despite the greater constraints on women.
Renaissance defenses of women appear in a variety of literary genres and across Europe, with a central claim of equality. Feminists appealed to principles that progressively led to discourses on economic and property injustice. Feminizing society was a way for women of this period to use literature to create interdependent and non-hierarchical systems that provided opportunities for both women and men.
Men also played an important role in the history of defending the view that women were capable and able to compete equally with men, including Antonio Cornazzano, Vespasiano de Bisticci, and Giovanni Sabadino degli Arienti. Castiglione continued this trend, defending women's moral character and arguing that tradition was at fault for the appearance of women's inferiority. The critique, however, is that these defenses offered no advocacy for social change, leaving women out of the political sphere and abandoning them to traditional domestic roles. Still, many of these writers suggested that if women were ever included in the political sphere, it would be a natural consequence of their education. In addition, some of them held that men were at fault for the obscurity of intellectual women, having left them out of historical records.
One of the most important 17th-century feminist writers in the English language was Margaret Cavendish, Duchess of Newcastle-upon-Tyne. Her knowledge was recognized by some, such as proto-feminist Bathsua Makin, who wrote that "The present Dutchess of New-Castle, by her own Genius, rather than any timely Instruction, over-tops many grave Grown-Men," and considered her a prime example of what women could become through education.
17th century
Margaret Fell's most famous work is "Women's Speaking Justified", a scripture-based argument for women's ministry, and one of the major texts on women's religious leadership in the 17th century. In this short pamphlet, Fell based her argument for equality of the sexes on one of the basic premises of Quakerism, namely spiritual equality. Her belief was that God created all human beings, therefore both men and women were capable of not only possessing the Inner Light but also the ability to be a prophet. Fell has been described as a "feminist pioneer".
In 1622, Marie de Gournay published The Equality of Men and Women, in which she argued for equality of the sexes.
18th century: the Age of Enlightenment
The Age of Enlightenment was characterized by secular intellectual reasoning and a flowering of philosophical writing. Many Enlightenment philosophers defended the rights of women, including Jeremy Bentham (1781), Marquis de Condorcet (1790), and Mary Wollstonecraft (1792). Other important writers of the time that expressed feminist views included Abigail Adams, Catharine Macaulay, and Hedvig Charlotta Nordenflycht.
Jeremy Bentham
The English utilitarian and classical liberal philosopher Jeremy Bentham said that it was the placing of women in a legally inferior position that made him choose the career of a reformist at the age of eleven, though American critic John Neal claimed to have convinced him to take up women's rights issues during their association between 1825 and 1827. Bentham spoke for complete equality between the sexes, including the rights to vote and to participate in government, and opposed the asymmetrical sexual moral standards applied to men and women.
In his Introduction to the Principles of Morals and Legislation (1781), Bentham strongly condemned the common practice in many countries of denying women's rights on the grounds of allegedly inferior minds, and gave many examples of able female regents.
Marquis de Condorcet
Nicolas de Condorcet was a mathematician, classical liberal politician, leading French Revolutionary, republican, and Voltairean anti-clericalist. He was also a fierce defender of human rights, including the equality of women and the abolition of slavery, unusual for the 1780s. He advocated for women's suffrage in the new government in 1790 with De l'admission des femmes au droit de cité (For the Admission to the Rights of Citizenship For Women) and an article for Journal de la Société de 1789.
Olympe de Gouges and a Declaration
Following de Condorcet's repeated, yet failed, appeals to the National Assembly in 1789 and 1790, Olympe de Gouges (in association with the Society of the Friends of Truth) authored and published the Declaration of the Rights of Woman and of the Female Citizen in 1791. This was another plea for the French Revolutionary government to recognize the natural and political rights of women. De Gouges wrote the Declaration in the prose of the Declaration of the Rights of Man and of the Citizen, almost mimicking it, to expose the failure of men to extend égalité to more than half of the French population. Even though the Declaration did not immediately accomplish its goals, it set a precedent for the manner in which feminists could satirize their governments for their failures in equality, as seen in documents such as A Vindication of the Rights of Woman and A Declaration of Sentiments.
Wollstonecraft and A Vindication
Perhaps the most cited feminist writer of the time was Mary Wollstonecraft. In A Vindication of the Rights of Woman (1792), she identified the education and upbringing of women as creating their limited expectations, based on a self-image dictated by the typically male perspective. Despite her perceived inconsistencies (Miriam Brody referred to the "Two Wollstonecrafts"), reflective of problems that had no easy answers, the book remains a foundation stone of feminist thought.
Wollstonecraft believed that both genders contributed to inequality. She took women's considerable power over men for granted, and determined that both would require education to ensure the necessary changes in social attitudes. Given her humble origins and scant education, her personal achievements speak to her own determination. For many commentators, Wollstonecraft represents the first codification of equality feminism, or a refusal of the feminine role in society.
19th century
The feminine ideal
19th-century feminists reacted to cultural inequities including the pernicious, widespread acceptance of the Victorian image of women's "proper" role and "sphere". The Victorian ideal created a dichotomy of "separate spheres" for men and women that was very clearly defined in theory, though not always in reality. In this ideology, men were to occupy the public sphere (the space of wage labor and politics) and women the private sphere (the space of home and children.) This "feminine ideal", also called "The Cult of Domesticity", was typified in Victorian conduct books such as Mrs Beeton's Book of Household Management and Sarah Stickney Ellis's books. The Angel in the House (1854) and El ángel del hogar (The angel in the house) (1857), bestsellers by Coventry Patmore and María del Pilar Sinués de Marco, came to symbolize the Victorian feminine ideal. Queen Victoria herself disparaged the concept of feminism, which she described in private letters as the "mad, wicked folly of 'Woman's Rights'".
Feminism in fiction
As Jane Austen addressed women's restricted lives in the early part of the century, Charlotte Brontë, Anne Brontë, Elizabeth Gaskell, and George Eliot depicted women's misery and frustration. In her autobiographical novel Ruth Hall (1854), American journalist Fanny Fern describes her own struggle to support her children as a newspaper columnist after her husband's untimely death. Louisa May Alcott penned a strongly feminist novel, A Long Fatal Love Chase (1866), about a young woman's attempts to flee her bigamist husband and become independent.
Male authors also recognized injustices against women. The novels of George Meredith, George Gissing, and Thomas Hardy, and the plays of Henrik Ibsen outlined the contemporary plight of women. Meredith's Diana of the Crossways (1885) is an account of Caroline Norton's life. One critic later called Ibsen's plays "feministic propaganda".
John Neal
John Neal is remembered as America's first women's rights lecturer. Starting in 1823 and continuing at least as late as 1869, he used magazine articles, short stories, novels, public speaking, political organizing, and personal relationships to advance feminist issues in the United States and Great Britain, reaching the height of his influence in this field circa 1843. He declared intellectual equality between men and women, fought coverture, and demanded suffrage, equal pay, and better education and working conditions for women. Neal's early feminist essays in the 1820s fill an intellectual gap between Mary Wollstonecraft, Catharine Macaulay, and Judith Sargent Murray and Seneca Falls Convention-era successors like Sarah Moore Grimké, Elizabeth Cady Stanton, and Margaret Fuller. As a male writer insulated from many common forms of attack against female feminist thinkers, Neal's advocacy was crucial in bringing the field back into the mainstream in England and the US.
In his essays for Blackwood's Magazine (1824–1825), Neal called for women's suffrage and "maintain[ed] that women are not inferior to men, but only unlike men, in their intellectual properties" and "would have women treated like men, of common sense." In The Yankee magazine (1828–1829), he demanded economic opportunities for women, saying "We hope to see the day... when our women of all ages... will be able to maintain herself, without being obliged to marry for bread." At his best-attended lecture, titled "Rights of Women," Neal spoke before a crowd of around 3,000 people in 1843 at New York City's largest auditorium at the time, the Broadway Tabernacle. Neal became even more prominently involved with the women's suffrage movement in his old age following the Civil War, both in Maine and nationally, supporting Elizabeth Cady Stanton's and Susan B. Anthony's National Woman Suffrage Association and writing for its journal, The Revolution. Stanton and Anthony recognized his work after his death in their History of Woman Suffrage.
Marion Reid and Caroline Norton
At the outset of the 19th century, the dissenting feminist voices had little to no social influence. There was little sign of change in the political or social order, nor any evidence of a recognizable women's movement. Collective concerns began to coalesce by the end of the century, paralleling the emergence of a stiffer social model and code of conduct that Marion Reid described as confining and repressive for women. While the increased emphasis on feminine virtue partly stirred the call for a woman's movement, the tensions that this role caused for women plagued many early-19th-century feminists with doubt and worry, and fueled opposing views.
In Scotland, Reid published her influential A Plea for Woman in 1843, which proposed a transatlantic Western agenda for women's rights, including voting rights for women.
Caroline Norton advocated for changes in British law. She discovered a lack of legal rights for women upon entering an abusive marriage. The publicity generated from her appeal to Queen Victoria and related activism helped change English laws to recognize and accommodate married women and child custody issues.
Florence Nightingale and Frances Power Cobbe
While many women including Norton were wary of organized movements, their actions and words often motivated and inspired such movements. Among these was Florence Nightingale, whose conviction that women had all the potential of men but none of the opportunities impelled her storied nursing career. At the time, her feminine virtues were emphasized over her ingenuity, an example of the bias against acknowledging female accomplishment in the mid-1800s.
Due to varying ideologies, feminists were not always supportive of each other's efforts. Harriet Martineau and others dismissed Wollstonecraft's contributions as dangerous, and deplored Norton's candidness, but seized on the abolitionist campaign that Martineau had witnessed in the United States as one that should logically be applied to women. Her Society in America was pivotal: it caught the imagination of women who urged her to take up their cause.
Anna Wheeler was influenced by Saint Simonian socialists while working in France. She advocated for suffrage and attracted the attention of Benjamin Disraeli, the Conservative leader, as a dangerous radical on a par with Jeremy Bentham. She would later inspire early socialist and feminist advocate William Thompson, who wrote the first work published in English to advocate full equality of rights for women, the 1825 "Appeal of One Half of the Human Race".
Feminists of previous centuries had charged women's exclusion from education as the central cause of their domestic relegation and denial of social advancement, and women's education in the 19th century was little better. Frances Power Cobbe, among others, called for education reform, an issue that gained attention alongside marital and property rights and domestic violence.
Female journalists like Martineau and Cobbe in Britain, and Margaret Fuller in America, were achieving journalistic employment, which placed them in a position to influence other women. Cobbe would refer to "Woman's Rights" not just in the abstract, but as an identifiable cause.
Ladies of Langham Place
Barbara Leigh Smith and her friends met regularly during the 1850s in London's Langham Place to discuss the united women's voice necessary for achieving reform. These "Ladies of Langham Place" included Bessie Rayner Parkes and Anna Jameson. They focused on education, employment, and marital law. One of their causes became the Married Women's Property Committee of 1855. They collected thousands of signatures for legislative reform petitions, some of which were successful. Smith had also attended the 1848 Seneca Falls Convention in America.
Smith and Parkes, together and apart, wrote many articles on education and employment opportunities. In the same year as Norton, Smith summarized the legal framework for injustice in her 1854 A Brief Summary of the Laws of England concerning Women. She was able to reach large numbers of women via her role in the English Women's Journal. The response to this journal led to their creation of the Society for Promoting the Employment of Women (SPEW). Smith's Married Women's Property committee collected 26,000 signatures to change the law for all women, including those unmarried.
Harriet Taylor published her Enfranchisement in 1851, and wrote about the inequities of family law. In 1853, she married John Stuart Mill, and provided him with much of the subject material for The Subjection of Women.
Emily Davies also encountered the Langham group, and with Elizabeth Garrett created SPEW branches outside London.
Educational reform
The interrelated barriers to education and employment formed the backbone of 19th-century feminist reform efforts, as described, for instance, by Harriet Martineau in her 1859 Edinburgh Review article "Female Industry". These barriers did not shift as the economy changed. Martineau, however, remained a moderate for practical reasons and, unlike Cobbe, did not support the emerging call for the vote.
The education reform efforts of women like Davies and the Langham group slowly made inroads. Queen's College (1848) and Bedford College (1849) in London began to offer some education to women. By 1862, Davies had established a committee to persuade the universities to allow women to sit for the recently established Local Examinations, and achieved partial success in 1865. She published The Higher Education of Women a year later. Davies and Leigh Smith founded the first higher educational institution for women, enrolling five students; it became Girton College, Cambridge in 1869. Newnham College, Cambridge followed in 1871, and Lady Margaret Hall at Oxford in 1879, the year after Bedford began to award degrees. Despite these measurable advances, few could take advantage of them, and life for female students remained difficult.
In the 1883 Ilbert Bill controversy, a British India bill that proposed Indian judicial jurisdiction to try British criminals, Bengali women in support of the bill responded by claiming that they were more educated than the English women opposed to the bill, and noted that more Indian women had degrees than British women at the time.
As part of the continuing dialogue between British and American feminists, Elizabeth Blackwell, one of the first American women to graduate in medicine (1849), lectured in Britain with Langham support. She eventually took her degree in France. Garrett's very successful 1870 campaign for London School Board office is another example of how a small band of very determined women were beginning to reach positions of influence at the local government level.
Women's campaigns
Campaigns gave women opportunities to test their new political skills and to conjoin disparate social reform groups. Their successes include the campaign for the Married Women's Property Act (passed in 1882) and the campaign to repeal the Contagious Diseases Acts of 1864, 1866, and 1869, which united women's groups and utilitarian liberals like John Stuart Mill.
Generally, women were outraged by the inherent inequity and misogyny of the legislation. For the first time, women in large numbers took up the rights of prostitutes. Prominent critics included Blackwell, Nightingale, Martineau, and Elizabeth Wolstenholme. Elizabeth Garrett, unlike her sister Millicent, did not support the campaign, though she later admitted that it had done well.
Josephine Butler, already experienced in prostitution issues, a charismatic leader, and a seasoned campaigner, emerged as the natural leader of what became the Ladies National Association for the Repeal of the Contagious Diseases Acts in 1869. Her work demonstrated the potential power of an organized lobby group. The association successfully argued that the Acts not only demeaned prostitutes, but all women and men by promoting a blatant sexual double standard. Butler's activities resulted in the radicalization of many moderate women. The Acts were repealed in 1886.
On a smaller scale, Annie Besant campaigned for the rights of matchgirls (female factory workers) and against the appalling conditions under which they worked in London. Her work of publicizing the difficult conditions of the workers through interviews in bi-weekly periodicals like The Link became a method for raising public concern over social issues.
19th to 21st centuries
Feminists did not recognize separate waves of feminism until the second wave was so named by journalist Martha Weinman Lear in a 1968 New York Times Magazine article, "The Second Feminist Wave", according to Alice Echols. Jennifer Baumgardner reports criticism by professor Roxanne Dunbar-Ortiz of the division into waves, notes the difficulty of categorizing some feminists into specific waves, and argues that the main critics of a wave are likely to be members of the prior wave who remain vital, and that waves are coming faster. The "waves debate" has influenced how historians and other scholars have established the chronologies of women's political activism.
First wave
The 19th- and early 20th-century feminist activity in the English-speaking world that sought to win women's suffrage, female education rights, better working conditions, and abolition of gender double standards is known as first-wave feminism. The term "first-wave" was coined retrospectively when the term second-wave feminism was used to describe a newer feminist movement that fought social and cultural inequalities beyond basic political inequalities.
In the United States, feminist movement leaders campaigned for the national abolition of slavery and for temperance before championing women's rights. American first-wave feminism involved a wide range of women, some belonging to conservative Christian groups (such as Frances Willard and the Woman's Christian Temperance Union), others resembling the diversity and radicalism of much of second-wave feminism (such as Stanton, Anthony, Matilda Joslyn Gage, and the National Woman Suffrage Association, of which Stanton was president). First-wave feminism in the United States is considered to have ended with the passage of the Nineteenth Amendment to the United States Constitution (1920), which granted women the right to vote.
Activism for the equality of women was not limited to the United States. In mid-nineteenth century Persia, Táhirih, an early member of the Bábí Faith, was active as a poet and religious reformer. At a time when it was considered taboo for women to speak openly with men in Persia, and for non-clerics to speak about religion, she challenged the intellectuals of the age in public discourse on social and theological matters. In 1848 she appeared before an assemblage of men without a veil and gave a speech on the rights of women, signaling a radical break with the prevailing moral order and the start of a new religious and social dispensation. After this episode she was put under house arrest by the Persian Government until her execution by strangling at the age of 35 in August 1852. At her execution she is reported as proclaiming "You can kill me as soon as you like, but you cannot stop the emancipation of women." The story of her life rapidly spread to European circles and she would inspire later generations of Iranian feminists. Members of the Bahá'í Faith recognize her as the first women's suffrage martyr and an example of fearlessness and courage in the advancement of the equality of women and men.
Louise Dittmar campaigned for women's rights, in Germany, in the 1840s. Although slightly later in time, Fusae Ichikawa was in the first wave of women's activists in her own country of Japan, campaigning for women's suffrage. Mary Lee was active in the suffrage movement in South Australia, the first Australian colony to grant women the vote in 1894. In New Zealand, Kate Sheppard and Mary Ann Müller worked to achieve the vote for women by 1893.
In the United States, the antislavery campaign of the 1830s served as both a cause ideologically compatible with feminism and a blueprint for later feminist political organizing. Attempts to exclude women only strengthened their convictions. Sarah and Angelina Grimké moved rapidly from the emancipation of slaves to the emancipation of women. The most influential feminist writer of the time was the colourful journalist Margaret Fuller, whose Woman in the Nineteenth Century was published in 1845. Her dispatches from Europe for the New York Tribune helped synchronize the women's rights movement.
Elizabeth Cady Stanton and Lucretia Mott met in 1840 while en route to London, where the male leadership of the first World's Anti-Slavery Convention shunned them as women. In 1848, Mott and Stanton held a woman's rights convention in Seneca Falls, New York, where a declaration of independence for women was drafted. Lucy Stone helped to organize the first National Women's Rights Convention in 1850, a much larger event at which Sojourner Truth, Abby Kelley Foster, and others spoke, and which sparked Susan B. Anthony to take up the cause of women's rights. In 1851, Sojourner Truth contributed further to the movement when she delivered her powerful "Ain't I a Woman" speech at the Women's Convention in Akron, Ohio, promoting women's rights by demonstrating women's ability to accomplish tasks that had traditionally been associated with men. Barbara Leigh Smith met with Mott in 1858, strengthening the link between the transatlantic feminist movements.
Stanton and Matilda Joslyn Gage saw the Church as a major obstacle to women's rights, and welcomed the emerging literature on matriarchy. Both Gage and Stanton produced works on this topic, and collaborated on The Woman's Bible. Stanton wrote "The Matriarchate or Mother-Age" and Gage wrote Woman, Church and State, neatly inverting Johann Jakob Bachofen's thesis and adding a unique epistemological perspective, the critique of objectivity and the perception of the subjective.
Stanton once observed regarding assumptions of female inferiority, "The worst feature of these assumptions is that women themselves believe them". However this attempt to replace androcentric (male-centered) theological tradition with a gynocentric (female-centered) view made little headway in a women's movement dominated by religious elements; thus she and Gage were largely ignored by subsequent generations.
By 1913, Feminism (originally capitalized) was a household term in the United States. Major issues in the 1910s and 1920s included suffrage, women's partisan activism, economics and employment, sexualities and families, war and peace, and a Constitutional amendment for equality. Both equality and difference were seen as routes to women's empowerment. Organizations at the time included the National Woman's Party, suffrage advocacy groups such as the National American Woman Suffrage Association and the National League of Women Voters, career associations such as the American Association of University Women, the National Federation of Business and Professional Women's Clubs, and the National Women's Trade Union League, war and peace groups such as the Women's International League for Peace and Freedom and the International Council of Women, alcohol-focused groups like the Woman's Christian Temperance Union and the Women's Organization for National Prohibition Reform, and race- and gender-centered organizations like the National Association of Colored Women. Leaders and theoreticians included Jane Addams, Ida B. Wells-Barnett, Alice Paul, Carrie Chapman Catt, Margaret Sanger, and Charlotte Perkins Gilman.
Suffrage
The women's right to vote, with its legislative representation, represented a paradigm shift where women would no longer be treated as second-class citizens without a voice. The women's suffrage campaign is the most deeply embedded campaign of the past 250 years.
At first, suffrage was treated as a lower priority. The French Revolution changed this, with the assertions of Condorcet and de Gouges and the women who led the 1789 march on Versailles. In 1793, the Society of Revolutionary Republican Women was founded, and it originally included suffrage on its agenda before being suppressed at the end of the year. Though a gesture, this showed that the issue was now part of the European political agenda.
German women were involved in the Vormärz, a prelude to the 1848 revolution. In Italy, Clara Maffei, Cristina Trivulzio Belgiojoso, and Ester Martini Currica were politically active in the events leading up to 1848. In Britain, interest in suffrage emerged from the writings of Wheeler and Thompson in the 1820s, and from Reid, Taylor, and Anne Knight in the 1840s. While New Zealand was the first self-governing country where women won the right to vote (1893), they did not win the right to stand in elections until later. The colony of South Australia was the first in the world to officially grant women full suffrage, including the right to stand for parliament (1894).
The suffragettes
The Langham Place ladies set up a suffrage committee at an 1866 meeting at Elizabeth Garrett's home, renamed the London Society for Women's Suffrage in 1867. Soon similar committees had spread across the country, raising petitions, and working closely with John Stuart Mill. When denied outlets by establishment periodicals, feminists started their own, such as Lydia Becker's Women's Suffrage Journal in 1870.
Other publications included Richard Pankhurst's Englishwoman's Review (1866). Tactical disputes were the biggest problem, and the groups' memberships fluctuated. Women debated whether men (like Mill) should be involved; in the event, Mill withdrew as the movement became more aggressive with each disappointment. The political pressure ensured debate, but year after year the movement was defeated in Parliament.
Despite this, the women accrued political experience, which translated into slow progress at the local government level. But after years of frustration, many women became increasingly radicalized. Some refused to pay taxes, and the Pankhurst family emerged as the dominant movement influence, having also founded the Women's Franchise League in 1889, which sought local election suffrage for women.
International suffrage
The Isle of Man, a UK dependency, was the first free-standing jurisdiction to grant women the vote (1881), followed by the right to vote (but not to stand) in New Zealand in 1893, where Kate Sheppard had pioneered reform. Some Australian states had also granted women the vote: Victoria for a brief period (1863–65), South Australia (1894), and Western Australia (1899). Australian women received the vote at the federal level in 1902, Finnish women in 1906, and Norwegian women initially in 1907 (completed in 1913).
Early 20th century
In the early part of the 20th century, also known as the Edwardian era, women's dress moved away from Victorian rigidity and complacency. Women, especially those married to wealthy men, often wore what would today be considered practical clothing.
Books, articles, speeches, pictures, and papers from the period show a diverse range of themes other than political reform and suffrage discussed publicly. In the Netherlands, for instance, the main feminist issues were educational rights, rights to medical care, improved working conditions, peace, and dismantled gender double standards. Feminists identified as such with little fanfare.
Emmeline Pankhurst formed the Women's Social and Political Union (WSPU) in 1903. As she put it, they viewed votes for women no longer as "a right, but as a desperate necessity". At the state level, Australia and the United States had already granted suffrage to some women. American feminists such as Susan B. Anthony (1902) visited Britain. While WSPU was the best-known suffrage group, it was only one of many, such as the Women's Freedom League and the National Union of Women's Suffrage Societies (NUWSS) led by Millicent Garrett Fawcett. WSPU was largely a family affair, although externally financed. Christabel Pankhurst became the dominant figure and gathered friends such as Annie Kenney, Flora Drummond, Teresa Billington, Ethel Smyth, Grace Roe, and Norah Dacre Fox (later known as Norah Elam) around her. Veterans such as Elizabeth Garrett also joined.
In 1906, the Daily Mail first labeled these women "suffragettes" as a form of ridicule, but the term was embraced by the women to describe the more militant form of suffragism visible in public marches, distinctive green, purple, and white emblems, and the Artists' Suffrage League's dramatic graphics. The feminists learned to exploit photography and the media, and left a vivid visual record, including images such as the 1914 photograph of Emmeline Pankhurst.
The protests slowly became more violent, and included heckling, banging on doors, smashing shop windows, and arson. Emily Davison, a WSPU member, unexpectedly ran onto the track during the 1913 Epsom Derby and died under the King's horse. These tactics produced mixed results of sympathy and alienation. As many protesters were imprisoned and went on hunger strike, the British government was left with an embarrassing situation. Through these political actions, the suffragettes successfully created publicity around the institutional discrimination and sexism they faced.
Feminist science fiction
At the beginning of the 20th century, feminist science fiction emerged as a subgenre of science fiction that deals with women's roles in society. Female writers of the utopian literature movement at the time of first-wave feminism often addressed sexism; Charlotte Perkins Gilman's Herland (1915) is one example. Sultana's Dream (1905) by Bengali Muslim feminist Roquia Sakhawat Hussain depicts a gender-reversed purdah in a futuristic world.
During the 1920s, writers such as Clare Winger Harris and Gertrude Barrows Bennett published science fiction stories written from female perspectives that occasionally dealt with gender- and sexuality-based topics, while popular 1920s and 30s pulp science fiction exaggerated masculinity alongside sexist portrayals of women. By the 1960s, science fiction combined sensationalism with political and technological critiques of society. With the advent of feminism, women's roles were questioned in this "subversive, mind expanding genre".
Feminist science fiction poses questions about social issues such as how society constructs gender roles, how reproduction defines gender, and how the political power of men and women is unequal. Some of the most notable feminist science fiction works have illustrated these themes using utopias to explore societies where gender differences or gender power imbalances do not exist, and dystopias to explore worlds where gender inequalities are escalated, asserting a need for feminist work to continue.
During the First and Second World Wars
Women entered the labor market during the First World War in unprecedented numbers, often in new sectors, and discovered the value of their work. The war also left large numbers of women bereaved and with a net loss of household income. The scores of men killed and wounded shifted the demographic composition. The war also split feminist groups, with many women opposed to the war and others involved in the white feather campaign.
Feminist scholars like Françoise Thébaud and Nancy F. Cott note a conservative reaction to World War I in some countries, citing a reinforcement of traditional imagery and literature promoting motherhood. The appearance of these traits in wartime has been called the "nationalization of women."
In the years between the wars, feminists fought discrimination and establishment opposition to advances in women's roles in society and the workforce. In A Room of One's Own, Virginia Woolf described the extent of the backlash and her frustration. By now the word "feminism" was in use, but with a negative connotation from the mass media, which discouraged women from self-identifying as such. When Rebecca West, another prominent writer, was attacked as "a feminist", Woolf defended her. West has been remembered for her comment: "I myself have never been able to find out precisely what feminism is: I only know that people call me a feminist whenever I express sentiments that differentiate me from a doormat, or a prostitute."
In the 1920s, the nontraditional styles and attitudes of flappers were popular among American and British women.
Electoral reform
The United Kingdom's Representation of the People Act 1918 gave near-universal suffrage to men, and suffrage to women over 30. The Representation of the People Act 1928 extended equal suffrage to both men and women. It also shifted the socioeconomic makeup of the electorate towards the working class, favouring the Labour Party, which was more sympathetic to women's issues.
The granting of the vote did not automatically give women the right to stand for Parliament, and the Parliament (Qualification of Women) Act was rushed through just before the following election. Seventeen women were among the 1,700 candidates nominated. Christabel Pankhurst narrowly failed to win a seat, while Constance Markievicz (Sinn Féin) became the first woman elected in Ireland in 1918, but, as an Irish nationalist, she refused to take her seat.
In 1919 and 1920, Lady Astor and Margaret Wintringham won seats for the Conservatives and Liberals respectively, succeeding to their husbands' seats. Labour swept to power in 1924. Astor's proposal to form a women's party in 1929 was unsuccessful. Women gained considerable electoral experience over the next few years as a series of minority governments ensured almost annual elections. Close affiliation with Labour also proved to be a problem for the National Union of Societies for Equal Citizenship (NUSEC), which had little support in the Conservative party. However, their persistence with Prime Minister Stanley Baldwin was rewarded with the passage of the Representation of the People (Equal Franchise) Act 1928.
European women received the vote in Finland (then an autonomous grand duchy of the Russian Empire) in 1906, in Denmark and Iceland in 1915 (fully in 1919), in the Russian Republic in 1917, in Austria, Germany and Canada in 1918, in many countries including the Netherlands in 1919, in Czechoslovakia (today the Czech Republic and Slovakia) in 1920, and in Turkey and South Africa in 1930. French women did not receive the vote until 1945. Liechtenstein was one of the last countries, in 1984.
After French women were given the right to vote in 1945, two women's organizations were founded in the French colony of Martinique. Le Rassemblement féminin and l'Union des femmes de la Martinique both had the goal of encouraging women to vote in the upcoming elections. While l'Union des femmes de la Martinique, founded by Jeanne Lero, was influenced by communist beliefs, Le Rassemblement féminin, founded by Paulette Nardal, claimed not to support any particular political party and only encouraged women to take political action in order to create social change.
Social reform
The political change did not immediately alter social circumstances. With the economic recession, women were the most vulnerable sector of the workforce. Some women who had held jobs before the war were obliged to forfeit them to returning soldiers, and others were made redundant. With the franchise still limited, the UK National Union of Women's Suffrage Societies (NUWSS) pivoted into a new organization, the National Union of Societies for Equal Citizenship (NUSEC), which still advocated equality in the franchise but extended its scope to equality in social and economic areas. Legislative reform was sought for discriminatory laws (e.g., family law and prostitution), and debate arose over the difference between equality and equity, the accommodations that would allow women to overcome barriers to fulfillment (known in later years as the "equality vs. difference conundrum").

Eleanor Rathbone, who became a British Member of Parliament in 1929, succeeded Millicent Garrett Fawcett as president of NUSEC in 1919. She expressed the critical need for consideration of difference in gender relationships as "what women need to fulfill the potentialities of their own natures". The 1924 Labour government's social reforms created a formal split, as a splinter group of strict egalitarians formed the Open Door Council in May 1926. This eventually became an international movement, and continued until 1965. Other important social legislation of this period included the Sex Disqualification (Removal) Act 1919 (which opened professions to women) and the Matrimonial Causes Act 1923. In 1932, NUSEC separated advocacy from education, continuing the former as the National Council for Equal Citizenship and the latter as the Townswomen's Guild. The council continued until the end of the Second World War.
Reproductive rights
British laws prevented feminists from discussing and addressing reproductive rights. Annie Besant was tried under the Obscene Publications Act 1857 in 1877 for publishing Charles Knowlton's Fruits of Philosophy, a work on family planning. Knowlton had previously been convicted in the United States. She and her colleague Charles Bradlaugh were convicted but acquitted on appeal. The subsequent publicity resulted in a decline in the UK's birth rate. Besant later wrote The Law of Population.
In America, Margaret Sanger was prosecuted for her book Family Limitation under the Comstock Act in 1914, and fled to Britain until it was safe to return. Her work was also prosecuted in Britain. There she met Marie Stopes, who was never prosecuted but was regularly denounced for her promotion of birth control. In 1917, Sanger started the Birth Control Review. In 1926, Sanger gave a lecture on birth control to the women's auxiliary of the Ku Klux Klan in Silver Lake, New Jersey, which she referred to as a "weird experience". The establishment of the Abortion Law Reform Association in 1936 was even more controversial. The British penalty for abortion had been reduced from execution to life imprisonment by the Offences against the Person Act 1861, although some exceptions were allowed in the Infant Life (Preservation) Act 1929. Following Aleck Bourne's prosecution in 1938, the 1939 Birkett Committee made recommendations for reform that were set aside at the Second World War's outbreak, along with many other women's issues.
In the Netherlands, Aletta H. Jacobs, the first Dutch female doctor, and Wilhelmina Drucker led discussion and action for reproductive rights. Jacobs imported diaphragms from Germany and distributed them to poor women for free.
1940s
In most front line countries, women volunteered or were conscripted for various duties in support of the national war effort. In Britain, women were drafted and assigned to industrial jobs or to non-combat military service. The British services enrolled 460,000 women. The largest service, the Auxiliary Territorial Service, had a maximum of 213,000 women enrolled, many of whom served in anti-aircraft gun combat roles. In Germany, women volunteered in the League of German Girls and assisted the Luftwaffe as anti-aircraft gunners, or served as guerrilla fighters in Werwolf units behind Allied lines. In the Soviet Union, about 820,000 women served in the military as medics, radio operators, truck drivers, snipers, combat pilots, and junior commanding officers.
Many American women retained their domestic chores and often added a paid job, especially one related to a war industry. Much more so than in the previous war, large numbers of women were hired for unskilled or semi-skilled jobs in munitions, and barriers against married women taking jobs were eased. The popular Rosie the Riveter icon became a symbol for a generation of American working women. In addition, some 300,000 women served in U.S. military uniform with organizations such as Women's Army Corps and WAVES. With many young men gone, sports organizers tried to set up professional women's teams, such as the All-American Girls Professional Baseball League, which closed after the war. After the war, most munitions plants closed, and civilian plants replaced their temporary female workers with returning veterans, who had priority.
Second wave
"Second-wave feminism" identifies a period of feminist activity from the early 1960s through the late 1980s that saw cultural and political inequalities as inextricably linked. The movement encouraged women to understand aspects of their personal lives as deeply politicized and reflective of a sexist power structure. As first-wave feminists focused on absolute rights such as suffrage, second-wave feminists focused on other cultural equality issues, such as ending discrimination.
Betty Friedan, The Feminine Mystique, and Women's Liberation
A landmark feminist work, Simone de Beauvoir's The Second Sex, appeared in 1949. The critical text pertained to every facet of what would later be defined as gender discourse. Beauvoir detailed the treatment of women throughout history, analysed the modes of oppression enforced by patriarchy, and critiqued them. In 1963, Betty Friedan's exposé The Feminine Mystique gave voice to the discontent and disorientation women felt at being shunted into homemaking positions after their college graduations. In the book, Friedan explored the roots of the change in women's roles from essential workforce during World War II to homebound housewife and mother after the war, and assessed the forces that drove this change in the perception of women's roles.
The expression "Women's Liberation" has been used to refer to feminism throughout history. "Liberation" has been associated with feminist aspirations since 1895, and appears in the context of "women's liberation" in Simone de Beauvoir's 1949 The Second Sex, which appeared in English translation in 1953. The phrase "women's liberation" was first used in 1964, in print in 1966, though the French equivalent, libération des femmes, occurred as far back as 1911. "Women's liberation" was in use at the 1967 American Students for a Democratic Society (SDS) convention, which held a panel discussion on the topic. In 1968, the term "Women's Liberation Front" appeared in Ramparts magazine, and began to refer to the whole women's movement. In Chicago, women disillusioned with the New Left met separately in 1967, and published Voice of the Women's Liberation Movement in March 1968. When the Miss America pageant took place in Atlantic City in September 1968, the media referred to the resulting demonstrations as "Women's Liberation". The Chicago Women's Liberation Union was formed in 1969. Similar groups with similar titles appeared in many parts of the United States. Bra-burning, although fictional, became associated with the movement, and the media coined other terms such as "libber". "Women's Liberation" persisted over the other rival terms for the new feminism, captured the popular imagination, and has endured alongside the older term "Women's Movement".
This time was marked by increased female enrolment in higher education, the establishment of academic women's studies courses and departments, and feminist ideology in other related fields, such as politics, sociology, history, and literature. This academic shift in interests questioned the status quo, and its standards and authority.
The rise of the Women's Liberation movement revealed "multiple feminisms", or different underlying feminist lenses, due to the diverse origins from which groups had coalesced and intersected, and the complexity and contentiousness of the issues involved. bell hooks is noted as a prominent critic of the movement for its lack of voice given to the most oppressed women, its lack of emphasis on the inequalities of race and class, and its distance from the issues that divide women. Helen Reddy's "I Am Woman", John Lennon's "Woman is the Nigger of the World" and Yoko Ono's "Josei Joui Banzai" were 1970s feminist songs. A feminist movement protesting violence against women in rock music began in Los Angeles, where Women Against Violence Against Women was founded in 1976; the group campaigned against the Rolling Stones' 1976 album Black and Blue.
Feminist writing
The publication of Betty Friedan's The Feminine Mystique has been credited with beginning the so-called "second wave" of feminist activism, during which time feminist writers furthered conversations about women's political and sexual concerns. Examples include Gloria Steinem's Ms. magazine and Kate Millett's Sexual Politics. Millett's bleak survey of male writers, their attitudes and biases, sought to demonstrate that sex is politics, and that politics is the power imbalance in relationships. Shulamith Firestone's The Dialectic of Sex described a feminist revolution based in Marxism, referenced as the "sex war". Considering the debates over patriarchy, she claimed that male domination dated "back beyond recorded history to the animal kingdom itself".
Germaine Greer's The Female Eunuch, Sheila Rowbotham's Women's Liberation and the New Politics, and Juliet Mitchell's Woman's Estate represent the English perspective. Mitchell argued that the movement should be seen as an international phenomenon with different manifestations based on local culture. British women drew on left-wing politics and organized small local discussion groups, partly through the London Women's Liberation Workshop and its publications, Shrew and the LWLW Newsletter. Although there were marches, the focus was on consciousness-raising, or political activism intended to bring a cause or condition to a wider audience. Kathie Sarachild of Redstockings described its function thus: women would "find what they thought was an individual dilemma is social predicament".
US women's writing included works such as Susan Brownmiller's 1975 Against Our Will, which introduced an explicit agenda against male violence, specifically male sexual violence, in a treatise on rape. Her work has been referred to as "groundbreaking" due to its framing of rape as a social problem; it also had a fair number of critics, primarily from feminists of color, who took issue with Brownmiller's approach to race. Brownmiller's other major book, In Our Time (2000), is a history of women's liberation.
In academic circles, feminist theology was a growing interest. Phyllis Trible wrote extensively throughout the 1970s critiquing the biblical interpretation of the time, using a type of critique known as rhetorical criticism. Trible's analysis of biblical texts seeks to show that the Bible itself is not sexist, but that centuries of sexism in the societies that produced it have shaped this narrative.
Feminist views on pornography
Susan Griffin was one of the first feminists to write on pornography's implications in her 1981 Pornography and Silence. Beyond Brownmiller and Griffin's positions, Catharine MacKinnon and Andrea Dworkin influenced debates and activism around pornography and prostitution, particularly at the Supreme Court of Canada. MacKinnon, a lawyer, has stated, "To be about to be raped is to be gender female in the process of going about life as usual." She explained sexual harassment by saying that it "doesn't mean that they [harassers] all want to fuck us, they just want to hurt us, dominate us, and control us, and that is fucking us." According to Pauline B. Bart, some people see radical feminism as the only movement that truly expresses the pain of being a woman in an unequal society, as it portrays that reality with the experiences of the battered and violated, which they claim to be the norm. Critics, including some feminists, civil libertarians, and jurists, have found this position uncomfortable and alienating.
This approach has evolved to transform the research and perspective on rape from an individual experience into a social problem.
Third wave
Third-wave feminism began in the early 1990s in response to what young women perceived as failures of the second wave. It also responded to the backlash against second-wave initiatives and movements. Third-wave feminism seeks to challenge or avoid second-wave "essentialist" definitions of femininity, which over-emphasized the experiences of white, upper-middle-class women. A post-structuralist interpretation of gender and sexuality, or an understanding of gender as outside binary maleness and femaleness, is central to much of the third wave's ideology. Third-wave feminists often describe "micropolitics", and challenge second-wave paradigms about whether actions are unilaterally good for females.
These aspects of third-wave feminism arose in the mid-1980s. Feminist leaders rooted in the second wave like Gloria Anzaldúa, bell hooks, Chela Sandoval, Cherríe Moraga, Audre Lorde, Luisa Accati, Maxine Hong Kingston, and many other feminists of color, called for a new subjectivity in feminist voice. They wanted prominent feminist thought to consider race-related subjectivities. This focus on the intersection between race and gender remained prominent through the 1991 Hill–Thomas hearings, but began to shift with the Freedom Ride 1992, a drive to register voters in poor minority communities whose rhetoric intended to rally young feminists. For many, the rallying of the young is the common link within third-wave feminism.
Sexual politics
Lesbianism during the second wave was visible within and without feminism. Lesbians felt sidelined by both gay liberation and women's liberation, where they were referred to by Betty Friedan as a "lavender menace", provoking "The Woman-Identified Woman," a 1970 manifesto from the Radicalesbians that put lesbian women at the forefront of the liberation movement.
A few years later, Jill Johnston's 1973 Lesbian Nation: The Feminist Solution argued for lesbian separatism, a practice by which lesbian women would separate themselves from the rest of society.
In reproductive rights, feminists sought the right to contraceptives (i.e., birth control), some of which were widely restricted in the US until the late 1960s and through the 1970s; the birth control pill, for example, was primarily available only to married women until the mid 1970s, though other women did find ways to get the pill anyhow. Access to abortion was also widely demanded so as to increase women's economic independence and bodily autonomy, but was more difficult to secure due to existing, deep societal divisions over the issue.
Shulamith Firestone, active during the second wave of feminism, argued that reproductive technology is connected to reproductive rights. Firestone believed in enhancing reproductive technology in order to eliminate the obligation for women to reproduce and so end oppression and inequality against them. Enhancing technology to empower women and abolish the gender hierarchy are the main focuses of a newer developing philosophy in feminism, known as cyberfeminism. Cyberfeminism has strong ties to reproductive rights and technology.
Third-wave feminists also fought to hasten social acceptance of female sexual freedom. As societal norms allowed men to have multiple sexual partners without rebuke, feminists sought sexual equality for that freedom and encouraged "sexual liberation" for women, including sex for pleasure with multiple partners, if desired.
Global feminism
UN conferences on women
In 1946, the United Nations established a Commission on the Status of Women, which later joined the UN Economic and Social Council (ECOSOC). In 1948, the UN issued its Universal Declaration of Human Rights, which protects "the equal rights of men and women" and addresses both equality and equity. Starting with the 1975 World Conference of the International Women's Year in Mexico City, as part of the Decade for Women (1975–1985), the UN has held a series of world conferences on women's issues. These conferences have worldwide female representation and provide considerable opportunity to advance women's rights. They also illustrate deep cultural divisions and disagreement on universal principles, as evidenced by the successive Copenhagen (1980) and Nairobi (1985) conferences. Examples of such intrafeminist divisions have included disparities in economic development, attitudes towards forms of oppression, the definition of feminism, and stances on homosexuality, female circumcision, and population control. The Nairobi convention revealed a less monolithic feminism that "constitutes the political expression of the concerns and interests of women from different regions, classes, nationalities, and ethnic backgrounds. There is and must be a diversity of feminisms, responsive to the different needs and concerns of women, and defined by them for themselves. This diversity builds on a common opposition to gender oppression and hierarchy which, however, is only the first step in articulating and acting upon a political agenda."

The fourth conference was held in Beijing in 1995, where the Beijing Platform for Action was signed. This included a commitment to achieve "gender equality and the empowerment of women" through "gender mainstreaming", or letting women and men "experience equal conditions for realising their full human rights, and have the opportunity to contribute and benefit from national, political, economic, social and cultural development".
Bridging East and West
"The definitional moment of third-wave feminism has been theorized as proceeding from critiques of the white women's movement that were initiated by women of color, as well as from the many instances of coalition work undertaken by U.S. third world feminists" Third world feminists since the 1980s have been critics of class-bias, racism, and Eurocentrism among women and feminists, and theories of multiplicity and difference given by these feminists such as Sandoval, Minh-ha, and Mohanty have enabled young feminists to dismantle the idea of monolithic feminism. They have empowered them to recognize the differences and declare multiple identities of being female, despite constantly feeling caught between modernity and tradition. Even though Asian women found it difficult to relate completely with the western women's white problems, they related much with the women of color, and thus remolded it and built a bridge between both halves of feminism, the eastern and western, via interconnectedness among women around the world. They adapted and borrowed the 'western' ideas of feminism and women in the west incorporated the effects of women's movements in other parts of the world, while reinventing itself. Asian feminists acknowledged the need of recognizing multiple sources of domination in women's lives all across the world, refused to universalize women's experience as one, and instead recognized the differences among them due to different social locations. They claimed that although academic feminism introduced them to the idea of feminism, it failed to bring them closer to the sisters and mothers in their lives, and rather took them further away. Some have also argued that many goals of western feminism are not enough to assess women's progress in Asia because they are not necessarily relevant or exportable across the boundaries. Thus, they redefined it as one that drew upon their heritage, history, and experiences. As Grewal puts it, "These transnational feminist scholars enable us to rethink the way we construct and write the history of feminists in national and transnational contexts. Seeking to articulate transnational connections among women, they have suggested ways to move beyond constructed oppositions without ignoring the histories that informed these conflicts or the valid concerns about power relations that have represented or structured the conflicts up to this point."
Fourth wave
Fourth-wave feminism is a recent development within the feminist movement. Jennifer Baumgardner identifies fourth-wave feminism as starting in 2008 and continuing into the present day. Kira Cochrane, author of All the Rebel Women: The Rise of the Fourth Wave of Feminism, defines fourth-wave feminism as a movement that is connected through technology. Researcher Diana Diamond defines fourth-wave feminism as a movement that "combines politics, psychology, and spirituality in an overarching vision of change."
Arguments for a new wave
In 2005, Pythia Peay first argued for the existence of a fourth wave of feminism, combining justice with religious spirituality. According to Jennifer Baumgardner in 2011, a fourth wave, incorporating online resources such as social media, may have begun in 2008, inspired partly by Take Our Daughters to Work Days. This fourth wave in turn has inspired or been associated with: the Doula Project for children's services; post-abortion talk lines; pursuit of reproductive justice; plus-size fashion support; support for transgender rights; male feminism; sex work acceptance; and developing media including Feministing, Racialicious, blogs, and Twitter campaigns.
According to Kira Cochrane, a fourth wave had appeared in the U.K. and several other nations by 2012–13. It focused on: sexual inequality as manifested in "street harassment, sexual harassment, workplace discrimination[,] ... body-shaming"; media images, "online misogyny", "assault[s] on public transport"; on intersectionality; on social media technology for communication and online petitioning for organizing; and on the perception, inherited from prior waves, that individual experiences are shared and thus can have political solutions. Cochrane identified as fourth wave such organizations and websites as the Everyday Sexism Project and UK Feminista; and events such as Reclaim the Night, One Billion Rising, and "a Lose the Lads' mags protest", where "many of [the leaders] ... are in their teens and 20s".
In 2014, Betty Dodson, also acknowledged as one of the leaders of the early 1980s pro-sex feminist movement, expressed that she considers herself a fourth-wave feminist. Dodson said that the previous waves of feminism were banal and anti-sexual, which is why she chose the new stance of fourth-wave feminism. In 2014, Dodson worked with women to discover their sexual desires through masturbation. Dodson says her work has gained a fresh lease of life with a new audience of young, successful women who have never had an orgasm. This includes fourth-wave feminists, those rejecting the anti-pleasure stance they believe third-wave feminists stand for.
In 2014, Rhiannon Lucy Cosslett and Holly Baxter released their book, The Vagenda. The authors of the book both consider themselves fourth wave feminists. Like their website "The Vagenda", their book aims to flag and debunk the stereotypes of femininity promoted by the mainstream women's press. One reviewer of the book has expressed disappointment with The Vagenda, saying that instead of being the "call to arms for young women" that it purports to be, it reads like a joyless dissertation detailing "everything bad the media has ever done to women."
The Everyday Sexism Project
The Everyday Sexism Project began as a social media campaign on 16 April 2012 by Laura Bates, a British feminist writer. The aim of the site was to document everyday examples of sexism as reported by contributors around the world. Bates established the Everyday Sexism Project as an open forum where women could post their experiences of harassment. Bates explains the project's goal: "The project was never about solving sexism. It was about getting people to take the first step of just realising there is a problem that needs to be fixed."
The website was such a success that Bates decided to write and publish a book, Everyday Sexism, which further emphasizes the importance of having this type of online forum for women. The book provides unique insight into the vibrant movement of the upcoming fourth wave and the untold stories that women shared through the Everyday Sexism Project.
Click! The Ongoing Feminist Revolution
In November 2015, a group of historians working with Clio Visualizing History launched Click! The Ongoing Feminist Revolution. This digital history exhibit examines the history of American feminism from the era of World War Two to the present. The exhibit has three major sections: Politics and Social Movements; Body and Health; and Workplace and Family. There are also interactive timelines linking to a vast array of sources documenting the history of American feminism and providing information about current feminist activism.
Criticisms of the wave metaphor
In the 1960s, feminists described their movements as the "second wave" of feminism. In adopting the wave metaphor, these feminists emphasized that the contemporary women's rights movement had a venerable past: that it belonged to a long tradition of activism. During the second wave, feminists began to rewrite U.S. history by recognizing that the suffrage movement was part of a broader nineteenth-century movement around women's issues. Much of what has since been written about second-wave feminism correlates it with "hegemonic feminism", a feminism that views sexism as the primary oppression and that was mainly led by white individuals who "marginalized the activism and world views of women of color". Women of color and white antiracist women have clarified the rise of multiracial feminism by retelling the history of the second wave. One of the earlier feminist organizations of the second wave was a Chicana group named Hijas de Cuauhtémoc (1971), which took its name from an underground newspaper written by women during the 1910 Mexican Revolution. Multiple other feminist organizations created in the early 1970s by Black, Asian, Latina, and Native American women established a nationalist tradition, sending the message that there is a need for independent organizations led by people of color.
During the 1990s, the feminist activity that had been present in the United States from the 1960s through the 1980s was no longer visibly expressed. The wave metaphor had presented the 1960s movement as more than a passing historical situation, and had shown that the nineteenth-century movement was more significant and had more impact on history than what was commonly taught. As many pondered what state feminism was in, one idea emerged in the early 1990s: the "third wave". In the passage from the second wave to the third, however, the wave metaphor arguably outlived its usefulness. Individuals are now more aware of the nineteenth century's significance for the women's movement, and of how the 1960s movement emerged from that long struggle over women's issues.
National histories of feminism
France
The 18th century French Revolution's focus on égalité (equality) extended to the inequities faced by French women. The writer Olympe de Gouges amended the 1791 Declaration of the Rights of Man and of the Citizen into the Declaration of the Rights of Woman and of the Female Citizen, where she argued that women accountable to the law must also bear equal responsibility under the law. She also addressed marriage as a social contract between equals and attacked women's reliance on beauty and charm as a form of slavery. Two years later, she was executed by guillotine.
Conservative, post-Revolution France of the 19th century was inhospitable to feminist ideas, as expressed in the counter-revolutionary writings on the role of women by Joseph de Maistre and Viscount Louis de Bonald. Advancement came mid-century with the 1848 revolution and the proclamation of the Second Republic, which introduced male suffrage amid hopes that similar benefits would apply to women. Although the utopian Charles Fourier is considered a feminist writer of this period, his influence was minimal at the time. With the fall of the conservative Louis-Philippe in 1848, feminist hopes were raised, as in 1790. Movement newspapers and organizations appeared, such as Eugénie Niboyet's La Voix des Femmes (The Women's Voice), the first feminist daily newspaper in France. Niboyet was a Protestant who had adopted Saint-Simonianism, and La Voix attracted other women from that movement, including the seamstress Jeanne Deroin and the primary schoolteacher Pauline Roland. Unsuccessful attempts were also made to recruit George Sand. Feminism was treated as a threat due to its ties with socialism, which had been under scrutiny since the Revolution. Deroin and Roland were both arrested, tried, and imprisoned in 1849. With the emergence of a new, more conservative government in 1852, feminism would have to wait until the Third French Republic.
While the word féminisme previously existed to describe the qualities of women, the word féministe was coined in 1872 by Alexandre Dumas fils to refer to liberated women.
The Groupe Français d'Etudes Féministes were women intellectuals at the beginning of the 20th century who translated part of Bachofen's canon into French and campaigned for family law reform. In 1905, they founded L'entente, which published articles on women's history and became the focus for the intellectual avant-garde. It advocated for women's entry into higher education and the male-dominated professions. Meanwhile, the socialist feminists of the Parti Socialiste Féminin adopted a Marxist version of matriarchy. Like the Groupe Français, they toiled for a new age of equality, not for a return to prehistoric models of matriarchy. French feminism of the late 20th century is mainly associated with psychoanalytic feminist theory, particularly the work of Luce Irigaray, Julia Kristeva, and Hélène Cixous.
Germany
Modern feminism in Germany began during the Wilhelmine period (1888–1918) with feminists pressuring a range of traditional institutions, from universities to government, to open their doors to women. The organized German women's movement is widely attributed to writer and feminist Louise Otto-Peters (1819–1895). This movement culminated in women's suffrage in 1919. Later waves of feminists continued to ask for legal and social equality in public and family life. Alice Schwarzer is the most prominent contemporary German feminist.
Iran
The Iranian women's rights movement first emerged some time after the Iranian Constitutional Revolution, in 1910, the year in which the first women's journal was published. The movement grew under feminist figures such as Bibi Khanoom Astarabadi, Touba Azmoudeh, Sediqeh Dowlatabadi, Mohtaram Eskandari, Roshank No'doost, Afaq Parsa, Fakhr ozma Arghoun, Shahnaz Azad, Noor-ol-Hoda Mangeneh, Zandokht Shirazi, and Maryam Amid (Mariam Mozayen-ol Sadat). The status of women deteriorated after the 1979 Iranian Revolution.
In 1992, Shahla Sherkat founded Zanan (Women) magazine, which covered Iranian women's concerns and tested political boundaries with edgy reportage on reform politics, domestic abuse, and sex. It is the most important Iranian women's journal published after the Iranian revolution. It systematically criticized the Islamic legal code and argued that gender equality is Islamic and that religious literature had been misread and misappropriated by misogynists. Mehrangiz Kar, Shahla Lahiji, and Shahla Sherkat, the editor of Zanan, led the debate on women's rights and demanded reforms. On August 27, 2006, the One Million Signatures campaign for Iranian women's rights was started. It aims to end legal discrimination against women in Iranian laws by collecting a million signatures. The campaign's supporters include many Iranian women's rights activists, international activists, and Nobel laureates. The most important post-revolution feminist figures are Mehrangiz Kar, Azam Taleghani, Shahla Sherkat, Parvin Ardalan, Noushin Ahmadi khorasani, and Shadi Sadr.
Egypt
In 1899, Qasim Amin, considered the "father" of Arab feminism, wrote The Liberation of Women, which argued for legal and social reforms for women. Hoda Shaarawi founded the Egyptian Feminist Union in 1923 and became its president and a symbol of the Arab women's rights movement. Arab feminism was closely connected with Arab nationalism. In 1956, President Gamal Abdel Nasser's government initiated "state feminism", which outlawed gender-based discrimination and granted women's suffrage. Despite these reforms, "state feminism" blocked feminist political activism and brought an end to the first-wave feminist movement in Egypt. During Anwar Sadat's presidency, his wife, Jehan Sadat, publicly advocated for the expansion of women's rights, though Egyptian policy and society were in retreat from women's equality amid the new Islamist movement and growing conservatism. However, writers such as Al Ghazali Harb argued that women's full equality is an important part of Islam. This position formed a new feminist movement, Islamic feminism, which is still active today.
India
A new generation of Indian feminists emerged following global feminism. Indian women have greater independence from increased access to higher education and control over their reproductive rights. Medha Patkar, Madhu Kishwar, and Brinda Karat are feminist social workers and politicians who advocate for women's rights in post-independence India. Writers such as Amrita Pritam, Sarojini Sahoo, and Kusum Ansal advocate for feminist ideas in Indian languages. Rajeshwari Sunder Rajan, Leela Kasturi, and Vidyut Bhagat are Indian feminist essayists and critics writing in English.
China
Feminism in China began in the late Qing period as Chinese society re-evaluated traditional Confucian values and practices such as foot binding and gender segregation, and began to reject traditional gender ideas as hindering progress towards modernization. During the 1898 Hundred Days' Reform, reformers called for women's education, gender equality, and the end of foot binding. Female reformers formed the first Chinese women's society, the Society for the Diffusion of Knowledge among Chinese Women (Nüxuehui). After the Qing dynasty's collapse, women's liberation became a goal of the May Fourth Movement and the New Culture Movement. Later, the Chinese Communist Revolution adopted women's liberation as one of its aims and promoted women's equality, especially regarding women's participation in the workforce. After the revolution and progress in integrating women into the workforce, the Chinese Communist Party claimed to have successfully achieved women's liberation, and women's inequality was no longer seen as a problem.
Second- and third-wave feminism in China was characterized by a re-examination of women's roles during the reform movements of the early 20th century and the ways in which feminism was adopted by those various movements in order to achieve their goals. Later and current feminists have questioned whether gender equality has actually been fully achieved, and discuss current gender problems, such as the large gender disparity in the population.
Japan
Japanese feminism as an organized political movement dates back to the early years of the 20th century, when Kato Shidzue pushed for birth control availability as part of a broad spectrum of progressive reforms. Shidzue went on to serve in the National Diet following the defeat of Japan in World War II and the promulgation of the Peace Constitution by US forces. Other figures, such as Hayashi Fumiko and Ariyoshi Sawako, illustrate the broadly socialist ideology of Japanese feminism, which seeks to accomplish broad goals rather than celebrate the individual achievements of powerful women.
Norway
Norwegian feminism's political origins are in the women's suffrage movement. Camilla Collett (1813–1895) is widely considered the first Norwegian feminist. Coming from a literary family, she wrote a novel and several articles on the difficulties facing women of her time, in particular forced marriages. Amalie Skram (1846–1905), a naturalist writer, also served as a voice for women.
The Norwegian Association for Women's Rights was founded in 1884 by Gina Krog and Hagbart Berner. The organization raised issues related to women's rights to education and economic self-determination, and, above all, universal suffrage. The Norwegian Parliament passed the women's right to vote into law on June 11, 1913. Norway was the second country in Europe (after Finland) to have full suffrage for women.
Poland
The development of feminism in Poland (re-established as an independent state in 1918) and the Polish territories has traditionally been divided into seven successive "waves".
Radical feminism emerged in 1920s Poland. Its chief representatives, Irena Krzywicka and Maria Morozowicz-Szczepkowska, advocated for women's personal, social, and legal independence from men. Krzywicka and Tadeusz Żeleński both promoted planned parenthood, sexual education, rights to divorce and abortion, and equality of sexes. Krzywicka published a series of articles in Wiadomości Literackie in which she protested against interference by the Roman Catholic Church in the intimate lives of Poles.
After the Second World War, the Polish Communist state (established in 1948) forcefully promoted women's emancipation at home and at work. However, during Communist rule (until 1989), feminism in general and second-wave feminism in particular were practically absent. Although feminist texts were produced in the 1950s and afterwards, they were usually controlled and generated by the Communist state. After the fall of Communism, the Polish government, dominated by Catholic political parties, introduced a de facto legal ban on abortions. Since then, some feminists have adopted argumentative strategies from the 1980s American pro-choice movement.
Histories of selected feminist issues
Feminist theory
The sexuality and gender historian Nancy Cott distinguishes between modern feminism and its antecedents, particularly the struggle for suffrage. She argues that in the two decades surrounding the Nineteenth Amendment's 1920 passage, the prior woman movement primarily concerned women as universal entities, whereas over this 20-year period, the movement prioritized social differentiation, attention to individuality, and diversity. New issues dealt more with gender as a social construct, gender identity, and relationships within and between genders. Politically, this represented a shift from an ideological alignment comfortable with the right, to one more radically associated with the left.
In the immediate postwar period, Simone de Beauvoir opposed the "woman in the home" norm. She introduced an existentialist dimension to feminism with the publication of Le Deuxième Sexe (The Second Sex) in 1949. While less an activist than a philosopher and novelist, she signed one of the Mouvement de Libération des Femmes manifestos.
The resurgence of feminist activism in the late 1960s was accompanied by an emerging literature of what might be considered female-associated issues, such as concerns for the earth, spirituality, and environmental activism. The atmosphere this created reignited the study of and debate on matricentricity as a rejection of determinism, such as with Adrienne Rich in Of Woman Born and Marilyn French in Beyond Power. For socialist feminists like Evelyn Reed, patriarchy held the properties of capitalism.
Ann Taylor Allen describes the differences between the collective male pessimism of male intellectuals such as Ferdinand Tönnies, Max Weber, and Georg Simmel at the beginning of the 20th century, compared to the optimism of their female counterparts, whose contributions have largely been ignored by social historians of the era.
See also
Coverture
History of brassieres
Lesbian erasure
List of suffragists and suffragettes
List of women's rights activists
List of women's organizations
Mujeres Libres
New Woman
Timeline of second-wave feminism
Timeline of women's suffrage
Timeline of women's rights (other than voting)
Victorian dress reform
Women's music
Women's suffrage organizations
Women's rights in 2014
1975 Icelandic women's strike
References
Bibliography
General
Books
Cott, Nancy F. The Bonds of Womanhood. New Haven: Yale University Press, 1977.
Cott, Nancy F. The Grounding of Modern Feminism. New Haven: Yale University Press, 1987.
Duby, George and Perrot, Michelle (eds). A History of Women in the West. 5 vols. Harvard, 1992–94
I. From Ancient Goddesses to Christian Saints
II. Silences of the Middle Ages
III. Renaissance and the Enlightenment Paradoxes
IV. Emerging Feminism from Revolution to World War
V. Toward a Cultural Identity in the Twentieth Century
Ezell, Margaret J. M. Writing Women's Literary History. Johns Hopkins University, 2006. 216 pp.
Foot, Paul. The Vote: How it was won and how it was lost. London: Viking, 2005
Freedman, Estelle. No Turning Back: The History of Feminism and the Future of Women. Ballantine Books, 2002
Fulford, Roger. Votes for Women. London: Faber and Faber, 1957
Jacob, Margaret C. The Enlightenment: A Brief History With Documents. Bedford/St. Martin's, 2001
Kramarae, Cheris and Paula Treichler. A Feminist Dictionary. University of Illinois, 1997.
Lerner, Gerda. The Creation of Feminist Consciousness From the Middle Ages to Eighteen-seventy. Oxford University Press, 1993
McQuiston, Liz. Suffragettes and She-devils: Women's liberation and beyond. London: Phaidon, 1997
Mill, John Stuart. The Subjection of Women. Okin, Susan M (ed.). New Haven, CT: Yale, 1985
Prince, Althea and Susan Silva-Wayne (eds). Feminisms and Womanisms: A Women's Studies Reader. Women's Press, 2004.
Radical Women. The Radical Women Manifesto: Socialist Feminist Theory, Program and Organizational Structure. Red Letter Press, 2001.
Rossi, Alice S. The Feminist Papers: from Adams to Beauvoir. Boston: Northeastern University, 1973.
Rowbotham, Sheilah. A Century of Women. Viking, London 1997
Schneir, Miriam. Feminism: The Essential Historical Writings. Vintage, 1994.
Scott, Joan Wallach. Feminism and History (Oxford Readings in Feminism). Oxford University Press, 1996
Smith, Bonnie G. Global Feminisms: A Survey of Issues and Controversies (Rewriting Histories). Routledge, 2000
Spender, Dale (ed.). Feminist Theorists: Three centuries of key women thinkers. Pantheon, 1983
Articles
Allen, Ann Taylor. "Feminism, Social Science, and the Meanings of Modernity: The Debate on the Origin of the Family in Europe and the United States, 1860–1914". The American Historical Review 104(4), October 1999
Cott, Nancy F. "Feminist Politics in the 1920s: The National Woman's Party". Journal of American History 71 (June 1984): 43–68.
Cott, Nancy F. "What's In a Name? The Limits of 'Social Feminism'; or, Expanding the Vocabulary of Women's History". Journal of American History 76 (December 1989): 809–829.
Offen, Karen. "Defining Feminism: A Comparative Historical Approach". Signs 14(1), Autumn 1988: 119–57
International
Parpart, Jane L., Conelly, M. Patricia, Barriteau, V. Eudine (eds). Theoretical Perspectives on Gender and Development. Ottawa: IDRC, 2000.
Europe
Anderson, Bonnie S. and Judith P. Zinsser. A History of Their Own: Women in Europe from Prehistory to the Present, Oxford University Press, 1999 (revised edition),
Offen, Karen M. European Feminisms, 1700–1950: A Political History. Stanford: Stanford University Press. 2000
Perincioli, Cristina. Berlin wird feministisch. Das Beste, was von der 68er-Bewegung blieb. Querverlag, Berlin 2015. Free access to complete English translation: http://feministberlin1968ff.de/
Great Britain
Caine, Barbara. Victorian Feminists. Oxford, 1992
Chandrasekhar, S. "A Dirty, Filthy Book": The Writing of Charles Knowlton and Annie Besant on Reproductive Physiology and Birth Control and an Account of the Bradlaugh-Besant Trial. University of California Berkeley, 1981
Craik, Elizabeth M. (ed.). "Women and Marriage in Victorian England", in Marriage and Property. Aberdeen University, 1984
Forster, Margaret. Significant Sisters: The grassroots of active feminism 1839-1939. Penguin, 1986
Fraser, Antonia. The Weaker Vessel. NY: Vintage, 1985.
Hallam, David J.A. Taking on the Men: the first women parliamentary candidates 1918. Studley, 2018
Manvell, Roger. The Trial of Annie Besant and Charles Bradlaugh. London: Elek, 1976
Pankhurst, Emmeline. My Own Story. London: Virago, 1979
Pankhurst, Sylvia. The Suffragette Movement. London: Virago, 1977
Phillips, Melanie. The Ascent of Woman – A History of the Suffragette Movement and the ideas behind it. London: Time Warner Book Group, 2003
Pugh, Martin. Women and the Women's Movement in Britain, 1914–1999. Basingstoke [etc.]: St. Martin's Press, 2000
Walters, Margaret. Feminism: A very short introduction. Oxford, 2005
Italy
Lucia Chiavola Birnbaum, Liberazione della Donna. Feminism in Italy, Wesleyan University Press, 1986
India
Maitrayee Chaudhuri (ed.), Feminism in India, London [etc.]: Zed Books, 2005
Iran
Edward G. Browne, The Persian Revolution of 1905–1909. Mage Publishers (July 1995)
Farideh Farhi, "Religious Intellectuals, the 'Woman Question,' and the Struggle for the Creation of a Democratic Public Sphere in Iran", International Journal of Politics, Culture and Society, Vol. 15, No. 2, Winter 2001
Ziba Mir-Hosseini, "Religious Modernists and the 'Woman Question': Challenges and Complicities", Twenty Years of Islamic Revolution: Political and Social Transition in Iran since 1979, Syracuse University Press, 2002, pp. 74–95
Shirin Ebadi, Iran Awakening: A Memoir of Revolution and Hope, Random House (May 2, 2006)
Japan
Vera Mackie, Feminism in Modern Japan: Citizenship, Embodiment and Sexuality, Paperback edition, Cambridge University Press, 2003
Latin America
Nancy Sternbach, "Feminism in Latin America: from Bogotá to San Bernardo", in: Signs, Winter 1992, pp. 393–434
United States
Brownmiller, Susan. In Our Time: Memoir of a Revolution, Dial Books, 1999
Cott, Nancy and Elizabeth Pleck (eds), A Heritage of Her Own; Toward a New Social History of American Women, New York: Simon and Schuster, 1979
Echols, Alice. Daring to Be Bad: Radical Feminism in America, 1967-1975, University of Minnesota Press, 1990
Flexner, Eleanor. Century of Struggle: The Woman's Rights Movement in the United States, Paperback Edition, Belknap Press 1996
Fox-Genovese, Elizabeth., "Feminism Is Not the Story of My Life": How Today's Feminist Elite Has Lost Touch With the Real Concerns of Women, Doubleday, 1996
Keetley, Dawn (ed.) Public Women, Public Words: A Documentary History of American Feminism. 3 vols.:
Vol. 1: Beginnings to 1900, Madison, Wisconsin: Madison House, 1997
Vol. 2: 1900 to 1960, Lanham, Md. [etc.]: Rowman & Littlefield, 2002
Vol. 3: 1960 to the present, Lanham, Md. [etc.]: Rowman & Littlefield, 2002
Messer-Davidow, Ellen: Disciplining feminism: from social activism to academic discourse, Duke University Press, 2002
O'Neill, William L. Everyone Was Brave: A history of feminism in America. Chicago 1971
Roth, Benita. Separate Roads to Feminism: Black, Chicana, and White Feminist Movements in America's Second Wave, Cambridge University Press, 2004
Stansell, Christine. The Feminist Promise: 1792 to the Present (2010). , 528 pp.
Sexuality
Foucault, Michel. The History of Sexuality. Random House, New York, 1978
Soble, Alan (ed.) The Philosophy of Sex: Contemporary readings. Lanham, MD: Rowman & Littlefield, 2002.
Further reading
Browne, Alice (1987) The Eighteenth-century Feminist Mind. Brighton: Harvester
External links
Independent Voices: an open access collection of alternative press newspapers
Timeline of feminist history in the USA
UN Department of Economic and Social Affairs, Division for the Advancement of Women
Women in Politics: A Very Short History at Click! The Ongoing Feminist Revolution
The Women's Library, online resource of the extensive collections at the LSE
Oral tradition

Oral tradition, or oral lore, is a form of human communication in which knowledge, art, ideas and culture are received, preserved, and transmitted orally from one generation to another. The transmission is through speech or song and may include folktales, ballads, chants, prose or poetry. The information is mentally recorded by oral repositories, sometimes termed "walking libraries", who are usually also performers. Oral tradition is a medium of communication for a society to transmit oral history, oral literature, oral law and other knowledge across generations without a writing system, or in parallel to a writing system. It is the most widespread medium of human communication. Oral traditions often remain in use in the modern era for cultural preservation.
Religions such as Buddhism, Hinduism, Catholicism, and Jainism have used oral tradition, in parallel to writing, to transmit their canonical scriptures, rituals, hymns and mythologies. Berber and sub-Saharan African societies have broadly been labelled oral civilisations, contrasted with literate civilisations, due to their reverence for the oral word and widespread use of oral tradition.
Oral tradition is memories, knowledge, and expression held in common by a group over many generations: it is the long preservation of immediate or contemporaneous testimony. It may be defined as the recall and transmission of specific, preserved textual and cultural knowledge through vocal utterance. Oral tradition is usually popular, and can be exoteric or esoteric. It speaks to people according to their understanding, unveiling itself in accordance with their aptitudes.
As an academic discipline, oral tradition refers both to the objects and the methods of study. It is distinct from oral history, which is the recording of personal testimony of those who experienced historical eras or events. Oral tradition is also distinct from the study of orality, defined as thought and its verbal expression in societies where the technologies of literacy (writing and print) are unfamiliar. Folklore is one type of oral tradition, but not the only one.
History
According to John Foley, oral tradition is an ancient human tradition found in "all corners of the world". Modern archaeology has unveiled evidence of human efforts to preserve and transmit arts and knowledge that depended completely or partially on an oral tradition, across various cultures:
Before the introduction of text, oral tradition remained the only means of communication through which societies and their institutions could be established. Despite the widespread growth of literacy over the last century, oral tradition remains the dominant means of communication in the world.
Africa
All indigenous African societies use oral tradition to learn their origin and history, civic and religious duties, crafts and skills, as well as traditional myths and legends. It is also a key socio-cultural component in the practice of their traditional spiritualities, as well as of mainstream Abrahamic religions. The prioritisation of the spoken word is evidenced by African societies having chosen to record history orally even when some had developed or had access to a writing script. Jan Vansina differentiates between oral and literate civilisations, stating: "The attitude of members of an oral society toward speech is similar to the reverence members of a literate society attach to the written word. If it is hallowed by authority or antiquity, the word will be treasured." For centuries in Europe, all data felt to be important were written down, with the most important texts prioritised, such as the Bible, and only trivia, such as songs, legends, anecdotes, and proverbs, remained unrecorded. In Africa, all the principal political, legal, social, and religious texts were transmitted orally. When the Bamum in Cameroon invented a script, the first texts to be written down were the royal chronicle and the code of customary law. Most African courts had archivists who learnt by heart the royal genealogy and history of the state, which thus served as its unwritten constitution.
The performance of a tradition is accentuated and rendered alive by various gestures, social conventions and the unique occasion on which it is performed. Furthermore, the climate in which traditions are told influences their content. In Burundi, traditions were short because most of them were told at informal gatherings and everyone had to have his say during the evening; in neighbouring Rwanda, many narratives were spun out because a one-man professional had to entertain his patron for a whole evening, with every production checked by fellow specialists and errors punishable. Frequently, glosses or commentaries were presented parallel to the narrative, sometimes answering questions from the audience to ensure understanding, although often someone would learn a tradition without asking their master questions and not really understand the meaning of its content, leading them to speculate in the commentary. Oral traditions exist only when they are told, living otherwise only in people's minds, and so the frequency with which a tradition is told aids its preservation. These African ethnic groups also utilize oral tradition to develop and train the intellect and the memory, to retain information, and to sharpen the imagination.
West Africa
Perhaps the most famous repository of oral tradition is the West African griot (named differently in different languages). The griot is a hereditary position and exists in Dyula, Soninke, Fula, Hausa, Songhai, Wolof, Serer, and Mossi societies among many others, although most famously in Mandinka society. Griots constitute a caste and perform a range of roles, including historian or living library, musician, poet, mediator of family and tribal disputes, and spokesperson; they also served in the king's court, in a role not dissimilar to that of the European bard. They keep records of all births, deaths, and marriages through the generations of the village or family. When Sundiata Keita founded the Mali Empire, he was offered Balla Fasséké as his griot to advise him during his reign, giving rise to the Kouyate line of griots. Griots often accompany their telling of oral tradition with a musical instrument, as the Epic of Sundiata is accompanied by the balafon, or as the kora accompanies other traditions. In modern times, some griots and descendants of griots have dropped their historian role and focus on music, with many finding success; however, many still maintain their traditional roles.
East Africa
North Africa
Central Africa
Southern Africa
Europe
Albania
Albanian traditions have been handed down orally across generations. They have been preserved through traditional memory systems that survived intact into modern times in Albania, a phenomenon explained by the lack of state formation among Albanians and their ancestors, the Illyrians, which enabled them to preserve their "tribally" organized society. This distinguished them from civilizations such as those of ancient Egypt, the Minoans and the Mycenaeans, which underwent state formation and saw their traditional memory practices disrupted.
Albanian epic poetry has been analysed by Homeric scholars seeking a better understanding of the Homeric epics. The long oral tradition that has sustained Albanian epic poetry reinforces the idea that pre-Homeric epic poetry was oral. The theory of oral-formulaic composition was also developed through the scholarly study of Albanian epic verse. The Albanian traditional singing of epic verse from memory is one of the last survivors of its kind in modern Europe, and the last survivor of the Balkan traditions.
Ancient Greece
"All ancient Greek literature", states Steve Reece, "was to some degree oral in nature, and the earliest literature was completely so". Homer's epic poetry, states Michael Gagarin, "was largely composed, performed and transmitted orally". As folklores and legends were performed in front of distant audiences, the singers would substitute the names in the stories with local characters or rulers to give the stories a local flavor and thus connect with the audience, but making the historicity embedded in the oral tradition unreliable. The lack of surviving texts about the Greek and Roman religious traditions have led scholars to presume that these were ritualistic and transmitted as oral traditions, but some scholars disagree that the complex rituals in the ancient Greek and Roman civilizations were an exclusive product of an oral tradition.
Ireland
An Irish seanchaí (plural: seanchaithe), meaning bearer of "old lore", was a traditional Irish language storyteller (the Scottish Gaelic equivalent being the seanchaidh, anglicised as shanachie). The job of a seanchaí was to serve the head of a lineage by passing information orally from one generation to the next about Irish folklore and history, particularly in medieval times.
Rome
The potential for oral transmission of history in ancient Rome is evidenced primarily by Cicero, who discusses the significance of oral tradition in works such as Brutus, Tusculan Disputations, and On The Orator. While Cicero’s reliance on Cato’s Origines may limit the breadth of his argument, he nonetheless highlights the importance of storytelling in preserving Roman history. Valerius Maximus also references oral tradition in Memorable Doings and Sayings (2.1.10).
Wiseman argues that celebratory performances served as a vital medium for transmitting Roman history and that such traditions evolved into written forms by the third century BCE. He asserts that the history of figures like the house of Tarquin was likely passed down through oral storytelling for centuries before being recorded in literature. Although Flower critiques the lack of ancient evidence supporting Wiseman's broader claims, Wiseman maintains that dramatic narratives fundamentally shaped historiography.
Asia
In Asia, the transmission of folklore and mythologies, as well as scriptures, in ancient India and its different religions was by oral tradition, preserved with precision with the help of elaborate mnemonic techniques.
According to Goody, the Vedic texts likely involved both a written and an oral tradition; he calls them "parallel products of a literate society". More recently, research has shown that oral performance of (written) texts could be a philosophical activity in early China.
In India, the primary Hindu scriptures, the Vedas, are a well-known example of oral tradition. Pundits who memorized three Vedas were called Trivedis, and those who memorized four Vedas were called Chaturvedis. By transferring this knowledge from generation to generation, Hindus preserved their ancient mantras in the Vedas.
The early Buddhist texts are also generally believed to be of oral tradition. Scholars first established this by comparing inconsistencies in the transmitted versions of literature from various oral societies, such as the Greek and Serbian cultures, and then noted that the Vedic literature is too consistent and vast to have been composed and transmitted orally across generations without being written down.
Middle East
In the Middle East, Arabic oral tradition has significantly influenced literary and cultural practices. Arabic oral tradition encompassed various forms of expression, including metrical poetry, unrhymed prose, rhymed prose (saj'), and prosimetrum—a combination of prose and poetry often employed in historical narratives. Poetry held a position of particular importance, as it was believed to be a more reliable medium for information transmission than prose. This belief stemmed from observations that highly structured language, with its rhythmic and phonetic patterns, tended to undergo fewer alterations during oral transmission.
Each genre of rhymed poetry served distinct social and cultural functions, ranging from spontaneous compositions at celebrations to carefully crafted historical accounts, political commentaries, and entertainment pieces. Among these, the folk epics known as siyar (singular: sīra) were considered the most intricate. These prosimetric narratives, combining prose and verse, emerged in the early Middle Ages. While many such epics circulated historically, only one has survived as a sung oral poetic tradition: Sīrat Banī Hilāl. This epic recounts the westward migration and conquests of the Banu Hilal Bedouin tribe from the 10th to 12th centuries, culminating in their rule over parts of North Africa before their eventual defeat. The historical roots of Sīrat Banī Hilāl are evident in the present-day distribution of groups claiming descent from the tribe across North Africa and parts of the Middle East. The epic's development into a cohesive narrative was first documented by the historian Ibn Khaldūn in the 14th century. In his writings, Ibn Khaldūn describes collecting stories and poems from nomadic Arabs, using these oral sources to discuss the merits of colloquial versus classical poetry and the value of oral histories in written historical works.
The Torah and other ancient Jewish literature, the Judeo-Christian Bible, and the texts of the early centuries of Christianity are rooted in an oral tradition, and the term "People of the Book" is a medieval construct. This is evidenced, for example, by the multiple scriptural statements in which Paul acknowledges "previously remembered tradition which he received" orally.
Oceania
Australia
Australian Aboriginal culture has thrived on oral traditions and oral histories passed down through thousands of years.
In a study published in February 2020, new evidence showed that both Budj Bim and Tower Hill volcanoes erupted between 34,000 and 40,000 years ago. Significantly, this is a "minimum age constraint for human presence in Victoria", and it could also be interpreted as evidence that the oral histories of the Gunditjmara people, an Aboriginal Australian people of south-western Victoria, which tell of volcanic eruptions, are some of the oldest oral traditions in existence. A basalt stone axe found underneath volcanic ash in 1947 had already proven that humans inhabited the region before the eruption of Tower Hill.
Americas
Native American
Native American society has always relied upon oral tradition, and storytelling in particular, to convey knowledge, morals and traditions, a trait Western settlers deemed to mark an inferior race with neither culture nor history, and one often cited as a justification for indoctrination.
Writing systems are not known to have existed among Native North Americans before contact with Europeans, except among some Mesoamerican cultures and possibly in the South American quipu and North American wampum, although those two are debatable. Oral storytelling traditions flourished in a context without the use of writing to record and preserve history, scientific knowledge, and social practices. While some stories were told for amusement and leisure, most functioned as practical lessons from tribal experience applied to immediate moral, social, psychological, and environmental issues. Stories fuse fictional, supernatural, or otherwise exaggerated characters and circumstances with real emotions and morals as a means of teaching. Plots often reflect real life situations and may be aimed at particular people known by the story's audience. In this way, social pressure could be exerted without directly causing embarrassment or social exclusion. For example, rather than yelling, Inuit parents might deter their children from wandering too close to the water's edge by telling a story about a sea monster with a pouch for children within its reach. One single story could provide dozens of lessons. Stories were also used as a means to assess whether traditional cultural ideas and practices remained effective in tackling contemporary circumstances or should be revised.
Native American storytelling is a collaborative experience between storyteller and listeners. Native American tribes generally have not had professional tribal storytellers marked by social status. Stories could and can be told by anyone, with each storyteller using their own vocal inflections, word choice, content, or form. Storytellers not only draw upon their own memories, but also upon a collective or tribal memory extending beyond personal experience but nevertheless representing a shared reality. Native languages have in some cases up to twenty words to describe physical features like rain or snow and can describe the spectra of human emotion in very precise ways, allowing storytellers to offer their own personalized take on a story based on their own lived experiences. Fluidity in story deliverance allowed stories to be applied to different social circumstances according to the storyteller's objective at the time. One's rendition of a story was often considered a response to another's rendition, with plot alterations suggesting alternative ways of applying traditional ideas to present conditions. Listeners might have heard the story told many times, or even may have told the same story themselves. This does not take away from a story's meaning, as curiosity about what happens next was less of a priority than hearing fresh perspectives on well-known themes and plots. Elder storytellers generally were not concerned with discrepancies between their version of historical events and neighboring tribes' version of similar events, such as in origin stories. Tribal stories are considered valid within the tribe's own frame of reference and tribal experience. The 19th century Oglala Lakota tribal member Four Guns was known for his justification of the oral tradition and criticism of the written word.
Stories are used to preserve and transmit both tribal history and environmental history, which are often closely linked. Native oral traditions in the Pacific Northwest, for example, describe natural disasters like earthquakes and tsunamis. Various cultures from Vancouver Island and Washington have stories describing a physical struggle between a Thunderbird and a Whale. One such story tells of the Thunderbird, which can create thunder by moving just a feather, piercing the Whale's flesh with its talons, causing the Whale to dive to the bottom of the ocean, bringing the Thunderbird with it. Another depicts the Thunderbird lifting the Whale from the Earth then dropping it back down. Regional similarities in themes and characters suggest that these stories mutually describe the lived experience of earthquakes and floods within tribal memory. According to one story from the Suquamish Tribe, Agate Pass was created when an earthquake expanded the channel as a result of an underwater battle between a serpent and bird. Other stories in the region depict the formation of glacial valleys and moraines and the occurrence of landslides, with stories being used in at least one case to identify and date earthquakes that occurred in 900 CE and 1700 CE. Further examples include Arikara origin stories of emergence from an "underworld" of persistent darkness, which may represent the remembrance of life in the Arctic Circle during the last ice age, and stories involving a "deep crevice", which may refer to the Grand Canyon. Despite such examples of agreement between geological and archeological records on one hand and Native oral records on the other, some scholars have cautioned against the historical validity of oral traditions because of their susceptibility to detail alteration over time and lack of precise dates. The Native American Graves Protection and Repatriation Act considers oral traditions a viable source of evidence for establishing the affiliation between cultural objects and Native Nations.
Transmission
Oral traditions face the challenge of accurate transmission and of verifying the accuracy of a version, particularly when the culture lacks a written language or has limited access to writing tools. Oral cultures have employed various strategies that achieve this without writing. For example, heavily rhythmic speech filled with mnemonic devices enhances memory and recall. A few useful mnemonic devices include alliteration, repetition, assonance, and proverbial sayings. In addition, verse is often metrically composed with an exact number of syllables or morae, as in Greek and Latin prosody and in the chandas found in Hindu and Buddhist texts. The verses of an epic or text are typically designed so that long and short syllables are repeated according to certain rules; if an error or inadvertent change is made, an internal examination of the verse reveals the problem. Oral traditions can also be passed on through plays and acting, as shown in modern-day Cameroon by the Graffis or Grasslanders, who perform and deliver speeches to teach their history through oral tradition. Such strategies facilitate transmission of information without a written intermediary, and they can also be applied to oral governance.
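The metrical constraints described above function much like a checksum: a verse that gains or loses a syllable no longer scans, so corruption is detectable without consulting any written original. The following is a minimal sketch of that idea only, assuming a toy vowel-run heuristic for syllable counting; real Greek, Latin, or Sanskrit prosody is far more elaborate, and the function names and sample verses here are purely illustrative.

```python
# Illustrative sketch: a fixed syllable count per verse acts as an
# error-detecting "checksum", as strict meter does in oral transmission.
# The naive vowel-run count below is a stand-in for real prosody.

import re

def count_syllables(line: str) -> int:
    """Approximate syllables as runs of consecutive vowels (toy heuristic)."""
    return len(re.findall(r"[aeiou]+", line.lower()))

def find_corrupted_verses(verses: list[str], expected: int) -> list[int]:
    """Return indices of verses whose syllable count breaks the meter."""
    return [i for i, v in enumerate(verses) if count_syllables(v) != expected]

verses = [
    "arma virumque cano troiae qui primus ab oris",  # intact line
    "arma virumque cano qui primus ab oris",         # a word lost in transmission
]
print(find_corrupted_verses(verses, expected=14))  # -> [1]: the shortened verse
```

The point is the redundancy, not the heuristic: because the meter is fixed, a reciter needs only the rule, not a manuscript, to notice that something has gone missing.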
Oral transmission of law
Rudyard Kipling's The Jungle Book provides an excellent demonstration of oral governance in the Law of the Jungle. Not only does grounding rules in oral proverbs allow for simple transmission and understanding, but it also legitimizes new rulings by allowing extrapolation. These stories, traditions, and proverbs are not static, but are often altered upon each transmission, barring any change to the overall meaning. In this way, the rules that govern the people are modified by the whole and not authored by a single entity.
Indian religions
Ancient texts of Hinduism, Buddhism and Jainism were preserved and transmitted by an oral tradition. Examples include the śrutis of Hinduism, called the Vedas, the oldest of which trace back to the second millennium BCE. Michael Witzel explains this oral tradition as follows:
Ancient Indians developed techniques for listening, memorization and recitation of their knowledge, in schools called Gurukul, while maintaining exceptional accuracy of their knowledge across the generations. Many forms of recitation or pathas were designed to aid accuracy in recitation and the transmission of the Vedas and other knowledge texts from one generation to the next. All hymns in each Veda were recited in this way; for example, all 1,028 hymns, comprising 10,600 verses, of the Rigveda were preserved thus, as were all other Vedas, including the Principal Upanishads and the Vedangas. Each text was recited in a number of ways, so that the different methods of recitation acted as cross-checks on one another. Pierre-Sylvain Filliozat summarizes this as follows:
Samhita-patha: continuous recitation of Sanskrit words bound by the phonetic rules of euphonic combination;
Pada-patha: a recitation marked by a conscious pause after every word, and after any special grammatical codes embedded inside the text; this method suppresses euphonic combination and restores each word in its original intended form;
Krama-patha: a step-by-step recitation in which euphonically combined words are paired successively and sequentially and then recited; for example, a hymn "word1 word2 word3 word4 ..." would be recited as "word1word2 word2word3 word3word4 ..." (see the sketch after this list); this method of verifying accuracy is credited to the Vedic sages Gargya and Sakalya in the Hindu tradition and is mentioned by the ancient Sanskrit grammarian Panini (dated to the pre-Buddhism period);
Krama-patha modified: the same step-by-step recitation as above, but without euphonic-combinations (or free form of each word); this method to verify accuracy is credited to Vedic sages Babhravya and Galava in the Hindu tradition, and is also mentioned by the ancient Sanskrit grammarian Panini;
Jata-patha, dhvaja-patha and ghana-patha are methods of recitation of a text and its oral transmission that developed after the 5th century BCE, that is, after the start of Buddhism and Jainism; these methods use more complicated rules of combination and were less used.
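The krama pairing above lends itself to a compact illustration. This is a minimal sketch of the combinatorial idea only, using the placeholder words from the example in the list; the sandhi (euphonic combination) rules on which the real practice depends are ignored, and the function names are invented for illustration.

```python
# Illustrative sketch of krama-patha: successive words are recited in
# overlapping pairs, so a dropped or altered word corrupts two pairs
# and is easy to localize during cross-checking.

def krama_patha(words: list[str]) -> list[str]:
    """Build the pairs word1word2, word2word3, word3word4, ..."""
    return [words[i] + words[i + 1] for i in range(len(words) - 1)]

def consistent(pada: list[str], krama: list[str]) -> bool:
    """Cross-check: the krama recitation must match what the pada text implies."""
    return krama_patha(pada) == krama

pada = ["word1", "word2", "word3", "word4"]  # word-by-word (pada) recitation
krama = krama_patha(pada)
print(krama)                    # ['word1word2', 'word2word3', 'word3word4']
print(consistent(pada, krama))  # True: the two recitations agree
print(consistent(["word1", "word3", "word4"], krama))  # False: omission detected
```

Because every word except the first and last occurs in two pairs, the two recitation styles redundantly encode the text and act as cross-checks on each other, which is precisely the function the tradition assigns to the pathas.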
These extraordinary retention techniques guaranteed an accurate Śruti, fixed across the generations, not just in terms of unaltered word order but also in terms of sound. That these methods have been effective is testified to by the preservation of the most ancient Indian religious text, the Rigveda.
Poetry of Homer
Research by Milman Parry and Albert Lord indicates that the verse of the Greek poet Homer was passed down not by rote memorization but by "oral-formulaic composition". In this process, extempore composition is aided by the use of stock phrases or "formulas" (expressions that are used regularly "under the same metrical conditions, to express a particular essential idea"). In the case of the work of Homer, formulas included eos rhododaktylos ("rosy-fingered dawn") and oinops pontos ("wine-dark sea"), which fit in a modular fashion into the poetic form (in this case six-colon Greek hexameter). Since the development of this theory, oral-formulaic composition has been "found in many different time periods and many different cultures", and according to another source (John Miles Foley) it has "touch[ed] on" over 100 "ancient, medieval and modern traditions".
Islam
The most recent of the world's major religions, Islam claims two major sources of divine revelation—the Quran and hadith—compiled in written form relatively shortly after being revealed:
The Quran—meaning "recitation" in Arabic—is believed by Muslims to be God's revelation to the Islamic prophet Muhammad, delivered to him from 610 CE until his death in 632 CE. It is said to have been carefully compiled and edited into a standardized written form (known as the mushaf) about two decades after the last verse was revealed.
Hadith—meaning "narrative" or "report" in Arabic—is the record of the words, actions, and silent approval of Muhammad, and was transmitted by "oral preachers and storytellers" for around 150–250 years. Each hadith includes the isnad (the chain of human transmitters who passed down the tradition) before it was sorted according to accuracy, compiled, and committed to written form by a reputable scholar.
The oral milieu in which the sources were revealed, and their oral form in general, are important. The Arab poetry that preceded the Quran and the hadith were orally transmitted. Few Arabs were literate at the time, and paper was not available in the Middle East.
The written Quran is said to have been created in part through memorization by Muhammad's companions, and the decision to create a standard written work is said to have come after the death in the Battle of Yamama of a large number of Muslims who had memorized the work.
For centuries, copies of the Quran were transcribed by hand, not printed, and their scarcity and expense made reciting the Quran from memory, not reading, the predominant mode of teaching it to others. To this day the Quran is memorized by millions, and its recitation can be heard throughout the Muslim world from recordings and mosque loudspeakers (during Ramadan). Muslims state that some who teach memorization/recitation of the Quran constitute the end of an "unbroken chain" whose original teacher was Muhammad himself. It has been argued that "the Qur'an's rhythmic style and eloquent expression make it easy to memorize," and that it was made so to facilitate the "preservation and remembrance" of the work.
Islamic doctrine holds that from the time it was revealed to the present day, the Quran has not been altered, its continuity from divine revelation to its current written form ensured by the large numbers of Muhammad's supporters who had reverently memorized the work, a careful compiling process, and divine intervention. (Muslim scholars agree that although scholars have worked hard to separate the corrupt and uncorrupted hadith, this other source of revelation is not nearly so free of corruption, because of the hadith's great political and theological influence.)
At least two non-Muslim scholars (Alan Dundes and Andrew G. Bannister) have examined the possibility that the Quran was not just "recited orally, but actually composed orally". Bannister postulates that some parts of the Quran—such as the seven re-tellings of the story of the Iblis and Adam, and the repeated phrases "which of the favours of your Lord will you deny?" in sura 55—make more sense addressed to listeners than readers.
Bannister, Dundes and other scholars (Shabbir Akhtar, Angelika Neuwirth, Islam Dayeh) have also noted the large amount of "formulaic" phraseology in the Quran, consistent with the "oral-formulaic composition" mentioned above. The most common formulas are the attributes of Allah—all-mighty, all-wise, all-knowing, all-high, etc.—often found as doublets at the end of a verse. Among the other repeated phrases is "Allah created the heavens and the earth" (found 19 times in the Quran).
As much as one third of the Quran is made up of "oral formulas", according to Dundes' estimates. Bannister, using a computer database of the (original Arabic) words of the Quran and of their "grammatical role, root, number, person, gender and so forth", estimates that, depending on the length of the phrase searched, somewhere between 52% (three-word phrases) and 23% (five-word phrases) of the text consists of oral formulas. Dundes reckons his estimates confirm "that the Quran was orally transmitted from its very beginnings". Bannister believes his estimates "provide strong corroborative evidence that oral composition should be seriously considered as we reflect upon how the Qur'anic text was generated".
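Bannister's measurement can be approximated in miniature. The sketch below estimates what share of a text's word positions fall inside n-word phrases that recur elsewhere in the same text. It is an assumption-laden toy: it matches surface word forms on an invented sample, whereas Bannister's database also tracked grammatical role, root, number, person and gender.

```python
# Toy estimate of "formulaic density": the fraction of word positions
# covered by an n-gram that occurs more than once in the text.

from collections import Counter

def formulaic_density(words: list[str], n: int) -> float:
    """Share of word positions lying inside repeated n-word phrases."""
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    repeated = {g for g, c in Counter(ngrams).items() if c > 1}
    covered = set()
    for i, g in enumerate(ngrams):
        if g in repeated:
            covered.update(range(i, i + n))  # mark every word in the phrase
    return len(covered) / len(words) if words else 0.0

text = ("praise allah the all wise surely allah the all knowing "
        "sees and allah the all wise decrees").split()
print(round(formulaic_density(text, 3), 2))  # -> 0.65: short phrases recur often
print(round(formulaic_density(text, 5), 2))  # -> 0.0: no five-word phrase repeats
```

As in Bannister's figures (52% for three-word phrases versus 23% for five-word phrases), the density falls as the phrase length grows, since longer word sequences are less likely to recur verbatim.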
Dundes argues that oral-formulaic composition is consistent with "the cultural context of Arabic oral tradition", quoting researchers who have found poetry reciters in the Najd (the region next to where the Quran was revealed) using "a common store of themes, motives, stock images, phraseology and prosodical options" and "a discursive and loosely structured" style "with no fixed beginning or end" and "no established sequence in which the episodes must follow".
Catholicism
The Catholic Church upholds that its teaching contained in its deposit of faith is transmitted not only through scripture but also through sacred tradition. The Second Vatican Council affirmed in Dei verbum that the teachings of Jesus Christ were initially passed on to early Christians by "the Apostles who, by their oral preaching, by example, and by observance handed on what they had received from the lips of Christ, from living with Him, and from what He did". The Catholic Church asserts that this mode of transmission of the faith persists through current-day bishops, who, by right of apostolic succession, have continued the oral passing of what had been revealed through Christ through their preaching as teachers.
Music
Study
Historiography
Jan Vansina, who specialised in the history of Central Africa, pioneered the study of oral tradition in his book Oral Tradition as History (1985). Vansina differentiates between oral and literate civilisations according to whether emphasis is placed on the sanctity of the written or the spoken word, with societies that revere the spoken word much more likely to use oral tradition and oral literature even when a writing system has been developed or is accessible. The Akan proverbs translated as "Ancient things in the ear" and "Ancient things are today" refer to present-day delivery and past content; oral traditions are thus simultaneously expressions of the past and of the present, and Vansina holds that to ignore either side of this duality would be reductionistic.
Development within Europe
An early landmark was the work of the Serb scholar Vuk Stefanović Karadžić (1787–1864), a contemporary and friend of the Brothers Grimm. Vuk pursued similar projects of "salvage folklore" (similar to rescue archaeology) in the cognate traditions of the South Slavic regions which would later be gathered into Yugoslavia, and with the same admixture of romantic and nationalistic interests (he considered all those speaking the Eastern Herzegovinian dialect to be Serbs). Somewhat later, but as part of the same scholarly enterprise of nationalist studies in folklore, the turcologist Vasily Radlov (1837–1918) would study the songs of the Kara-Kirghiz in what would later become the Soviet Union; Karadžić and Radlov would provide models for the work of Parry.
Walter Ong
In a separate development, the media theorist Marshall McLuhan (1911–1980) would begin to focus attention on the ways that communicative media shape the nature of the content conveyed. He would serve as mentor to the Jesuit Walter Ong (1912–2003), whose interests in cultural history, psychology and rhetoric would result in Orality and Literacy (Methuen, 1982) and the important but less-known Fighting for Life: Contest, Sexuality and Consciousness (Cornell, 1981). These two works articulated the contrasts between cultures defined by primary orality, writing, print, and the secondary orality of the electronic age.
{|style="border:1px; border: thin solid white; background-color:#f6f6FF; margin:20px;" cellpadding="10"
|-
| I style the orality of a culture totally untouched by any knowledge of writing or print, 'primary orality'. It is 'primary' by contrast with the 'secondary orality' of present-day high technology culture, in which a new orality is sustained by telephone, radio, television and other electronic devices that depend for their existence and functioning on writing and print. Today primary culture in the strict sense hardly exists, since every culture knows of writing and has some experience of its effects. Still, to varying degrees many cultures and sub-cultures, even in a high-technology ambiance, preserve much of the mind-set of primary orality.
|}
Ong's works also made possible an integrated theory of oral tradition which accounted for both production of content (the chief concern of Parry-Lord theory) and its reception. This approach, like McLuhan's, kept the field open not just to the study of aesthetic culture but to the way physical and behavioral artifacts of oral societies are used to store, manage and transmit knowledge, so that oral tradition provides methods for investigation of cultural differences, other than the purely verbal, between oral and literate societies.
The most-often studied section of Orality and Literacy concerns the "psychodynamics of orality". This chapter seeks to define the fundamental characteristics of 'primary' orality and summarizes a series of descriptors (including but not limited to verbal aspects of culture) which might be used to index the relative orality or literacy of a given text or society.
John Miles Foley
In advance of Ong's synthesis, John Miles Foley began a series of papers based on his own fieldwork on South Slavic oral genres, emphasizing the dynamics of performers and audiences. Foley effectively consolidated oral tradition as an academic field when he compiled Oral-Formulaic Theory and Research in 1985. The bibliography gives a summary of the progress scholars had made in evaluating the oral tradition up to that point, and includes a list of all relevant scholarly articles relating to the theory of oral-formulaic composition. He also established the journal Oral Tradition and founded the Center for Studies in Oral Tradition (1986) at the University of Missouri. Foley developed oral theory beyond the somewhat mechanistic notions presented in earlier versions of oral-formulaic theory, by extending Ong's interest in cultural features of oral societies beyond the verbal, by drawing attention to the agency of the bard, and by describing how oral traditions bear meaning.
The bibliography would establish a clear underlying methodology which accounted for the findings of scholars working in the separate Linguistics fields (primarily Ancient Greek, Anglo-Saxon and Serbo-Croatian). Perhaps more importantly, it would stimulate conversation among these specialties, so that a network of independent but allied investigations and investigators could be established.
Foley's key works include The Theory of Oral Composition (1988); Immanent Art (1991); Traditional Oral Epic: The Odyssey, Beowulf and the Serbo-Croatian Return-Song (1993); The Singer of Tales in Performance (1995); Teaching Oral Traditions (1998); How to Read an Oral Poem (2002). His Pathways Project (2005–2012) draws parallels between the media dynamics of oral traditions and the Internet.
Acceptance and further elaboration
The theory of oral tradition would undergo elaboration and development as it grew in acceptance. While the number of formulas documented for various traditions proliferated, the concept of the formula remained lexically bound. However, numerous innovations appeared, such as the "formulaic system" with structural "substitution slots" for syntactic, morphological and narrative necessity (as well as for artistic invention). Sophisticated models such as Foley's "word-type placement rules" followed. Higher levels of formulaic composition were defined over the years, such as "ring composition", "responsion" and the "type-scene" (also called a "theme" or "typical scene"). Examples include the "Beasts of Battle" and the "Cliffs of Death". Some of these characteristic patterns of narrative details (like "the arming sequence", "the hero on the beach", "the traveler recognizes his goal") would show evidence of global distribution.
At the same time, the fairly rigid division between oral and literate was replaced by recognition of transitional and compartmentalized texts and societies, including models of diglossia (Brian Stock, Franz Bäuml, and Eric Havelock). Perhaps most importantly, the terms and concepts of "orality" and "literacy" came to be replaced with the more useful and apt "traditionality" and "textuality". Very large units would be defined (the Indo-European Return Song), and areas outside of military epic would come under investigation: women's song, riddles and other genres.
The methodology of oral tradition now conditions a large variety of studies, not only in folklore, literature and literacy, but in philosophy, communication theory and semiotics, covering a very broad and continually expanding variety of languages and ethnic groups, and perhaps most conspicuously in biblical studies, in which Werner Kelber has been especially prominent. The annual bibliography is indexed by 100 areas, most of which are ethnolinguistic divisions.
Present developments explore the implications of the theory for rhetoric and composition, interpersonal communication, cross-cultural communication, postcolonial studies, rural community development, popular culture and film studies and many other areas. The most significant areas of theoretical development at present may be the construction of systematic hermeneutics and aesthetics specific to oral traditions.
Criticism and debates
The theory of oral tradition encountered early resistance from scholars who perceived it as potentially supporting either one side or another in the controversy between what were known as "unitarians" and "analysts"—that is, scholars who believed Homer to have been a single, historical figure, and those who saw him as a conceptual "author function", a convenient name to assign to what was essentially a repertoire of traditional narrative. A much more general dismissal of the theory and its implications simply described it as "unprovable". Some scholars, mainly outside the field of oral tradition, represent (either dismissively or with approval) this body of theoretical work as reducing the great epics to children's party games like "telephone" or "Chinese whispers". While such games provide amusement by showing how messages distort content via uncontextualized transmission, Parry's supporters argue that the theory of oral tradition reveals how oral methods optimized the signal-to-noise ratio and thus improved the quality, stability and integrity of content transmission.
There were disputes concerning particular findings of the theory. For example, those trying to support or refute Crowne's hypothesis found the "Hero on the Beach" formula in numerous Old English poems. Similarly, it was also discovered in other works of Germanic origin, Middle English poetry, and even an Icelandic prose saga. J. A. Dane, in an article characterized as "polemics without rigor", claimed that the appearance of the theme in Ancient Greek poetry, a tradition without known connection to the Germanic, invalidated the notion of "an autonomous theme in the baggage of an oral poet".
Within Homeric studies specifically, Lord's The Singer of Tales, which focused on problems and questions that arise in conjunction with applying oral-formulaic theory to problematic texts such as the Iliad, Odyssey, and even Beowulf, influenced nearly all of the articles written on Homer and oral-formulaic composition thereafter. However, in response to Lord, Geoffrey Kirk published The Songs of Homer, questioning Lord's extension of the oral-formulaic nature of Serbian and Croatian literature (the area from which the theory was first developed) to Homeric epic. Kirk argues that Homeric poems differ from those traditions in their "metrical strictness", "formular system[s]", and creativity. In other words, Kirk argued that Homeric poems were recited under a system that gave the reciter much more freedom to choose words and passages to get to the same end than the Serbo-Croatian poet, who was merely "reproductive". Shortly thereafter, Eric Havelock's Preface to Plato revolutionized how scholars looked at Homeric epic by arguing not only that it was the product of an oral tradition, but also that the oral formulas contained therein served as a way for ancient Greeks to preserve cultural knowledge across many different generations. Adam Parry, in his 1966 work "Have We Homer's Iliad?", theorized the existence of the most fully developed oral poet of his time, a person who could (at his discretion) creatively and intellectually create nuanced characters in the context of the accepted, traditional story. In fact, he discounted the Serbo-Croatian tradition to an "unfortunate" extent, choosing to elevate the Greek model of oral tradition above all others. Lord reacted to Kirk's and Parry's essays with "Homer as Oral Poet", published in 1968, which reaffirmed his belief in the relevance of Yugoslav poetry and its similarities to Homer, and downplayed the intellectual and literary role of the reciters of Homeric epic.
Many of the criticisms of the theory have been absorbed into the evolving field as useful refinements and modifications. For example, in what Foley called a "pivotal" contribution, Larry Benson introduced the concept of "written-formulaic" to describe the status of some Anglo-Saxon poetry which, while demonstrably written, contains evidence of oral influences, including heavy reliance on formulas and themes. A number of individual scholars in many areas continue to have misgivings about the applicability of the theory or the aptness of the South Slavic comparison, and particularly what they regard as its implications for the creativity which may legitimately be attributed to the individual artist. However, at present, there seems to be little systematic or theoretically coordinated challenge to the fundamental tenets of the theory; as Foley put it, "there have been numerous suggestions for revisions or modifications of the theory, but the majority of controversies have generated further understanding".
Advantages and disadvantages
Many historians, particularly those trained in Europe, argue vehemently against the use of oral tradition as a reliable source in professional education and research. However, this belief often corresponds directly with a dismissive attitude towards African and Native American stories, considering them useful to anthropologists but not to fact-driven historians (Babatunde 19). This misconception is reinforced by the fact that there are limited physical materials available to confirm or deny these oral histories, but that does not render them irrelevant.
A variety of communities throughout the continent of Africa successfully used oral tradition to share information and reconstruct massive collections of histories, owing to limited access to writing before Europeans arrived. Certain non-literate societies still exist today and maintain a dependence on oral sources to communicate with each other and to carry on customs, traditions, folklore, and more. Oral traditions in these cultures are an invaluable source of comfort, authority, and spiritual validation, and represent a psychological release from what might be difficult social and institutional circumstances (Babatunde 18). Oral tradition itself is a method of analyzing cultural and social differences, and the manner in which stories are told, the storytellers themselves, the endurance of specific tales over time, and their general structure all provide sharp insight into who these people are and how they have lived for centuries. This complex understanding facilitates a much more thorough understanding of history.
"Anything, material or immaterial that bears witness to the past is a historical document or source" (Babatunde 18). Oral tradition connects the future to the past and has been honored in African countries as sophisticated historical scholarship and used in many research studies. This broad definition of a "historical source" forces archivists to redesign their perspective on what constitutes historical research and expand their view to include the voices of those who have little more than their own stories to share with the rest of the world. This condemns the overlooking of African oral sources and challenges the preconceptions held within Western education and formal institutions. [citation needed]
Most arguments against the use of oral tradition as a reliable source for historical writing cite a lack of chronological precision and uncertainty about the specific dates and times at which events may have occurred, which can lead to misinterpretations or incorrect historical conclusions. Oral tradition has been subject to distortion over time and has produced incorrect information; however, the transformation of an oral tradition can itself provide information about the transformation of a community, and it can allow the researcher to measure changing social attitudes and customs over time through the changes seen in storytelling.
It is important for a historian to understand the potential shortcomings or obstacles one might face when using oral tradition as an educational source (Babatunde 19). During the process, the researcher may have trouble placing certain events or differentiating between similar stories that exist within one tribe. The transmission of oral histories across generations makes them more susceptible to inaccuracies and discrepancies, which cause greater difficulty when working with them. It is necessary to maintain patience and devote a significant amount of time to deciphering oral tradition.
As a result of the widespread overlooking of these oral traditions, there are endless opportunities for discovery and further research into how oral tradition has formed these civilizations and corresponds to other parts of history. This field of exploration can open doors to revolutionary findings or expose further crimes committed by the British and American imperial governments. This is a largely untapped pool of rich and fascinating culture that should be further explored but has been neglected by much of the Western world.
See also
:Category:Oral tradition
:Category:Oral literature
:Category:Folklore
American Indian elder
Folk process
"Gawęda" (a genre of Polish oral folklore)
"Gold Duck" (Polish: "Złota Kaczka")
Griot
Hadith
Intangible culture
Oral history
Oral law
Oral Torah
Oral Tradition Journal
Oral-formulaic composition
Orality
Panchatantra
Parampara
Patha, Śrauta
Secondary orality
Traditional knowledge
Understanding Media
World Oral Literature Project
References
Notes
Citations
Bibliography
Foley, John Miles. Oral Formulaic Theory and Research: An Introduction and Annotated Bibliography. NY: Garland, 1985
Foley, John Miles. The Theory of Oral Composition: History and Methodology. Bloomington: Indiana University Press, 1988
External links
Back to the Oral Tradition
Folk Tales from around the world
The Center for Studies in Oral Tradition
The Milman Parry Collection of Oral Literature Online
Oral Tradition Journal
The World Oral Literature Project
Post-Gutenberg Galaxy
Dédalo Project. Open Software Platform for Management of Intangible Cultural Heritage and Oral History
Archive of Turkish Oral Narrative at Texas Tech University
Etiology
Etiology (alternatively spelled aetiology or ætiology) is the study of causation or origination. The word is derived from the Greek word aitiología, meaning "giving a reason for". More completely, etiology is the study of the causes, origins, or reasons behind the way that things are, or the way they function, or it can refer to the causes themselves. The word is commonly used in medicine (pertaining to causes of disease or illness) and in philosophy, but also in physics, biology, psychology, political science, geography, cosmology, spatial analysis and theology in reference to the causes or origins of various phenomena.
In the past, when many physical phenomena were not well understood or when histories were not recorded, myths often arose to provide etiologies. Thus, an etiological myth, or origin myth, is a myth that has arisen, been told over time or written to explain the origins of various social or natural phenomena. For example, Virgil's Aeneid is a national myth written to explain and glorify the origins of the Roman Empire. In theology, many religions have creation myths explaining the origins of the world or its relationship to believers.
Medicine
In medicine, the etiology of an illness or condition refers to the study of the factor or factors that come together to cause the illness. Relatedly, when disease is widespread, epidemiological studies investigate what associated factors, such as location, sex, exposure to chemicals, and many others, make a population more or less likely to have an illness, condition, or disease, thus helping determine its etiology. Sometimes determining etiology is an imprecise process. In the past, the etiology of a common sailor's disease, scurvy, was long unknown. When large, ocean-going ships were built, sailors began to put to sea for long periods of time, and often lacked fresh fruit and vegetables. Without knowing the precise cause, Captain James Cook suspected scurvy was caused by the lack of vegetables in the diet. Based on his suspicion, he forced his crew to eat sauerkraut, a cabbage preparation, every day, and based upon the positive outcomes, he inferred that it prevented scurvy, even though he did not know precisely why. It took about another two hundred years to discover the precise etiology: the lack of vitamin C in a sailor's diet.
The following are examples of intrinsic factors:
Inherited conditions, or conditions that are passed down from one's parents. An example of this is hemophilia, a disorder that leads to excessive bleeding.
Metabolic and endocrine (hormone) disorders, which are abnormalities in the chemical signaling and interaction in the body. For example, diabetes mellitus is an endocrine disease that causes high blood sugar.
Neoplastic disorders, or cancers, in which the cells of the body grow out of control.
Problems with immunity, such as allergies, which are an overreaction of the immune system.
Mythology
An etiological myth, or origin myth, is a myth intended to explain the origins of cult practices, natural phenomena, proper names and the like. For example, the name Delphi and its associated deity, Apollon Delphinios, are explained in the Homeric Hymn which tells of how Apollo, in the shape of a dolphin, propelled Cretans over the seas to make them his priests. While Delphi is actually related to the word delphys ("womb"), many etiological myths are similarly based on folk etymology (the term "Amazon", for example). In the Aeneid (published c. 19 BC), Virgil claims the descent of Augustus Caesar's Julian clan from the hero Aeneas through his son Ascanius, also called Iulus. The story of Prometheus' sacrifice trick at Mecone in Hesiod's Theogony relates how Prometheus tricked Zeus into choosing the bones and fat of the first sacrificial animal rather than the meat, justifying why, after a sacrifice, the Greeks offered the bones wrapped in fat to the gods while keeping the meat for themselves. In Ovid's Pyramus and Thisbe, the origin of the color of mulberries is explained, as the white berries become stained red by the blood gushing forth from the lovers' double suicide.
See also
Backstory
Bradford Hill criteria
Correlation does not imply causation
Creation myth
Just-so story
Just So Stories
Pathology
Pourquoi story
Problem of causation
Involution (esoterism)
References
External links
Causes of conditions
Origin myths
Mythography
Mythology
Origins
Saeculum
A saeculum is a length of time roughly equal to the potential lifetime of a person or, equivalently, the complete renewal of a human population.
Background
Originally it meant the time from the moment that something happened (for example the founding of a city) until the point in time that all people who had lived at the first moment had died. At that point a new saeculum would start. According to legend, the gods had allotted a certain number of saecula to every people or civilization; the Etruscans, for example, had been given ten saecula.
By the 2nd century BC, Roman historians were using the saeculum to periodize their chronicles and track wars. At the time of the reign of Emperor Augustus, the Romans decided that a saeculum was 110 years. In 17 BC, Caesar Augustus organized Ludi saeculares ("saecular games") for the first time to celebrate the "fifth saeculum of Rome". Augustus aimed to link the saeculum with imperial authority.
Emperors such as Claudius and Septimius Severus celebrated the passing of saecula with games at irregular intervals. In 248, Philip the Arab combined Ludi saeculares with the 1,000th anniversary of the founding of Rome. The new millennium that Rome entered was called the saeculum novum, a term that received a metaphysical connotation in Christianity, referring to the worldly age (hence "secular").
Roman emperors legitimised their political authority by referring to the saeculum in various media, linked to a golden age of imperial glory. In response, Christian writers began to define the saeculum as referring to 'this present world', as opposed to the expectation of eternal life in the 'world to come'. This results in the modern sense of 'secular' as 'belonging to the world and its affairs'.
The English word secular, which as an adjective can mean happening once in an age, is derived from the Latin saeculum. The descendants of Latin saeculum in the Romance languages generally mean "century" (i.e., 100 years): French siècle, Spanish siglo, Portuguese século, Italian secolo, etc.
See also
Aeon, comparable Greek concept
Century
Generation
In saecula saeculorum
New world order (politics)
Social cycle theory
Strauss–Howe generational theory
Saeculum obscurum
References
Units of time
Ageing
Latin words and phrases
Colonial roots of gender inequality in Africa
The colonial roots of gender inequality refers to the political, educational, and economic inequalities between men and women in Africa. According to a Global Gender Gap Index report published in 2018, it would take 135 years to close the gender gap in Sub-Saharan Africa and nearly 153 years in North Africa. While much is known about the effects of colonialism on African people generally, less is known about its impacts on women specifically. There are competing theories about the cause of gender inequality in Africa, but scholars suggest its genesis is in slavery and colonialism. For most women, colonialism resulted in an erosion of traditions and rights that formerly granted women equality and esteem. Women in pre-colonial Africa held positions of power and were influential in many aspects of their societies. This changed during the post-colonial period. With new forms of gender inequality introduced, many of the cultural underpinnings of African societies were eroded, and this harm has been challenging to mend. Theoretical frameworks that help to explain the colonial roots of gender inequality include coloniality of power and coloniality of gender. These decolonial concepts provide an account of how gender inequality became situated within the African context and help to explain why present-day inequalities, including women's political underrepresentation, remain significant challenges for Africa.
History
Gender inequality on the African continent was exacerbated by colonialism, which disrupted the pre-colonial economic, cultural, and political systems of the continent. Colonialism introduced patriarchal norms, disrupted traditional African gender roles, and criminalized indigenous practices.
Throughout colonization, European powers altered African communities with their patriarchal norms. As a result, women were cast aside and given inferior positions in the home and in society. Colonialism established the notion that women were subordinate to males and that men should hold all positions of power and authority.
Consequently, traditional African gender roles were transformed. In many pre-colonial African communities, women held significant roles in agriculture and other economic activities. In West Africa, for example, women had much sway over disputes concerning markets and agriculture. With the establishment of colonial legal systems, however, laws were created that granted men precedence over women in matters of marriage and divorce. Much of the pre-colonial activity that women were involved in was thus ignored by colonial officials, who appointed only men to local political positions.
Scholars point to the colonial legacy of African underdevelopment to explain gender inequality and female disempowerment. European settlement in Uganda set off a century-long transformation of Kampala which produced a gender Kuznets curve. African men were educated and employed in the white-collar (high-status) economy built by the Europeans, while women were slower to obtain education and employment in the white-collar economy. This disparity contributed to the gender inequality gap in the early colonial period; the gap then gradually decreased during the late colonial era. Economists believe the gender gap may have been rooted in indigenous social norms. Less-educated women often worked in traditional informal economies rather than formal work, and were consequently subjected to greater marital gender inequality than women who worked in the formal economies created by Europeans.
Literature often characterized African women as subservient to their fathers and husbands. But in pre-colonial Africa, women were queen-mothers, queen-sisters, princesses, chiefs, holders of offices in towns and villages, occasional warriors, and, in one well-known case, that of the Lovedu, the supreme monarch. Yet colonial laws and regulations restricted women's access to land and other resources, which resulted in their exclusion from economic life. In many African communities, colonization displaced women from their traditional roles in society, eroding their prestige and limiting them to passive beneficiaries of support.
Furthermore, many African indigenous traditions, such as widow inheritance, were outlawed or severely restricted by colonial authorities, depriving women of benefits they had received in the pre-colonial period. The criminalization of indigenous ways of life frequently resulted in the demonization of African cultures. By imposing their own rules and ideals, including a Westernized gender ideology that delegitimized native cultural practices, the colonial authorities intended to change the "uncivilized" African societies. As a result, women in post-colonial Africa were not always protected from certain abuses, because their societal or political power was severely limited. Many scholars believe African women became virtually voiceless, unable to gain economic and political equality.
Theories of gender inequality in Africa
Different theoretical frameworks have been identified by scholars as being at the root of gender inequality in Africa. Most theories establish that contemporary African societies cannot be viewed outside the context of European colonialism, as it is through this lens that the oppression and marginalization of women in Africa can be understood. Scholars generally believe that the present-day African patriarchal system is one modeled after the Eurocentric perspective, in which European hierarchical structures were adopted, contributing to the diminution of women's roles in family and home life throughout the continent. They identify notions of coloniality of power and coloniality of gender to help explain the genesis of gender inequality on the continent.
Coloniality of power
Scholars identify the concept of coloniality of power, coined by Aníbal Quijano, to provide a framework for understanding the role of colonialism in producing a power inequality between men and women. This concept, initially advanced in Latin American post-colonial studies, has since been adopted to describe the interactions between Europe and other parts of the world, such as Africa. This theoretical framework establishes that colonialism regulated and continues to regulate various dimensions of African societies, including gender relations, culture, and the economy, among others, thereby establishing that the European imperial project is at the core of the oppression of women in Africa. This theory explains how historical patterns of dominance and social hierarchies are interwoven with contemporary forms of oppression and marginalization.
Patriarchy became accepted in African cultures and solidified the subjugation of women in Africa. According to scholars, patriarchy can be thought of as an ideology or political system where men direct women on what roles they shall or shall not play in society, and women are thought of as inferior to men. Yet, patriarchy was not Africa's primary system of political and social organization prior to colonization. Many matriarchal societies have existed throughout Africa's history where women played important roles and maintained social equilibrium. In pre-colonial Africa, there was no transition from matriarchy to patriarchy since the social structure was fundamentally matriarchal in that women held power, passed down property and lineage, and were the flexible party in marriage and sexual unions. This changed drastically with the introduction of colonialism. Coloniality of power illustrates how the power imbalance between men and women became severe with the introduction of colonialism.
Coloniality of gender
Coloniality of gender outlines that gender cannot be separated from colonialism. This theoretical framework, created by Maria Lugones to explain the role of colonialism in enacting Eurocentric gender structures onto Indigenous people of the Americas, builds on the coloniality of power framework to explain colonialism in Africa, with a deeper consideration of gender, sexuality, and race. From a coloniality of gender perspective, colonialism radically transformed the indigenous sense of identity and gender relations in Africa. European understandings of gender supplanted pre-existing notions of sex and gender that had been established long before the arrival of Europeans. Furthermore, European understandings of gender became a tool for dominance, identifying two binary, hierarchical categories in which women became defined by their subservient relationship to men in all facets of life.
According to prominent de-colonial feminist scholar Oyeronke Oyewumi, "gender was not an organizing principle in Yoruba society prior to colonialization." Oyewumi bases this conclusion on her studies of the Yoruba people in modern-day Benin, Togo, and Nigeria, where she finds that it was the Western world that introduced the idea of gender as a tool for dominance that denotes two binary and hierarchical social categories. As a result, women came to be defined by their relationships to men and were consequently denied access to power, land, and leadership positions in society. With the introduction of gender as a concept, women were created as a distinguishable category that was always subordinate to men in Yoruba culture.
Furthermore, scholars note that colonial authorities viewed African families as places of tradition and custom that needed to be changed through colonial intervention. Through laws, colonial authorities aimed to radically change how African families operated, establishing new gender relationships based on what these authorities considered socially acceptable. Thereby, these authorities established new African "traditions", where communities and families who once deviated from Eurocentric norms, with non-patriarchal family structures or complementary gender relationships, now had to conform or face punishment or mistreatment.
Gender roles in pre-colonial and post-colonial Africa
Women across pre-colonial Africa held positions of prominence and dignity. They played critical social and economic roles and contributed to the family by processing food, weaving, making pottery, and cooking. The roles of women changed drastically in the colonial and post-colonial periods, as Europeans introduced a patriarchal system that devalued women and their contributions.
Pre-colonial Africa
In the pre-colonial era, women were politically active. Women were largely included in important decision-making processes, as they were central figures whose commercial activities were ingrained in the cultural fabric of their societies. They governed the home, which was a very important role with significant power. Because power and privilege were based on age and gender, elder women had a voice in many important issues concerning the family and community. Private and public activities were so commingled that the power and privilege women held in the home was often mirrored in public. Some literature details how women used the production of food to gain respect, and in turn used that respect to dominate the children and influence the men in their lives. Religious women prayed to the gods and spirits for power and influence.
In Ghana, for example, the queen mothers of Asante were part of a dual-gender system of leadership alongside tribal chiefs. Together, the queen mothers and chiefs represented the center of authority for towns and villages in Asante culture, with the queen mother of Asante and the king of Asante serving as the final authority ruling over the Asante people.
Many women were involved in trade. Yoruba women, for example, were the central figures in long-distance trade. They amassed enormous wealth and held prominent titles. A successful Yoruba woman might hold the chieftaincy title of iyalode, which meant she had great privilege and power.
In northern Kenya, pastoralist women were given responsibility over the management of small cattle and processing of basic goods including meat, milk, and skins. These women had significant control over how these goods were traded and distributed.
Post-colonial Africa
In the 20th century, women lost their influence and power when patriarchy and colonialism changed gender relations. The role of female chiefs decreased as male chiefs negotiated with European colonial administrations in the oversight of taxes and governance. In Nigeria, Nigerian men and European firms dominated the distribution of rubber, cocoa, groundnuts (peanuts), and palm oil as the economy became more and more dependent on cash crops for export. This pushed women into the background, where they were forced into the informal economy. The customary land-tenure systems that once provided women with access to land were replaced by land commercialization, which favored those with access to wealth earned from the sale of cash crops. Moreover, the European-style education system in post-colonial Africa favored boys over girls.
In northern Kenya, women lost the positions, authority, and respect that they had attained through their pastoralist responsibilities, as a new colonial government radically altered the social structure of their communities, pushing these women to the periphery of political and economic decision-making.
In Ghana, a substantial paradigm shift emerged as a result of colonialism: a divide between tribal chiefs and queen mothers was created, and the influence of queen mothers was substantially reduced. Because of continued resistance to these changes, however, queen mothers remained steadfast in their commitment to their communities and, after the rise of the global women's movement, later regained prominence in their roles.
Gender inequalities in the 21st century
Many of the problems introduced through colonialism have contributed to the systemic inequalities present today on the continent. Analysts and scholars contend that the global movements created to improve the livelihoods of women in the West, and of those living in urban areas, have not benefitted women in Sub-Saharan Africa. To close the gender gap in Africa, the issues African women face must become part of global discussions. According to published reports, Sub-Saharan Africa is among the world's most gender-unequal regions. The United Nations Development Programme (UNDP) reports that "perceptions, attitudes, and historic gender roles" prevent women from accessing health care and education and contribute to disproportionate levels of family responsibility, job segregation, and sexual violence. Women in Africa experience the greatest levels of discriminatory practices. In addition to facing structural barriers regarding educational and economic inequalities, women in Africa face major obstacles in becoming political representatives. Without adequate women's representation, it is hard for many present-day inequalities to be rectified.
Educational inequalities
The inequality between boys and girls typically starts in primary school and widens throughout the educational process. Over the past decade, Africa registered the highest relative increase in total primary-school enrollment of any region. Girls, however, were enrolled at lower rates. In 2000, Sub-Saharan Africa reported that 23 million girls were not enrolled in primary school, an increase of 3 million from a decade earlier, when 20 million were not enrolled. By 2023, the educational gender gap had narrowed significantly, with 66% of girls completing primary education compared with 61% of boys, a sign of progress in this regard.
Policy reforms in countries such as Benin, Botswana, the Gambia, Guinea, Lesotho, Mauritania, and Namibia produced notable improvements in enrollment for girls. In Benin, for example, the gender gap has been combatted through several initiatives, such as media campaigns emphasizing to parents the necessity of enrolling girls in primary school, as well as reforms like making upper secondary school education free for girls.
Two of the biggest challenges facing young girls' educational pursuits in Africa are child marriage and human trafficking. Countries including Mauritania have combatted child marriage through public campaigns against the practice. Furthermore, the implementation of measures such as bus transportation for young girls in rural areas has helped to combat human trafficking.
The female completion rate for primary education in many countries remains staggeringly low. To address this issue, countries including Guinea have made girls' education a national priority through grassroots efforts. Programs encouraging mothers to advocate for their daughters' education, as well as initiatives intended to increase the quality of girls' education, have been critical in addressing this gap.
Economic inequalities
Among the biggest challenges facing the continent is economic inequality, with women facing massive hurdles in being able to participate in areas such as employment and entrepreneurship. In Africa, women are still disproportionately employed in informal, unstable jobs with few possibilities for education or training.
It is believed that discriminatory social institutions cost Africa roughly 7.5% of its gross domestic product (GDP) in 2019. Analysts believe that women's inability to accumulate wealth has allowed gender inequality to persist on the continent. According to the World Bank, 37% of women in Sub-Saharan Africa have a bank account, compared with 48% of men. The percentage is even lower for women in North Africa, where two-thirds of the population remains unbanked.
Traditional gender roles have contributed to the economic inequality in the region: only 25% of women head their households, compared with 70% of men. Such realities maintain unequal allocations of unpaid care work, which has a negative influence on women's labor-force participation. Women in Africa perform four times as much unpaid care and domestic work as men, which exceeds the global average. Men continue to dominate traditional working sectors, owing to social norms that regard men as rightful owners. Women's ownership of land is, in turn, stifled: women own only 12% of agricultural land despite accounting for over half of Africa's agricultural workforce.
For women to escape poverty, analysts believe they need access to development policies that place more emphasis on their contributions to the economy. Women make up a significant share of the population, but their labor participation is not fully accounted for because much of it takes the form of informal agricultural work. Similarly, work inside the home is not counted because it is not considered an economic activity. At the macro level, gender inequality is also costly: the UNDP reports that countries in Sub-Saharan Africa lose approximately $95 billion annually because women are not integrated into the national economy. When impoverished women are unable to contribute socially and economically, growth stagnates.
Some women farmers do have access to financial resources, leading analysts to conclude that financial empowerment would increase female participation in community decision-making and combat social marginalization, improving families' overall well-being. Case studies show that women who manage household finances are less likely to have children die from malnutrition.
According to the UNDP, one of the most significant changes needed is a commitment from financial institutions to offer products that meet the needs of women, which would give more women access to financial resources. By creating specific loan programs for crops traditionally grown by female farmers, such as groundnuts or sunflowers, financial institutions would encourage female leadership in farmers' cooperatives and support markets where women sell their harvests. Furthermore, the World Bank Group is working to facilitate financial-capability training for women and the development of their business skills.
At current rates of financial inclusion, it would take the world more than 200 years to achieve gender parity globally, but analysts believe that if governments, international actors, and the financial industry devise and sustain more gender-focused policies, the gender gap would close more quickly.
Political underrepresentation
The legislatures of African nations have seen the highest increases in gender parity in the world, due in large part to the implementation of quotas and reserved seats, which have contributed to significant gains in the proportion of women in national legislatures. Yet from December 2021 to June 2022, only 17 women were elected or appointed to parliamentary, ministerial, or electoral offices in West Africa out of the 134 positions available, and growth in women's participation in political bodies across the continent has been slow and marginal.
According to data from the Inter-Parliamentary Union (IPU), which tracks how many women are elected to parliaments all over the world, Sub-Saharan Africa currently ranks third out of the six major regional groupings, with a gender-parity percentage of 26%, behind the Americas and Europe and ahead of Asia and the Pacific and the North Africa-Mideast region. Regional figures on gender parity have been lifted by countries like Senegal, which passed a gender parity bill in 2010 requiring political parties to guarantee that women make up at least 50 percent of their candidates, and by countries like South Africa and Rwanda, which have achieved the 30% UN benchmark of women in parliament, with Rwanda having constitutional provisions reserving seats for women and South Africa's ruling party voluntarily allocating 50% of its parliamentary seats to women. Most other countries in the region, including Nigeria and Benin, have been unable to pass gender-equality legislation in their legislatures. In Benin, women remain significantly underrepresented in elected and appointed positions because no laws or government initiatives guarantee women's political representation. Women's representation in countries with quota-like policies is 10 percentage points higher than in countries without them. With scholars recognizing the role of female political representation in shaping young girls' career aspirations and educational attainment, greater women's representation has the potential to alleviate other inequalities present on the continent in the long term.
Despite the significant underrepresentation of women, there are signs of positive change. Djibouti, which had no women in parliament in 2000, now has women comprising 26.2% of its parliamentary bodies. Furthermore, regarding ministerial positions, there were significant increases in women ministers of defense, finance, and foreign affairs in 2019 compared with 2017, with Rwanda and South Africa leading the way in that regard. According to UN Women, structural barriers such as discriminatory laws and practices, as well as capacity gaps including a lack of resources and education, are the greatest obstacles to women's participation in the political arena; addressing the deficits in legal frameworks in many of these countries, along with educational and economic inequalities, will be necessary to achieve gender equality in the region.
Gender equality
Chronemics

Chronemics is an anthropological, philosophical, and linguistic subdiscipline that describes how time is perceived, coded, and communicated across a given culture. It is one of several subcategories to emerge from the study of nonverbal communication. According to the Encyclopedia of Special Education, "Chronemics includes time orientation, understanding and organisation, the use of and reaction to time pressures, the innate and learned awareness of time, by physically wearing or not wearing a watch, arriving, starting, and ending late or on time." A person's perception of, and the values they place on, time play a considerable role in their communication process. The use of time can affect lifestyles, personal relationships, and work life. Across cultures, people usually have different time perceptions, and this can result in conflicts between individuals. Time perceptions include punctuality, interactions, and willingness to wait.
Definition
Chronemics is the study of the use of time in nonverbal communication, though it carries implications for verbal communication as well. Time perceptions include punctuality, willingness to wait, and interactions. The use of time can affect lifestyles, daily agendas, speed of speech, movements, and how long people are willing to listen.
Fernando Poyatos, Professor Emeritus at the University of New Brunswick, coined the term "chronemics" in 1972. Thomas J. Bruneau (1940-2012), Professor Emeritus at Radford University who taught at the University of Guam in his early career and whose scholarship focused on silence, empathy, and intercultural communication, identified the parameters of this field of study in the late 1970s. Bruneau defined chronemics and specified the functions of time in human interactions as follows:
Time can be used as an indicator of status. For example, in most companies the boss can interrupt progress to hold an impromptu meeting in the middle of the work day, yet the average worker would have to make an appointment to see the boss.
The way in which different cultures perceive time can influence communication as well.
Monochronic time
A monochronic time system means that things are done one at a time and time is segmented into small precise units. Under this system, time is scheduled, arranged, and managed.
The United States considers itself a monochronic society. This perception came about during the Industrial Revolution. Many Americans think of time as a precious resource not to be wasted or taken lightly. As communication scholar Edward T. Hall wrote regarding the American viewpoint of time in the business world, "the schedule is sacred." Hall says that for monochronic cultures, such as the American culture, "time is tangible" and viewed as a commodity where "time is money" or "time is wasted." John Ivers, a professor of cultural paradigms, agrees with Edward Hall, stating, "In the market sense, monochronic people consume time." The result of this perspective is that monochronic cultures place a paramount value on schedules, tasks, and "getting the job done."
Monochronic time orientation is very prominent in Northern European cultures, German-speaking countries, and the Scandinavian countries. For example, a businessperson from the United States who has a meeting scheduled may grow frustrated while waiting an hour for their partner to arrive; this is an example of a monochronic-time-oriented individual clashing with a polychronic-time-oriented individual. Interestingly, even though America is seen as one of the most monochronic countries, it "has subcultures that may lean more to one side or the other of the monochronic-polychronic divide" within the states themselves. This can be seen by comparing the southern states with the northern ones. John Ivers points this out by comparing waiters in northern and southern restaurants. The waiters from the north are "to the point": they will "engage in little" and there is usually "no small talk." They are trying to be as efficient as possible, while those in the south will work towards "establishing a nice, friendly, micro-relationship" with the customer. They are still considerate of time, but it is not the most important goal in the south.
The culture of African Americans might also be seen as polychronic. (See CP Time.)
Polychronic time
A polychronic time system means several things can be done at once. In polychronic time systems, a wider view of time is exhibited, and time is perceived in large fluid sections.
Examples of polychronic cultures include Latin American, African, Arab, South Asian, Mediterranean, and Native American cultures. These cultures' view of time can be connected to "natural rhythms, the earth, and the seasons"; natural events can occur spontaneously and sporadically, just as polychronic-time-oriented people and cultures operate. One scenario involves Inuit working in a factory in Alaska, where superiors blow a whistle to signal break times; the Inuit workers disliked that method because they traditionally determined their activities by the sea tides, when they occurred and how long they lasted. In polychronic cultures, "time spent with others" is considered a "task" and of importance to one's daily regimen.
Polychronic cultures are much less focused on the preciseness of accounting for time and more on tradition and relationships rather than on tasks. Polychronic societies have no problem being late for an appointment if they are deeply focused on some work or in a meeting that ran past schedule, because the concept of time is fluid and can easily expand or contract as need be. As a result, polychronic cultures have a much less formal perception of time. They are not ruled by precise calendars and schedules.
Measuring polychronicity
Allen C. Bluedorn, Carol Felker Kaufman, and Paul M. Lane concluded that "developing an understanding of the monochronic/polychronic continuum will not only result in a better self-management but will also allow more rewarding job performances and relationships with people from different cultures and traditions." Researchers have found that a person's polychronicity plays an important role in their productivity and individual well-being. Researchers have developed the following questionnaires to measure polychronicity:
Inventory of Polychronic Values (IPV), developed by Bluedorn et al., which is a 10-item scale designed to assess "the extent to which people in a culture prefer to be engaged in two or more tasks or events simultaneously and believe their preference is the best way to do things."
Polychronic Attitude Index (PAI), developed by Kaufman-Scarborough & Lindquist in 1991, which is a 4-item scale measuring individual preference for polychronicity, in the following statements:
"I do not like to juggle several activities at the same time".
"People should not try to do many things at once".
"When I sit down at my desk, I work on one project at a time".
"I am comfortable doing several things at the same time".
Predictable patterns between cultures with differing time systems
Cross-cultural perspectives on time
Conflicting attitudes between monochronic and polychronic perceptions of time can interfere with cross-cultural relations, and as a result challenges can occur even within an otherwise assimilated culture. One example in the United States is the Hawaiian culture, which employs two time systems: Haole time and Hawaiian time.
According to Ashley Fulmer and Brandon Crosby, "as intercultural interactions increasingly become the norm rather than the exception, the ability of individuals, groups, and organizations to manage time effectively in cross-cultural settings is critical to the success of these interactions".
Time orientations
The way an individual perceives time and the role time plays in their lives is a learned perspective. As discussed by Alexander Gonzalez and Phillip Zimbardo, "every child learns a time perspective that is appropriate to the values and needs of his society" (Guerrero, DeVito & Hecht, 1999, p. 227).
There are four basic psychological time orientations:
Past
Time-line
Present
Future
Each orientation affects the structure, content, and urgency of communication (Burgoon, 1989). Past-oriented individuals have a hard time developing the notion of elapsed time and often confuse present and past happenings as all the same. People with a time-line orientation are often detail-oriented and think of everything in linear terms; they also often have difficulty comprehending multiple events at the same time. Individuals with a present orientation are mostly characterized as pleasure seekers who live for the moment and have very low risk aversion. Individuals who operate with a future orientation are often thought of as highly goal-oriented and focused on the broad picture.
The use of time as a communicative channel can be a powerful, yet subtle, force in face-to-face interactions. Some of the more recognizable types of interaction that use time are:
Regulating interaction: This is shown to aid in the orderly transition of conversational turn-taking. When the speaker is opening the floor for a response, they will pause. However, when no response is desired, the speaker will talk at a faster pace with minimal pause (Capella, 1985).
Expressing intimacy: As relationships become more intimate, certain changes are made to accommodate the new relationship status. Some of the changes that are made include lengthening the time spent on mutual gazes, increasing the amount of time doing tasks for or with the other person and planning for the future by making plans to spend more time together (Patterson, 1990).
Affect management: The onset of powerful emotions can cause a stronger affect, ranging from joy to sorrow or even to embarrassment. Some of the behaviors associated with negative affects include decreased time of gaze and awkwardly long pauses during conversations. When this happens, it is common for individuals to try to decrease any negative affects and subsequently strengthen positive affects (Edelman & Iwawaki, 1987).
Evoking emotion: Time can be used to evoke emotions in an interpersonal relationship by communicating the value of the relationship. For example, when someone with whom you have a close relationship is late, you may not take it personally, especially if that is characteristic of them. However, when meeting with a total stranger, disrespect for the value of your time may be taken personally and could even cause you to display negative emotions if and when they do arrive for the meeting.
Facilitating service and task goals: Professional settings can sometimes give rise to interpersonal relations which are quite different from other "normal" interactions. For example, the societal norms that dictate minimal touch between strangers are clearly altered if one member of the dyad is a doctor, and the environment is that of a hospital examination room.
Time orientation and consumers
Time orientation has also revealed insights into how people react to advertising. Martin, Gnoth and Strong (2009) found that future-oriented consumers react most favorably to ads that feature a product to be released in the distant future and that highlight primary product attributes. In contrast, present-oriented consumers prefer near-future ads that highlight secondary product attributes. Consumer attitudes were mediated by the perceived usefulness of the attribute information.
Culture and diplomacy
Cultural roots
Just as monochronic and polychronic cultures have different time perspectives, understanding the time orientation of a culture is critical to successfully handling diplomatic situations. Americans consider themselves future-oriented. Hall indicates that for Americans "tomorrow is more important" and that they "are oriented almost entirely toward the future" (Cohen, 2004, p. 35). This future-focused orientation contributes to at least some of the concerns that Americans have with "addressing immediate issues and moving on to new challenges" (Cohen, 2004, p. 35).
On the other hand, many polychronic cultures have a past-orientation toward time.
These time perspectives are the seeds for communication clashes in diplomatic situations. Trade negotiators have observed that American negotiators are generally more anxious for agreement because "they are always in a hurry" and basically "problem solving oriented." In other words, they place a high value on resolving an issue quickly, calling to mind the American catchphrase "some solution is better than no solution" (Cohen, 2004, p. 114). Similar observations have been made of Japanese-American relations. Noting the difference in time perceptions between the two countries, former ambassador to Tokyo Mike Mansfield commented, "We're too fast, they're too slow" (Cohen, 2004, p. 118).
Influence on global affairs
Different perceptions of time across cultures can influence global communication. When writing about time perspective, Gonzalez and Zimbardo comment that "There is no more powerful, pervasive influence on how individuals think and cultures interact than our different perspectives on time—the way we learn how we mentally partition time into past, present and future."
Depending upon where an individual is from, their perception of time might be that "the clock rules the day" or that "we'll get there when we get there."
Improving prospects for success in the global community requires understanding cultural differences, traditions and communication styles.
The monochronic-oriented approach to negotiations is direct, linear, and rooted in the characteristics that illustrate low-context tendencies. A low-context culture approaches diplomacy in a lawyerly, dispassionate fashion, with a clear idea of acceptable outcomes and a plan for reaching them; draft arguments would be prepared elaborating positions. A monochronic culture, more concerned with time, deadlines, and schedules, tends to grow impatient and want to rush to "close the deal."
More polychronic-oriented cultures come to diplomatic situations with no particular importance placed on time. Chronemics is one of the channels of nonverbal communication that a high-context, polychronic negotiator prefers over verbal communication. The polychronic approach to negotiations emphasizes building trust between participants, forming coalitions, and finding consensus. High-context, polychronic negotiators might be charged with emotion toward a subject, thereby obscuring an otherwise obvious solution.
Control of time in power relationships
Time has a definite relationship to power. Though power most often refers to the ability to influence people, power is also related to dominance and status.
For example, in the workplace, those in leadership or management positions treat time and, by virtue of their position, have their time treated differently from those in lower-status positions. Anderson and Bowman have identified three specific examples of how chronemics and power converge in the workplace: waiting time, talk time, and work time.
Waiting time
Researchers Insel and Lindgren write that the act of making an individual of a lower stature wait is a sign of dominance. They note that one who "is in the position to cause another to wait has power over him. To be kept waiting is to imply that one's time is less valuable than that of the one who imposes the wait."
Talk time
There is a direct correlation between the power of an individual in an organization and conversation. This includes the length of conversation, turn-taking, and who initiates and ends a conversation. Extensive research indicates that those with more power in an organization will speak more often and for a greater length of time. Meetings between superiors and subordinates provide an opportunity to illustrate this concept: a superior, regardless of whether or not they are running the actual meeting, leads discussions, asks questions, and is able to speak for longer periods of time without interruption. Likewise, research shows that turn-taking is also influenced by power. Social psychologist Nancy Henley notes that "Subordinates are expected to yield to superiors and there is a cultural expectation that a subordinate will not interrupt a superior". The length of a response follows the same pattern: while the superior can speak for as long as they want, the responses of the subordinate are shorter in length. Albert Mehrabian noted that deviation from this pattern led to negative perceptions of the subordinate by the superior. Beginning and ending a communication interaction in the workplace is also controlled by the higher-status individual in an organization, who dictates the time and duration of the conversation.
Work time
The time of high-status individuals is perceived as valuable, and they control their own time. A subordinate with less power, on the other hand, has their time controlled by a higher-status individual and is in less control of their time, making them likely to report their time to a higher authority. Such practices are more associated with non-supervisory roles and with blue-collar rather than white-collar professions. In contrast, as power and status in an organization increase, the flexibility of the work schedule also increases: while administrative professionals might keep a 9-to-5 work schedule, their superiors may keep less structured hours. This does not mean that the superior works less; they may work longer, but the structure of their work environment is not strictly dictated by the traditional workday. Instead, as Koehler and associates note, "individuals who spend more time, especially spare time, to meetings, to committees, and to developing contacts, are more likely to be influential decision makers".
A specific example of the way power is expressed through work time is scheduling. As Yakura and others have noted in research shared by Ballard and Seibold, "scheduling reflects the extent to which the sequencing and duration of plans, activities, and events are formalized" (Ballard and Seibold, p. 6). Higher-status individuals have very precise and formal schedules, indicating that their stature requires specific blocks of time for specific meetings, projects, and appointments. Lower-status individuals, however, may have less formalized schedules. Finally, the schedule and appointment calendar of the higher-status individual takes precedence in determining where and when a specific event or appointment takes place, and how important it is.
Associated theories
Expectancy violations theory
Developed by Judee Burgoon, expectancy violations theory (EVT) sees communication as the exchange of information that is high in relational content and can be used to violate the expectations of another person, a violation that will be perceived either positively or negatively depending on the liking between the two people.
When our expectations are violated, we will respond in specific ways. If an act is unexpected and is assigned favorable interpretation, and it is evaluated positively, it will produce more favorable outcomes than an expected act with the same interpretation and evaluation.
Interpersonal adaptation theory
The interpersonal adaptation theory (IAT), founded by Judee Burgoon, states that adaptation in interaction is responsive to the needs, expectations, and desires of communicators and affects how communicators position themselves in relation to one another and adapt to one another's communication. For example, they may match each other's behavior, synchronize the timing of behavior, or behave in dissimilar ways. It is also important to note that individuals bring to interactions certain requirements that reflect basic human needs, expectations about behavior based on social norms, and desires for interaction based on goals and personal preferences (Burgoon, Stern & Dillman, 1995).
See also
African time
Paul Virilio
Philosophy of space and time
Johannes Fabian
References
Adler, R. B., Rosenfeld, L. B., & Towne, N. (1995). Interplay (6th ed.). Fort Worth: Harcourt Brace College.
Ballard, D. I., & Seibold, D. R. (2004). Communication-related organizational structures and work group temporal differences: The effects of coordination method, technology type, and feedback cycle on members' construals and enactments of time. Communication Monographs, 71(1), 1–27.
Buller D.B., & Burgoon, J.K. (1996). Interpersonal deception theory. Communication Theory, 6, 203–242.
Buller, D.B., Burgoon, J.K., & Woodall, W.G. (1996). Nonverbal communications: The unspoken dialogue (2nd ed.). New York: McGraw-Hill.
Burgoon, J.K., Stern, L.A., & Dillman, L. (1995). Interpersonal adaptation: Dyadic interaction patterns. Massachusetts: Cambridge University Press.
Capella, J. N. (1985). Controlling the floor in conversation. In A. Siegman and S. Feldstein (Eds.), Multichannel integrations of nonverbal behavior, (pp. 69–103). Hillsdale, NJ: Erlbaum
Cohen, R. (2004). Negotiating across cultures: International communication in an interdependent world (rev. ed.). Washington, DC: United States Institute of Peace.
Edelman, R. J., & Iwawaki, S. (1987). Self-reported expression and the consequences of embarrassment in the United Kingdom and Japan. Psychologia, 30, 205–216.
Griffin, E. (2000). A first look at communication theory (4th ed). Boston, MA: McGraw Hill.
Gonzalez, A., & Zimbardo, P. (1985). Time in perspective. Psychology Today Magazine, 20–26.
Hall, E.T. & Hall, M. R. (1990). Understanding cultural differences: Germans, French, and Americans. Boston, MA: Intercultural Press.
Knapp, M. L., & Hall, J. A. (1992). Nonverbal communication in human interaction (3rd ed.). New York: Holt Rinehart and Winston, Inc.
Knapp, M. L. & Miller, G.R. (1985). Handbook of Interpersonal Communication. Beverly Hills: Sage Publications.
Koester, J., & Lustig, M.W. (2003). Intercultural competence (4th ed.). New York: Pearson Education, Inc.
Patterson, M. L. (1990). Functions of non-verbal behavior in social interaction. In H. Giles & W. P. Robinson (Eds.), Handbook of Language and Social Psychology. Chichester, UK: Wiley.
West, R., & Turner, L. H. (2000). Introducing communication theory: Analysis and application. Mountain View, CA: Mayfield.
Wood, J. T. (1997). Communication theories in action: An introduction. Belmont, CA: Wadsworth.
Ivers, J. J. (2017). For Deep Thinkers Only. John J. Ivers
Further reading
Bluedorn, A.C. (2002). The human organization of time: Temporal realities and experience. Stanford, CA: Stanford University Press.
Hugg, A. (2002, February 4). Universal language. Retrieved May 10, 2007.
Osborne, H. (2006, January/February). In other words… actions can speak as clearly as words. Retrieved May 12, 2007, from http://www.healthliteracy.com/article.asp?PageID=3763
Wessel, R. (2003, January 9). Is there time to slow down? Retrieved May 10, 2007, from http://www.csmonitor.com/2003/0109/p13s01-sten.html
A sonnet on the topic by the editor of the 11th edition (1910) of the Encyclopædia Britannica.
Nonverbal communication
Social constructionism
Time
Paleogene

The Paleogene Period (also spelled Palaeogene or Palæogene) is a geologic period and system that spans 43 million years, from the end of the Cretaceous Period 66 Ma (million years ago) to the beginning of the Neogene Period 23.03 Ma. It is the first period of the Cenozoic Era and is divided into the Paleocene, Eocene, and Oligocene epochs. The earlier term Tertiary Period was used to define the time now covered by the Paleogene Period and the subsequent Neogene Period; despite no longer being recognized as a formal stratigraphic term, "Tertiary" still sometimes remains in informal use. Paleogene is often abbreviated "Pg", although the United States Geological Survey uses a different abbreviation for the Paleogene on the Survey's geologic maps.
Much of the world's modern vertebrate diversity originated in a rapid surge of diversification in the early Paleogene, as survivors of the Cretaceous–Paleogene extinction event took advantage of empty ecological niches left behind by the extinction of the dinosaurs, pterosaurs, marine reptiles, and primitive fish groups. Mammals continued to diversify from relatively small, simple forms into a highly diverse group ranging from small-bodied forms to very large ones, radiating into multiple orders and colonizing the air and marine ecosystems by the Eocene. Birds, the only surviving group of dinosaurs, quickly diversified from the very few neognath and paleognath clades that survived the extinction event, also radiating into multiple orders, colonizing different ecosystems and achieving an extreme level of morphological diversity. Percomorph fish, the most diverse group of vertebrates today, first appeared near the end of the Cretaceous but saw a very rapid radiation into their modern order and family-level diversity during the Paleogene, achieving a diverse array of morphologies.
The Paleogene is marked by considerable changes in climate from the Paleocene–Eocene Thermal Maximum, through global cooling during the Eocene to the first appearance of permanent ice sheets in the Antarctic at the beginning of the Oligocene.
Geology
Stratigraphy
The Paleogene is divided into three series/epochs: the Paleocene, Eocene, and Oligocene. These stratigraphic units can be defined globally or regionally. For global stratigraphic correlation, the International Commission on Stratigraphy (ICS) ratifies global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage.
Paleocene
The Paleocene is the first series/epoch of the Paleogene and lasted from 66.0 Ma to 56.0 Ma. It is divided into three stages: the Danian 66.0 - 61.6 Ma; Selandian 61.6 - 59.2 Ma; and, Thanetian 59.2 - 56.0 Ma. The GSSP for the base of the Cenozoic, Paleogene and Paleocene is at Oued Djerfane, west of El Kef, Tunisia. It is marked by an iridium anomaly produced by an asteroid impact, and is associated with the Cretaceous–Paleogene extinction event. The boundary is defined as the rusty colored base of a 50 cm thick clay, which would have been deposited over only a few days. Similar layers are seen in marine and continental deposits worldwide. These layers include the iridium anomaly, microtektites, nickel-rich spinel crystals and shocked quartz, all indicators of a major extraterrestrial impact. The remains of the crater are found at Chicxulub on the Yucatan Peninsula in Mexico. The extinction of the non-avian dinosaurs, ammonites and dramatic changes in marine plankton and many other groups of organisms, are also used for correlation purposes.
Eocene
The Eocene is the second series/epoch of the Paleogene, and lasted from 56.0 Ma to 33.9 Ma. It is divided into four stages: the Ypresian, 56.0 Ma to 47.8 Ma; Lutetian, 47.8 Ma to 41.2 Ma; Bartonian, 41.2 Ma to 37.71 Ma; and Priabonian, 37.71 Ma to 33.9 Ma. The GSSP for the base of the Eocene is at Dababiya, near Luxor, Egypt, and is marked by the start of a significant variation in global carbon isotope ratios, produced by a major period of global warming. The change in climate has been attributed to a rapid release of methane from frozen clathrates in seafloor sediments at the beginning of the Paleocene-Eocene Thermal Maximum (PETM).
Oligocene
The Oligocene is the third and youngest series/epoch of the Paleogene, and lasted from 33.9 Ma to 23.03 Ma. It is divided into two stages: the Rupelian, 33.9 Ma to 27.82 Ma; and the Chattian, 27.82 Ma to 23.03 Ma. The GSSP for the base of the Oligocene is at Massignano, near Ancona, Italy. The extinction of the hantkeninid planktonic foraminifera is the key marker for the Eocene-Oligocene boundary, which was a time of climatic cooling that led to widespread changes in fauna and flora.
Palaeogeography
The final stages of the breakup of Pangaea occurred during the Paleogene as Atlantic Ocean rifting and seafloor spreading extended northwards, separating the North American and Eurasian plates, and Australia and South America rifted from Antarctica, opening the Southern Ocean. Africa and India collided with Eurasia, forming the Alpine-Himalayan mountain chains, and the western margin of the Pacific Plate changed from a divergent to a convergent plate boundary.
Alpine - Himalayan Orogeny
Alpine Orogeny
The Alpine Orogeny developed in response to the collision between the African and Eurasian plates during the closing of the Neotethys Ocean and the opening of the Central Atlantic Ocean. The result was a series of arcuate mountain ranges, from the Tell-Rif-Betic cordillera in the western Mediterranean through the Alps, Carpathians, Apennines, Dinarides and Hellenides to the Taurides in the east.
From the Late Cretaceous into the early Paleocene, Africa began to converge with Eurasia. The irregular outlines of the continental margins, including the Adriatic promontory (Adria) that extended north from the African Plate, led to the development of several short subduction zones, rather than one long system. In the western Mediterranean, the European Plate was subducted southwards beneath the African Plate, whilst in the eastern Mediterranean, Africa was subducted beneath Eurasia along a northward dipping subduction zone. Convergence between the Iberian and European plates led to the Pyrenean Orogeny and, as Adria pushed northwards the Alps and Carpathian orogens began to develop.
The collision of Adria with Eurasia in the early Paleocene was followed by a c. 10 million year pause in the convergence of Africa and Eurasia, connected with the onset of the opening of the North Atlantic Ocean as Greenland rifted from the Eurasian Plate in the Paleocene. Convergence rates between Africa and Eurasia increased again in the early Eocene, and the remaining oceanic basins between Adria and Europe closed.
Between about 40 and 30 Ma, subduction began along the western Mediterranean arc of the Tell, Rif, Betic and Apennine mountain chains. The rate of convergence was less than the subduction rate of the dense lithosphere of the western Mediterranean and roll-back of the subducting slab led to the arcuate structure of these mountain ranges.
In the eastern Mediterranean, c. 35 Ma, the Anatolide-Tauride platform (the northern part of Adria) began to enter the trench, leading to the development of the Dinarides, Hellenides and Tauride mountain chains as the passive-margin sediments of Adria were scraped off onto the Eurasian crust during subduction.
Zagros Mountains
The Zagros mountain belt stretches for c. 2000 km from the eastern border of Iraq to the Makran coast in southern Iran. It formed as a result of the convergence and collision of the Arabian and Eurasian plates as the Neotethys Ocean closed, and is composed of sediments scraped from the descending Arabian Plate.
From the Late Cretaceous, a volcanic arc developed on the Eurasian margin as the Neotethys crust was subducted beneath it. A separate intra-oceanic subduction zone in the Neotethys resulted in the obduction of ocean crust onto the Arabian margin in the Late Cretaceous to Paleocene, with break-off of the subducted oceanic plate close to the Arabian margin occurring during the Eocene. Continental collision began during the Eocene c. 35 Ma and continued into the Oligocene to c. 26 Ma.
Himalayan Orogeny
The Indian continent rifted from Madagascar at c. 83 Ma and drifted rapidly (c. 18 cm/yr in the Paleocene) northwards towards the southern margin of Eurasia. A rapid decrease in velocity to c. 5 cm/yr in the early Eocene records the collision of the Tethyan (Tibetan) Himalayas, the leading edge of Greater India, with the Lhasa Terrane of Tibet (southern Eurasian margin), along the Indus-Yarling-Zangbo suture zone. To the south of this zone, the Himalaya are composed of metasedimentary rocks scraped off the now subducted Indian continental crust and mantle lithosphere as the collision progressed.
Palaeomagnetic data place the present day Indian continent further south at the time of collision and decrease in plate velocity, indicating the presence of a large region to the north of India that has now been subducted beneath the Eurasian Plate or incorporated into the mountain belt. This region, known as Greater India, formed by extension along the northern margin of India during the opening of the Neotethys. The Tethyan Himalaya block lay along its northern edge, with the Neotethys Ocean lying between it and southern Eurasia.
Debate about the amount of deformation seen in the geological record in the India–Eurasia collision zone versus the size of Greater India, the timing and nature of the collision relative to the decrease in plate velocity, and explanations for the unusually high velocity of the Indian Plate have led to several models for Greater India:
1) A Late Cretaceous to early Paleocene subduction zone may have lain between India and Eurasia in the Neotethys, dividing the region into two plates; subduction was followed by collision of India with Eurasia in the middle Eocene. In this model Greater India would have been less than 900 km wide.
2) Greater India may have formed a single plate, several thousand kilometres wide, with the Tethyan Himalaya microcontinent separated from the Indian continent by an oceanic basin. The microcontinent collided with southern Eurasia c. 58 Ma (late Paleocene), whilst the velocity of the plate did not decrease until c. 50 Ma, when subduction rates dropped as young oceanic crust entered the subduction zone.
3) A third model assigns older dates to parts of Greater India, which changes its paleogeographic position relative to Eurasia and creates a Greater India formed of extended continental crust 2000-3000 km wide.
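As a rough, illustrative conversion (the round numbers here are assumptions for the example, not figures from the sources above), plate velocity translates directly into drift distance, which is why these models must account for hundreds to thousands of kilometres of continental material now shortened or subducted:

$$1~\mathrm{cm/yr} = 10~\mathrm{km/Myr}, \qquad d \approx v\,t$$

At the Paleocene rate of c. 18 cm/yr, India covered roughly 180 km per million years, i.e. on the order of 1800 km over 10 Myr, while even at the post-collision rate of c. 5 cm/yr, convergence adds a further c. 50 km/Myr.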
South East Asia
The Alpine-Himalayan Orogenic Belt in Southeast Asia extends from the Himalayas in India through Myanmar (the West Burma block), Sumatra, and Java to West Sulawesi.
During the Late Cretaceous to Paleogene, the northward movement of the Indian Plate led to the highly oblique subduction of the Neotethys along the edge of the West Burma block and the development of a major north-south transform fault along the margin of Southeast Asia to the south. Between c. 60 and 50 Ma, the leading northeastern edge of Greater India collided with the West Burma block resulting in deformation and metamorphism. During the middle Eocene, north-dipping subduction resumed along the southern edge of Southeast Asia, from west Sumatra to West Sulawesi, as the Australian Plate drifted slowly northwards.
Collision between India and the West Burma block was complete by the late Oligocene. As the India-Eurasia collision continued, movement of material away from the collision zone was accommodated along, and extended, the already existing major strike slip systems of the region.
Atlantic Ocean
During the Paleocene, seafloor spreading along the Mid-Atlantic Ridge propagated from the Central Atlantic northwards between North America and Greenland in the Labrador Sea (c. 62 Ma) and Baffin Bay (c. 57 Ma), and, by the early Eocene (c. 54 Ma), into the northeastern Atlantic between Greenland and Eurasia. Extension between North America and Eurasia, also in the early Eocene, led to the opening of the Eurasian Basin across the Arctic, which was linked to the Baffin Bay Ridge and Mid-Atlantic Ridge to the south via major strike slip faults.
From the Eocene and into the early Oligocene, Greenland acted as an independent plate moving northwards and rotating anticlockwise. This led to compression across the Canadian Arctic Archipelago, Svalbard and northern Greenland resulting in the Eureka Orogeny. From c. 47 Ma, the eastern margin of Greenland was cut by the Reykjanes Ridge (the northeastern branch of the Mid-Atlantic Ridge) propagating northwards and splitting off the Jan Mayen microcontinent.
After c. 33 Ma seafloor spreading in Labrador Sea and Baffin Bay gradually ceased and seafloor spreading focused along the northeast Atlantic. By the late Oligocene, the plate boundary between North America and Eurasia was established along the Mid-Atlantic Ridge, with Greenland attached to the North American plate again, and the Jan Mayen microcontinent part of the Eurasian Plate, where its remains now lie to the east and possibly beneath the southeast of Iceland.
North Atlantic Large Igneous Province
The North Atlantic Igneous Province stretches across the Greenland and northwest European margins and is associated with the proto-Icelandic mantle plume, which rose beneath the Greenland lithosphere at c. 65 Ma. There were two main phases of volcanic activity, with peaks at c. 60 Ma and c. 55 Ma. Magmatism in the British and Northwest Atlantic volcanic provinces occurred mainly in the early Paleocene, the latter associated with an increased spreading rate in the Labrador Sea, whilst northeast Atlantic magmatism occurred mainly during the early Eocene and is associated with a change in the spreading direction in the Labrador Sea and the northward drift of Greenland. The locations of the magmatism coincide with intersections between the propagating rifts and large-scale, pre-existing lithospheric structures, which acted as channels to the surface for the magma.
The arrival of the proto-Iceland plume has been considered the driving mechanism for rifting in the North Atlantic. However, the observations that rifting and initial seafloor spreading occurred prior to the arrival of the plume, that large-scale magmatism occurred at a distance from the rifting, and that rifting propagated towards, rather than away from, the plume have led to the suggestion that the plume and its associated magmatism may have been a result, rather than a cause, of the plate tectonic forces that drove the propagation of rifting from the Central to the North Atlantic.
Americas
North America
Mountain building continued along the North American Cordillera in response to subduction of the Farallon Plate beneath the North American Plate. Along the central section of the North American margin, crustal shortening of the Cretaceous to Paleocene Sevier Orogen lessened and deformation moved eastward. The decreasing dip of the subducting Farallon Plate led to a flat-slab segment that increased friction between it and the base of the North American Plate. The resulting Laramide Orogeny, which began the development of the Rocky Mountains, was a broad zone of thick-skinned deformation, with faults extending to mid-crustal depths and the uplift of basement rocks that lay to the east of the Sevier belt, more than 700 km from the trench. With the Laramide uplift, the Western Interior Seaway was divided and then retreated.
During the mid to late Eocene (50–35 Ma), plate convergence rates decreased and the dip of the Farallon slab began to steepen. Uplift ceased and the region was largely levelled by erosion. By the Oligocene, convergence gave way to extension, rifting and widespread volcanism across the Laramide belt.
South America
Ocean–continent convergence, accommodated by east-dipping subduction of the Farallon Plate beneath the western edge of South America, continued from the Mesozoic.
Over the Paleogene, changes in plate motion and episodes of regional slab shallowing and steepening resulted in variations in the magnitude of crustal shortening and the amount of magmatism along the length of the Andes. In the Northern Andes, an oceanic plateau with a volcanic arc was accreted during the latest Cretaceous and Paleocene, whilst the Central Andes were dominated by the subduction of oceanic crust and the Southern Andes were affected by the subduction of the Farallon–East Antarctic ocean ridge.
Caribbean
The Caribbean Plate is largely composed of oceanic crust of the Caribbean Large Igneous Province, which formed during the Late Cretaceous. During the Late Cretaceous to Paleocene, subduction of Atlantic crust was established along its northern margin, whilst to the southwest an island arc collided with the northern Andes, forming an east-dipping subduction zone where Caribbean lithosphere was subducted beneath the South American margin.
During the Eocene (c. 45 Ma), subduction of the Farallon Plate along the Central American subduction zone was (re)established. Subduction along the northern section of the Caribbean volcanic arc ceased as the Bahamas carbonate platform collided with Cuba and was replaced by strike-slip movements as a transform fault, extending from the Mid-Atlantic Ridge, connected with the northern boundary of the Caribbean Plate. Subduction now focused along the southern Caribbean arc (Lesser Antilles).
By the Oligocene, the intra-oceanic Central American volcanic arc had begun to collide with northwestern South America.
Pacific Ocean
At the beginning of the Paleogene, the Pacific Ocean consisted of the Pacific, Farallon, Kula and Izanagi plates. The central Pacific Plate grew by seafloor spreading as the other three plates were subducted and broken up. In the southern Pacific, seafloor spreading continued from the Late Cretaceous across the Pacific–Antarctic, Pacific–Farallon and Farallon–Antarctic mid-ocean ridges.
The Izanagi-Pacific spreading ridge lay nearly parallel to the East Asian subduction zone and between 60–50 Ma the spreading ridge began to be subducted. By c. 50 Ma, the Pacific Plate was no longer surrounded by spreading ridges, but had a subduction zone along its western edge. This changed the forces acting on the Pacific Plate and led to a major reorganisation of plate motions across the entire Pacific region. The resulting changes in stress between the Pacific and Philippine Sea plates initiated subduction along the Izu-Bonin-Mariana and Tonga-Kermadec arcs.
Subduction of the Farallon Plate beneath the American plates continued from the Late Cretaceous. The Kula–Farallon spreading ridge lay to its north until the Eocene (c. 55 Ma), when the northern section of the plate split, forming the Vancouver/Juan de Fuca Plate. In the Oligocene (c. 28 Ma), the first segment of the Pacific–Farallon spreading ridge entered the North American subduction zone near Baja California, leading to major strike-slip movements and the formation of the San Andreas Fault. At the Paleogene–Neogene boundary, spreading ceased between the Pacific and Farallon plates and the Farallon Plate split again, forming the present-day Nazca and Cocos plates.
The Kula Plate lay between the Pacific Plate and North America. To the north and northwest, it was being subducted beneath the Aleutian Trench. Spreading between the Kula Plate and the Pacific and Farallon plates ceased c. 40 Ma, and the Kula Plate became part of the Pacific Plate.
Hawaii hotspot
The Hawaiian-Emperor seamount chain formed above the Hawaiian hotspot. Originally thought to be stationary within the mantle, the hotspot is now considered to have drifted south during the Paleocene to early Eocene, as the Pacific Plate moved north. At c. 47 Ma, movement of the hotspot ceased and the Pacific Plate motion changed from northward to northwestward in response to the onset of subduction along its western margin. This resulted in a 60-degree bend in the seamount chain. Other seamount chains related to hotspots in the South Pacific show a similar change in orientation at this time.
Antarctica
Slow seafloor spreading continued between Australia and East Antarctica. Shallow-water channels probably developed south of Tasmania, opening the Tasmanian Passage in the Eocene, with deep-ocean routes opening from the mid Oligocene. Rifting between the Antarctic Peninsula and the southern tip of South America formed the Drake Passage and opened the Southern Ocean during this time, completing the breakup of Gondwana. The opening of these passages and the creation of the Southern Ocean established the Antarctic Circumpolar Current. Glaciers began to build across the Antarctic continent, which now lay isolated in the south polar region and surrounded by cold ocean waters. These changes contributed to the fall in global temperatures and the beginning of icehouse conditions.
Red Sea and East Africa
Extensional stresses from the subduction zone along the northern Neotethys resulted in rifting between Africa and Arabia, forming the Gulf of Aden in the late Eocene. To the west, in the early Oligocene, flood basalts erupted across Ethiopia, northeast Sudan and southwest Yemen as the Afar mantle plume began to impact the base of the African lithosphere. Rifting across the southern Red Sea began in the mid Oligocene, and across the central and northern Red Sea regions in the late Oligocene and early Miocene.
Climate
Climatic conditions varied considerably during the Paleogene. After the disruption of the Chicxulub impact settled, a period of cool and dry conditions continued from the Late Cretaceous. At the Paleocene-Eocene boundary global temperatures rose rapidly with the onset of the Paleocene-Eocene Thermal Maximum (PETM). By the middle Eocene, temperatures began to drop again and by the late Eocene (c. 37 Ma) had decreased sufficiently for ice sheets to form in Antarctica. The global climate entered icehouse conditions at the Eocene-Oligocene boundary and the present day Late Cenozoic ice age began.
The Paleogene began with the brief but intense "impact winter" caused by the Chicxulub impact, which was followed by an abrupt period of warming. After temperatures stabilised, the steady cooling and drying of the Late Cretaceous-Early Paleogene Cool Interval that had spanned the last two ages of the Late Cretaceous continued, with only the brief interruption of the Latest Danian Event (c. 62.2 Ma) when global temperatures rose. There is no evidence for ice sheets at the poles during the Paleocene.
The relatively cool conditions were brought to an end by the Thanetian Thermal Event and the beginning of the PETM. This was one of the warmest times of the Phanerozoic Eon, during which global mean surface temperatures increased to 31.6 °C. According to a study published in 2018, from about 56 to 48 Ma annual air temperatures over land and at mid-latitudes averaged about 23–29 °C (± 4.7 °C). For comparison, this was 10 to 15 °C higher than the current annual mean temperatures in these areas.
This rapid rise in global temperatures and intense greenhouse conditions were due to a sudden increase in levels of atmospheric carbon dioxide (CO2) and other greenhouse gases. An accompanying rise in humidity is reflected in an increase in kaolinite in sediments, which forms by chemical weathering in hot, humid conditions. Tropical and subtropical forests flourished and extended into polar regions. Water vapour (a greenhouse gas) associated with these forests also contributed to the greenhouse conditions.
The initial rise in global temperatures was related to the intrusion of magmatic sills into organic-rich sediments during volcanic activity in the North Atlantic Igneous Province, between about 56 and 54 Ma, which rapidly released large amounts of greenhouse gases into the atmosphere. This warming led to melting of frozen methane hydrates on continental slopes, adding further greenhouse gases. It also reduced the rate of burial of organic matter, as higher temperatures accelerated the rate of bacterial decomposition, which released CO2 back into the oceans.
The (relatively) sudden climatic changes associated with the PETM resulted in the extinction of some groups of fauna and flora and the rise of others. For example, with the warming of the Arctic Ocean, around 70% of deep sea foraminifera species went extinct, whilst on land many modern mammals, including primates, appeared. Fluctuating sea levels meant that, during lowstands, a land bridge formed across the Bering Strait between North America and Eurasia, allowing the movement of land animals between the two continents.
The PETM was followed by the less severe Eocene Thermal Maximum 2 (c. 53.69 Ma) and the Eocene Thermal Maximum 3 (c. 53 Ma). The early Eocene warm conditions were brought to an end by the Azolla event. This change of climate, at about 48.5 Ma, is believed to have been caused by a proliferation of aquatic ferns of the genus Azolla, resulting in the sequestering of large amounts of CO2 from the atmosphere by the plants. From this time until about 34 Ma, there was a slow cooling trend known as the Middle-Late Eocene Cooling. As temperatures dropped at high latitudes, the presence of cold-water diatoms suggests sea ice was able to form in winter in the Arctic Ocean, and by the late Eocene (c. 37 Ma) there is evidence of glaciation in Antarctica.
Changes in deep ocean currents, as Australia and South America moved away from Antarctica opening the Drake and Tasmanian passages, were responsible for the drop in global temperatures. The warm waters of the South Atlantic, Indian and South Pacific oceans extended southward into the opening Southern Ocean and became part of the cold circumpolar current. Dense polar waters sank into the deep oceans and moved northwards, reducing global ocean temperatures. This cooling may have occurred over less than 100,000 years and resulted in a widespread extinction in marine life. By the Eocene-Oligocene boundary, sediments deposited in the ocean from glaciers indicate the presence of an ice sheet in western Antarctica that extended to the ocean.
The development of the circumpolar current led to changes in the oceans, which in turn reduced atmospheric CO2 further. Increasing upwellings of cold water stimulated the productivity of phytoplankton, and the cooler waters reduced the rate of bacterial decay of organic matter and promoted the growth of methane hydrates in marine sediments. This created a positive feedback cycle in which global cooling reduced atmospheric CO2, and this reduction in CO2 led to changes that further lowered global temperatures. The decrease in evaporation from the cooler oceans also reduced moisture in the atmosphere and increased aridity. By the early Oligocene, the North American and Eurasian tropical and subtropical forests had been replaced by dry woodlands and widespread grasslands.
The Early Oligocene Glacial Maximum lasted for about 200,000 years, and the global mean surface temperature continued to decrease gradually during the Rupelian. A drop in global sea levels during the mid Oligocene indicates major growth of the Antarctic glacial ice sheet. In the Late Oligocene, global temperatures began to warm slightly, though they continued to be significantly lower than during the previous epochs of the Paleogene and polar ice remained.
Flora and fauna
Tropical taxa diversified faster than those at higher latitudes after the Cretaceous–Paleogene extinction event, resulting in the development of a significant latitudinal diversity gradient. After the extinction event, which saw the demise of the non-avian dinosaurs, mammals began a rapid diversification, evolving from a few small and generalized forms into most of the modern varieties seen today. Some of these mammals evolved into large forms that dominated the land, while others became capable of living in marine, specialized terrestrial, and airborne environments. Those that adapted to the oceans became modern cetaceans, while those that adapted to trees became primates, the group to which humans belong. Birds, extant dinosaurs that were already well established by the end of the Cretaceous, also experienced adaptive radiation as they took over the skies left empty by the now extinct pterosaurs. Some flightless birds, such as penguins, ratites, and terror birds, also filled niches left by the hesperornithes and other extinct dinosaurs.
Pronounced cooling in the Oligocene resulted in a massive floral shift, and many extant modern plants arose during this time. Grasses and herbs, such as Artemisia, began to proliferate at the expense of tropical plants, which began to decline. Conifer forests developed in mountainous areas. This cooling trend continued, with major fluctuations, until the end of the Pleistocene epoch. Evidence for this floral shift is found in the palynological record.
External links
Paleogene Microfossils: 180+ images of Foraminifera
Paleogene (chronostratigraphy scale)
Geological periods
Social inequality

Social inequality occurs when resources within a society are distributed unevenly, often as a result of inequitable allocation practices that create distinct unequal patterns based on socially defined categories of people. Differences in access to social goods within society are influenced by factors like power, religion, kinship, prestige, race, ethnicity, gender, age, sexual orientation, intelligence and class. Social inequality usually implies the lack of equality of outcome, but may alternatively be conceptualized as a lack of equality in access to opportunity.
Social inequality is linked to economic inequality, usually described on the basis of the unequal distribution of income or wealth. Although the disciplines of economics and sociology generally use different theoretical approaches to examine and explain economic inequality, both fields are actively involved in researching this inequality. However, social and natural resources other than purely economic resources are also unevenly distributed in most societies and may contribute to social status. Norms of allocation can also affect the distribution of rights and privileges, social power, access to public goods such as education or the judicial system, adequate housing, transportation, credit and financial services such as banking and other social goods and services.
Overview
Social inequality is shaped by a range of structural factors, such as geographical location or citizenship status, and is often underpinned by cultural discourses and identities defining, for example, whether the poor are 'deserving' or 'undeserving'. Understanding the process of social inequality highlights how a society values its people and how biases manifest within it. In simple societies, those that have few social roles and statuses occupied by their members, social inequality may be very low. In tribal societies, for example, a tribal head or chieftain may hold some privileges, use some tools, or wear marks of office to which others do not have access, but the daily life of the chieftain is very much like the daily life of any other tribal member. Anthropologists identify such highly egalitarian cultures as "kinship-oriented", because they appear to value social harmony more than wealth or status. These cultures are contrasted with materially oriented cultures in which status and wealth are prized and competition and conflict are common. Kinship-oriented cultures may actively work to prevent social hierarchies from developing, believing they could lead to conflict and instability. As social complexity increases, so can social inequality, with a widening gap between the poorest and the wealthiest members of society.
Societies can be classified as egalitarian, ranked, or stratified. Egalitarian societies are communities advocating for social equality through equal opportunities and rights, hence no discrimination. People with special skills are not viewed as superior to the rest, and leaders do not have power, only influence. The norms and beliefs an egalitarian society holds favour sharing and equal participation; in short, there are no classes. Ranked societies are mostly agricultural communities grouped hierarchically under a chief, who is viewed as having elevated status in the society. People are clustered by status and prestige rather than by access to power and resources: the chief is the most influential person, followed by his family and relatives, while those more distantly related to him are ranked lower. A stratified society is one divided horizontally into upper, middle, and lower classes, with classification according to wealth, power, and prestige. The upper class are mostly the leaders and the most influential people in the society. It is possible for a person to move from one stratum to another, and social status can be inherited from one generation to the next.
There are five systems or types of social inequality: wealth inequality, treatment and responsibility inequality, political inequality, life inequality, and membership inequality. Political inequality is the difference brought about by unequal ability to access governmental resources, and therefore a lack of civic equality. In treatment and responsibility inequality, some people benefit more and can access more privileges than others; this occurs in systems where dominance is present. In workplaces, for example, some are given more responsibilities, and hence better compensation and more benefits, than the rest, even when equally qualified. Membership inequality relates to the number of members in a family, nation or faith. Life inequality is brought about by disparities in opportunities that, where present, improve a person's quality of life. Finally, income and wealth inequality is the disparity in what individuals earn day to day, contributing to their total monthly or yearly revenue.
Status in society is of two types which are ascribed characteristics and achieved characteristics. Ascribed characteristics are those present at birth or assigned by others and over which an individual has little or no control. Examples include sex, skin colour, eye shape, place of birth, sexuality, gender identity, parentage and social status of parents. Achieved characteristics are those which a person earns or chooses; examples include level of education, marital status, leadership status and other measures of merit. In most societies, an individual's social status is a combination of ascribed and achieved factors. In some societies, however, only ascribed statuses are considered in seeking and determining one's social status and there exists little to no social mobility and, therefore, few paths to more social equality. This type of social inequality is generally referred to as caste inequality.
One's social location in a society's overall structure of social stratification affects and is affected by almost every aspect of social life and one's life chances. The single best predictor of an individual's future social status is the social status into which they were born. Theoretical approaches to explaining social inequality concentrate on questions of how such social differentiations arise, what types of resources are being allocated, what roles human cooperation and conflict play in allocating resources, and how differing types and forms of inequality affect the overall functioning of a society.
Which variables are considered important in explaining inequality, and how those variables combine to produce inequities and their social consequences in a given society, can change across time and place. In addition to interest in comparing and contrasting social inequality at local and national levels, in the wake of today's globalizing processes the most interesting question becomes: what does inequality look like on a worldwide scale, and what does such global inequality bode for the future? In effect, globalization reduces the distances of time and space, producing a global interaction of cultures, societies and social roles that can increase global inequities.
Inequality and ideology
Philosophical questions about social ethics and the desirability or inevitability of inequality in human societies have given rise to a spate of ideologies to address such questions. These ideologies can be classified on the basis of whether they justify or legitimize inequality, casting it as desirable or inevitable, or whether they cast equality as desirable and inequality as a feature of society to be reduced or eliminated. One end of this ideological continuum can be called "individualist", the other "collectivist". In Western societies, there is a long history associated with the idea of individual ownership of property and economic liberalism, the ideological belief in organizing the economy on individualist lines such that the greatest possible number of economic decisions are made by individuals and not by collective institutions or organizations. Laissez-faire, free-market ideologies, including classical liberalism, neoliberalism and right-libertarianism, are formed around the idea that social inequality is a "natural" feature of societies and is therefore inevitable and, in some philosophies, even desirable.
In this view, inequality provides for differing goods and services to be offered on the open market, spurs ambition, and provides incentive for industriousness and innovation. At the other end of the continuum, collectivists place little to no trust in "free market" economic systems, noting widespread lack of access among specific groups or classes of individuals to the costs of entry to the market. Widespread inequalities often lead to conflict and dissatisfaction with the current social order. Such ideologies include Fabianism and socialism. Inequality, in these ideologies, must be reduced, eliminated, or kept under tight control through collective regulation. Furthermore, some views hold that inequality is natural but should not affect certain fundamental human needs, human rights, and the initial chances given to individuals (e.g. by education), and that it has grown out of proportion due to various problematic systemic structures.
The economic grievance thesis argues that economic factors, such as deindustrialisation, economic liberalisation, and deregulation, are causing the formation of a 'left-behind' precariat with low job security, high inequality, and wage stagnation, who then support populism. Some theories focus only on the effect of economic crises or of inequality. Another economic objection concerns globalization: in addition to criticism of the widening inequality caused by elites, the widening inequality among the general public attributed to the influx of immigrants and other factors associated with globalization is also a target of populist criticism.
The evidence of increasing economic disparity and volatility of family incomes is clear, particularly in the United States, as shown by the work of Thomas Piketty and others. Commentators such as Martin Wolf emphasize the importance of economics. They warn that such trends increase resentment and make people susceptible to populist rhetoric. Evidence for this is mixed. At the macro level, political scientists report that xenophobia, anti-immigrant feeling, and resentment towards out-groups tend to be higher during difficult economic times. Economic crises have been associated with gains by far-right political parties. However, there is little evidence at the micro- or individual level to link individual economic grievances and populist support. Populist politicians tend to put pressure on central bank independence.
Though the above discussion is limited to specific Western ideologies, similar thinking can be found, historically, in differing societies throughout the world. While, in general, Eastern societies tend toward collectivism, elements of individualism and free-market organization can be found in certain regions and historical eras. Classic Chinese society in the Han and Tang dynasties, for example, while highly organized into tight hierarchies of horizontal inequality with a distinct power elite, also had many elements of free trade among its various regions and subcultures.
Social mobility is the movement along social strata or hierarchies by individuals, ethnic groups, or nations. It involves changes in literacy, income distribution, education and health status. The movement can be vertical or horizontal: vertical mobility is the upward or downward movement along social strata that occurs due to a change of job or marriage, while horizontal mobility is movement between levels that are equally ranked. Intra-generational mobility is a change in social status within a single generation (a single lifetime); for example, a person moves from a junior staff position in an organization to senior management. Absolute mobility is where a person gains a better social status than their parents, which can be due to improved security, economic development, and a better education system. Relative mobility is where some individuals attain a higher social rank than their parents.
Today, some hold the belief that social inequality often creates political conflict, and there is growing consensus that political structures determine the solution for such conflicts. Under this line of thinking, adequately designed social and political institutions are seen as ensuring the smooth functioning of economic markets such that there is political stability, which improves the long-term outlook, enhances labour and capital productivity and so stimulates economic growth. With higher economic growth, net gains are positive across all levels and political reforms are easier to sustain. This may explain why, over time, fiscal performance is better in more egalitarian societies, stimulating greater accumulation of capital and higher growth.
Inequality and social class
Socioeconomic status (SES) is a combined total measure of a person's work experience and of an individual's or family's economic and social position in relation to others, based on income, education, and occupation. It is often used as a synonym for social class, a set of hierarchical social categories that indicate an individual's or household's relative position in a stratified matrix of social relationships. Social class is delineated by a number of variables, some of which change across time and place. For Karl Marx, there exist two major social classes with significant inequality between them, delineated by their relationship to the means of production in a given society: the owners of the means of production and those who sell their labour to the owners of the means of production. In capitalistic societies, the two classifications represent the opposing social interests of their members, capital gain for the capitalists and good wages for the labourers, creating social conflict.
Max Weber uses social classes to examine wealth and status. For him, social class is strongly associated with prestige and privileges. It may explain social reproduction, the tendency of social classes to remain stable across generations, maintaining most of their inequalities as well. Such inequalities include differences in income, wealth, access to education, pension levels, social status, and socioeconomic safety nets. In general, social class can be defined as a large category of similarly ranked people located in a hierarchy and distinguished from other large categories in the hierarchy by such traits as occupation, education, income, and wealth.
In modern Western societies, inequalities are often broadly classified into three major divisions of social class: upper class, middle class, and lower class. Each of these classes can be further subdivided into smaller classes (e.g. "upper middle"). Members of different classes have varied access to financial resources, which affects their placement in the social stratification system.
Class, race, and gender are forms of stratification that bring inequality and determine differences in the allocation of societal rewards. Occupation is the primary determinant of a person's class, since it affects their lifestyle, opportunities, culture, and the kind of people they associate with. The lower class are the poor in society, with limited opportunities. The working class are those in blue-collar jobs, whose fortunes substantially affect the economic level of a nation. The middle classes often rely on dual incomes, including wives' employment, and depend on bank credit and medical coverage. The upper middle class are professionals whose position rests on economic resources and supportive institutions. The upper class, finally, are usually wealthy families whose economic power derives from accumulated family wealth rather than earned income.
Social stratification is the hierarchical arrangement of society in terms of social class, wealth and political influence. A society can be politically stratified based on authority and power, economically stratified based on income level and wealth, and occupationally stratified based on one's occupation. Some roles, for example doctors, engineers and lawyers, are highly ranked, and thus their holders give orders while the rest receive orders. There are three systems of social stratification: the caste system, the estate system, and the class system. Caste is usually ascribed at birth, whereby children receive the same stratification as their parents. The stratification may be superior or inferior and thus influences the occupation and the social roles assigned to a person. In an estate system, people were required to work land in return for services such as military protection, with communities ranked according to the nobility of their lords. The class system is based on income inequality and socio-political status: people can move between classes when they increase their level of income or gain authority, and are expected to maximize their innate abilities and possessions. Social stratification is universal, social, ancient, diverse in its forms, and consequential.
The quantitative variables most often used as indicators of social inequality are income and wealth. In a given society, the distribution of individual or household accumulation of wealth tells us more about variation in well-being than does income alone. Gross Domestic Product (GDP), especially per capita GDP, is sometimes used to describe economic inequality at the international or global level. A better measure at that level, however, is the Gini coefficient, a measure of statistical dispersion used to represent the distribution of a specific quantity, such as income or wealth, at a global level, among a nation's residents, or even within a metropolitan area. Other widely used measures of economic inequality are the percentage of people living on under US$1.25 or $2 a day and the ratio of the share of national income held by the wealthiest 10% of the population to that held by the poorest 40%, known as the Palma ratio.
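In one standard formulation (a general statistical definition, not specific to this article's sources), the Gini coefficient is half of the mean absolute difference between all pairs of values, normalized by the mean:

G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^2 \bar{x}}

where x_i is the income (or wealth) of person i, n is the population size and \bar{x} is the mean. G equals 0 when every value is identical and approaches 1 as a single person's share of the total approaches 100%.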
Meritocracy and social inequality
Many societies worldwide claim to be meritocracies; that is, they claim to distribute resources exclusively on the basis of merit. The term "meritocracy" was coined by Michael Young in his 1958 dystopian essay "The Rise of the Meritocracy" to demonstrate the social dysfunctions that he anticipated arising in societies where the elites believe that they are successful entirely on the basis of merit, so the adoption of the term into English without negative connotations is ironic. Young was concerned that the Tripartite System of education being practised in the United Kingdom at the time he wrote the essay considered merit to be "intelligence-plus-effort, its possessors ... identified at an early age and selected for appropriate intensive education", and that the "obsession with quantification, test-scoring, and qualifications" it supported would create an educated middle-class elite at the expense of the education of the working class, inevitably resulting in injustice and eventually revolution.
Although merit matters to some degree in many societies, research shows that the distribution of resources in societies often follows hierarchical social categorizations of persons to a degree too significant to warrant calling these societies "meritocratic", since even exceptional intelligence, talent, or other forms of merit may not be compensatory for the social disadvantages people face. In many cases, social inequality is linked to racial and ethnic inequality, gender inequality, and other forms of social status, and these forms can be related to corruption.
The most common metric for comparing social inequality across nations is the Gini coefficient, which measures the concentration of wealth and income in a nation from 0 (wealth and income evenly distributed) to 1 (one person has all wealth and income). Two nations may have identical Gini coefficients but dramatically different economic output and/or quality of life, so the Gini coefficient must be contextualized for meaningful comparisons to be made.
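To make the computation concrete, here is a minimal sketch in Python; the function name, the closed-form formula for sorted data, and the sample income figures are illustrative assumptions, not taken from this article's sources:

    import numpy as np

    def gini(incomes):
        # Sort incomes ascending; the closed form below assumes sorted data.
        x = np.sort(np.asarray(incomes, dtype=float))
        n = x.size
        ranks = np.arange(1, n + 1)
        # Closed form for sorted data: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
        return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

    # Two hypothetical five-person economies with the same total income:
    print(gini([20, 20, 20, 20, 20]))  # 0.0   (perfect equality)
    print(gini([1, 2, 5, 12, 80]))     # ~0.67 (highly concentrated)

Note that the coefficient is scale-invariant: doubling every income leaves G unchanged, which is one reason two nations with very different output can share the same coefficient.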
Patterns of inequality in the economic world
There are a number of socially defined characteristics of individuals that contribute to social status and, therefore, to equality or inequality within a society. When researchers use quantitative variables such as income or wealth to measure inequality, an examination of the data reveals patterns indicating that these other social variables contribute to income or wealth as intervening variables. Significant inequalities in income and wealth are found when specific socially defined categories of people are compared. Among the most pervasive of these variables are sex/gender, race, and ethnicity. This is not to say, in societies wherein merit is considered to be the primary factor determining one's place or rank in the social order, that merit has no effect on variations in income or wealth. It is to say that these other socially defined characteristics can, and often do, intervene in the valuation of merit.
Gender inequality
Gender inequality occurs where women and men are treated differently on the basis of masculinity and femininity, through the division of labour, the assignment of roles and responsibilities, and the allocation of social rewards. Sex- and gender-based prejudice and discrimination, called sexism, are major contributing factors to social inequality. Most societies, even agricultural ones, have some sexual division of labour, and gender-based division of labour tends to increase during industrialization. The emphasis on gender inequality is born out of the deepening division in the roles assigned to men and women, particularly in the economic, political and educational spheres. Women are underrepresented in political activities and decision-making processes in most states in both the Global North and the Global South.
Gender discrimination, especially concerning the lower social status of women, has been a topic of serious discussion not only within academic and activist communities but also by governmental agencies and international bodies such as the United Nations. These discussions seek to identify and remedy widespread, institutionalized barriers to access for women in their societies. By making use of gender analysis, researchers try to understand the social expectations, responsibilities, resources and priorities of women and men within a specific context, examining the social, economic and environmental factors which influence their roles and decision-making capacity. Artificial separations between the social and economic roles of men and women have negatively affected the lives of many women and girls, and can also limit social and economic development.
The cultural ideals about women's work can also affect men whose outward gender expression is considered "feminine" within a given society. Transgender and gender-variant persons may express their gender through their appearance, the statements they make, or official documents they present. In this context, gender normativity, which is understood as the social expectations placed on us when we present particular bodies, produces widespread cultural/institutional devaluations of trans identities, homosexuality and femininity. Trans persons, in particular, have been defined as socially unproductive and disruptive.
A variety of global issues, like HIV/AIDS, illiteracy, and poverty, are often treated as "women's issues" since women are disproportionately affected by them, with long-term consequences for women's health. In many countries, women and girls face problems such as lack of access to education, which limits their opportunities to succeed and their ability to contribute to society economically. As of 2007, around 20 percent of women were below the $1.25/day international poverty line and 40 percent below the $2/day mark. More than one-quarter of females under the age of 25 were below the $1.25/day international poverty line and about half lived on less than $2/day.
Women's participation in work has been increasing globally, but women still face wage discrepancies relative to what men earn. This is true globally, including in the agricultural and rural sectors of developed as well as developing countries. Structural impediments to women's ability to pursue and advance in their chosen professions often result in a phenomenon known as the glass ceiling: unseen, and often unacknowledged, barriers that prevent minorities and women from rising to the upper rungs of the corporate ladder, regardless of their qualifications or achievements. This effect is seen in the corporate and bureaucratic environments of many countries, lowering women's chances to excel. It prevents women from succeeding and making maximum use of their potential, at a cost to women and to society's development. Ensuring that women's rights are protected and endorsed can promote a sense of belonging that motivates women to contribute to their society; once able to work, women should be entitled to the same job positions and work environments as men. Until such safeguards are in place, women and girls will continue to experience not only barriers to work and opportunities to earn, but also discrimination, oppression, and gender-based violence.
Women and persons whose gender identity does not conform to patriarchal beliefs about sex (only male and female) continue to face violence on global domestic, interpersonal, institutional and administrative scales. While first-wave Liberal Feminist initiatives raised awareness about the lack of fundamental rights and freedoms that women have access to, second-wave feminism (see also Radical Feminism) highlighted the structural forces that underlie gender-based violence. Masculinities are generally constructed so as to subordinate femininities and other expressions of gender that are not heterosexual, assertive and dominant. Gender sociologist and author Raewyn Connell, in her 2009 book Gender, discusses how masculinity is constructed as dangerous, heterosexual, violent and authoritative. These structures of masculinity ultimately contribute to the vast amounts of gendered violence, marginalization and suppression that women, queer, transgender, gender-variant and gender non-conforming persons face.
Some scholars suggest that women's underrepresentation in political systems speaks to the idea that "formal citizenship does not always imply full social membership". Men, male bodies and expressions of masculinity are linked to ideas about work and citizenship. Others point out that patriarchal states tend to scale back and claw back their social policies, to the disadvantage of women. This process ensures that women encounter resistance when seeking meaningful positions of power in institutions, administrations, political systems and communities.
Racial and ethnic inequality
Racial or ethnic inequality is the result of hierarchical social distinctions between racial and ethnic categories within a society, often established based on characteristics such as skin color, other physical characteristics, or an individual's place of origin. Racial inequality occurs due to racism and systemic racism.
Racial inequality can also result in diminished opportunities for members of marginalized groups, which can in turn lead to cycles of poverty and political marginalization. A prime example of this is redlining in Chicago, where red lines were drawn on maps around black neighborhoods and loans were denied to black residents within them, preventing them from leaving run-down public housing. In 1863, two years prior to emancipation, black people in the U.S. owned 0.5 percent of the national wealth, while in 2019 the figure was just over 1.5 percent.
Racial and ethnic categories often become minority categories in a society, and minority members in such a society are often subjected to discriminatory actions resulting from majority policies, including assimilation, exclusion, oppression, expulsion, and extermination. For example, during the run-up to the 2012 federal elections in the United States, legislation in certain "battleground states" that claimed to target voter fraud had the effect of disenfranchising tens of thousands of primarily African-American voters. These types of institutional barriers to full and equal social participation have far-reaching effects within marginalized communities, including reduced economic opportunity and output, reduced educational outcomes and opportunities and reduced levels of overall health.
In the United States, Angela Davis argues that mass incarceration has been a modern tool of the state to impose inequality, repression, and discrimination upon African Americans and Hispanics. The War on Drugs has been a campaign with disparate effects, ensuring the constant incarceration of poor, vulnerable, and marginalized populations in North America. Over a million African Americans are incarcerated in the US, many of whom have been convicted of non-violent drug possession charges. With the states of Colorado and Washington having legalized the possession of marijuana, drug liberalization lobbyists are hopeful that drug issues will be interpreted and dealt with from a healthcare perspective instead of as a matter of criminal law. In Canada, Aboriginal, First Nations, and Indigenous persons represent over a quarter of the federal prison population, even though they represent only 3% of the country's population.
Age inequality
Age discrimination is defined as the unfair treatment of people with regard to promotions, recruitment, resources, or privileges because of their age. It is also known as ageism: the stereotyping of, and discrimination against, individuals or groups based upon their age. It involves a set of beliefs, attitudes, norms, and values used to justify age-based prejudice, discrimination, and subordination. One form of ageism is adultism, which is discrimination against children and people under the legal adult age. An example of an act of adultism might be the policy of a certain establishment, restaurant, or place of business not to allow those under the legal adult age to enter their premises after a certain time, or at all. While some people may benefit from or enjoy these practices, some find them offensive and discriminatory. Discrimination against those under the age of 40, however, is not illegal under the current U.S. Age Discrimination in Employment Act (ADEA).
As implied in the definitions above, treating people differently based upon their age is not necessarily discrimination. Virtually every society has age stratification, and the age structure of a society changes as people live longer and the population grows older. In most cultures, there are different social role expectations for people of different ages to perform: every society manages people's ageing by allocating certain roles to different age groups. Age discrimination primarily occurs when age is used as an unfair criterion for allocating more or fewer resources. Scholars of age inequality have suggested that certain social organizations favor particular age inequalities. For instance, because of their emphasis on training and maintaining productive citizens, modern capitalist societies may dedicate disproportionate resources to training the young and maintaining the middle-aged worker to the detriment of the elderly and the retired (especially those already disadvantaged by income/wealth inequality).
In modern, technologically advanced societies, there is a tendency for both the young and the old to be relatively disadvantaged. More recently, however, in the United States the tendency is for the young to be most disadvantaged. For example, poverty levels in the U.S. have been decreasing among people aged 65 and older since the early 1970s, whereas the number of children under 18 in poverty has steadily risen. Sometimes, the elderly have had the opportunity to build their wealth throughout their lives, while younger people have the disadvantage of having recently entered, or not yet entered, the economic sphere. The larger contributor to this, however, is the increase in the number of people over 65 receiving Social Security and Medicare benefits in the U.S.
When we compare income distribution among youth across the globe, we find that about half (48.5 percent) of the world's young people were confined to the bottom two income brackets as of 2007. This means that, of the three billion people under the age of 24 in the world as of 2007, approximately 1.5 billion were living in situations in which they and their families had access to just nine percent of global income. Moving up the income distribution ladder, children and youth do not fare much better: more than two-thirds of the world's youth have access to less than 20 percent of global wealth, with 86 percent of all young people living on about one-third of world income. For the just over 400 million youth fortunate enough to rank among families or situations at the top of the income distribution, however, opportunities improve greatly, with access to more than 60 percent of global income.
Although this does not exhaust the scope of age discrimination, in modern societies it is often discussed primarily with regard to the work environment. Indeed, non-participation in the labour force and unequal access to rewarding jobs mean that the elderly and the young are often subject to unfair disadvantages because of their age. On the one hand, the elderly are less likely to be involved in the workforce, and old age may or may not put one at a disadvantage in accessing positions of prestige: it may benefit one in such positions, but it may also be a disadvantage because of negative ageist stereotyping of old people. On the other hand, young people are often disadvantaged in accessing prestigious or relatively rewarding jobs because of their recent entry to the work force or because they are still completing their education. Typically, once they enter the labour force or take a part-time job while in school, they start at entry-level positions with low-level wages. Furthermore, because of their lack of prior work experience, they can also often be forced to take marginal jobs, where they can be taken advantage of by their employers.
Inequalities in health
Health inequalities can be defined as differences in health status or in the distribution of health determinants between different population groups.
Health care
Health inequalities are in many cases related to access to health care. In industrialized nations, health inequalities are most prevalent in countries that have not implemented a universal health care system, such as the United States. Because the US health care system is heavily privatized, access to health care is dependent upon one's economic capital; health care is not a right but a commodity that can be purchased through private insurance companies (or that is sometimes provided through an employer). The way health care is organized in the U.S. contributes to health inequalities based on gender, socioeconomic status and race/ethnicity. As Wright and Perry assert, "social status differences in health care are a primary mechanism of health inequalities". In the United States, over 48 million people are without medical care coverage. This means that almost one sixth of the population is without health insurance, mostly people belonging to the lower classes of society.
While universal access to health care may not eliminate health inequalities, it has been shown that it greatly reduces them. In this context, privatization gives individuals the 'power' to purchase their own health care (through private health insurance companies), but this leads to social inequality by only allowing people who have economic resources to access health care. Citizens are seen as consumers who have a 'choice' to buy the best health care they can afford; in alignment with neoliberal ideology, this puts the burden on the individual rather than the government or the community.
In countries that have a universal health care system, health inequalities have been reduced. In Canada, for example, equity in the availability of health services has been improved dramatically through Medicare. People do not have to worry about how they will pay for health care, or rely on emergency rooms for care, since health care is provided for the entire population. However, inequality issues remain. For example, not everyone has the same level of access to services. Inequalities in health are not, however, only related to access to health care: even if everyone had the same level of access, inequalities might still remain, because health status is a product of more than just how much medical care people have available to them. While Medicare has equalized access to health care by removing the need for direct payments at the time of services, which improved the health of low-status people, inequities in health remain prevalent in Canada. This may be due to the state of the current social system, which bears other types of inequalities, such as economic, racial and gender inequality.
A lack of health equity is also evident in the developing world, where the importance of equitable access to healthcare has been cited as crucial to achieving many of the Millennium Development Goals. Health inequalities can vary greatly depending on the country being examined. Inequalities in health have substantial effects that are burdensome on the entire society, and are often associated with socioeconomic status and access to health care. Health inequities can occur when the distribution of public health services is unequal. For example, in Indonesia in 1990, only 12% of government spending on health was for services consumed by the poorest 20% of households, while the wealthiest 20% consumed 29% of the government subsidy in the health sector. Access to health care is heavily influenced by socioeconomic status as well, as wealthier population groups have a higher probability of obtaining care when they need it. A study by Makinen et al. (2000) found that, in the majority of developing countries they examined, there was an upward trend by quintile in health care use among those reporting illness. Wealthier groups are also more likely to be seen by doctors and to receive medicine.
Food
There has been considerable research in recent years regarding a phenomenon known as food deserts, in which low access to fresh, healthy food in a neighborhood leads to poor consumer choices and options regarding diet. It is widely thought that food deserts are significant contributors to the childhood obesity epidemic in the United States and many other countries. This may have significant impacts at the local level as well as in broader contexts, such as in Greece, where the childhood obesity rate has skyrocketed in recent years, largely as a result of rampant poverty and the resultant lack of access to fresh foods.
Global inequality
The economies of the world have developed unevenly, historically, such that entire geographical regions were left mired in poverty and disease while others began to reduce poverty and disease on a wholesale basis. This was represented by a type of North–South divide that existed after World War II between the First World of more developed, industrialized, wealthy countries and Third World countries, primarily as measured by GDP. From around 1980, however, through at least 2011, the GDP gap, while still wide, appeared to be closing and, in some more rapidly developing countries, life expectancies began to rise. However, there are numerous limitations of GDP as an economic indicator of social "well-being."
Looking at the Gini coefficient for world income over time: after World War II, the global Gini coefficient sat at just under .45. From around 1959 to 1966, the global Gini increased sharply, to a peak of around .48 in 1966. After falling and leveling off a couple of times during a period from around 1967 to 1984, the Gini began to climb again in the mid-eighties until reaching a high of around .54 in 2000, then jumped again to around .70 in 2002. Since the late 1980s, the gap between some regions has markedly narrowed—between Asia and the advanced economies of the West, for example—but huge gaps remain globally. Overall equality across humanity, considered as individuals, has improved very little. Within the decade between 2003 and 2013, income inequality grew even in traditionally egalitarian countries like Germany, Sweden and Denmark. With a few exceptions—France, Japan, Spain—the top 10 percent of earners in most advanced economies raced ahead, while the bottom 10 percent fell further behind. By 2013, a tiny elite of multibillionaires, 85 to be exact, had amassed wealth equivalent to all the wealth owned by the poorest half (3.5 billion) of the world's total population of 7 billion. Country of citizenship (an ascribed status characteristic) explains 60% of variability in global income; citizenship and parental income class (both ascribed status characteristics) combined explain more than 80% of income variability.
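Since the figures above lean heavily on the Gini coefficient, a brief sketch of how it can be computed may be useful. This is a minimal illustration in Python; the income vectors are invented for demonstration and are not the historical world-income data discussed above.

import numpy as np

def gini(incomes):
    """Gini coefficient via the mean-absolute-difference definition:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x))."""
    x = np.asarray(incomes, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()  # mean absolute difference
    return mad / (2 * x.mean())

print(gini([10, 10, 10, 10]))  # 0.0  (perfect equality)
print(gini([0, 0, 0, 100]))    # 0.75 (one person holds everything; approaches 1 as n grows)

A coefficient of 0 means everyone has the same income, while values approaching 1 indicate that income is concentrated in very few hands.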
Inequality and economic growth
The concept of economic growth is fundamental in capitalist economies. Productivity must grow as population grows and capital must grow to feed into increased productivity. Investment of capital leads to returns on investment (ROI) and increased capital accumulation. The hypothesis that economic inequality is a necessary precondition for economic growth has been a mainstay of liberal economic theory. Recent research, particularly over the first two decades of the 21st century, has called this basic assumption into question. While growing inequality does have a positive correlation with economic growth under specific sets of conditions, inequality in general is not positively correlated with economic growth and, under some conditions, shows a negative correlation with economic growth.
Milanovic (2011) points out that overall, global inequality between countries is more important to growth of the world economy than inequality within countries. While global economic growth may be a policy priority, recent evidence about regional and national inequalities cannot be dismissed when more local economic growth is a policy objective. The 2008 financial crisis and the following global recession hit countries and shook financial systems all over the world. This led to the implementation of large-scale fiscal expansionary interventions and, as a result, to massive public debt issuance in some countries. Governmental bailouts of the banking system further burdened fiscal balances and raised considerable concern about the fiscal solvency of some countries. Most governments want to keep deficits under control, but rolling back the expansionary measures or cutting spending and raising taxes implies an enormous wealth transfer from taxpayers to the private financial sector. Expansionary fiscal policies shift resources and cause worries about growing inequality within countries. Moreover, recent data confirm an ongoing trend of increasing income inequality since the early nineties. Increasing inequality within countries has been accompanied by a redistribution of economic resources between developed economies and emerging markets. Davtyn et al. (2014) studied the interaction of these fiscal conditions and changes in fiscal and economic policies with income inequality in the UK, Canada, and the US. They find that income inequality has a negative effect on economic growth in the case of the UK but a positive effect in the cases of the US and Canada. Income inequality generally reduces government net lending/borrowing for all the countries. Economic growth, they find, leads to an increase of income inequality in the case of the UK and to a decline of inequality in the cases of the US and Canada. At the same time, economic growth improves government net lending/borrowing in all the countries. Government spending leads to a decline in inequality in the UK but to its increase in the US and Canada.
Following the results of Alesina and Rodrik (1994), Bourguignon (2004), and Birdsall (2005), which show that developing countries with high inequality tend to grow more slowly, Ortiz and Cummins (2011) reach the same conclusion. For 131 countries for which they could estimate the change in Gini index values between 1990 and 2008, they find that those countries that increased levels of inequality experienced slower annual per capita GDP growth over the same time period. Noting a lack of data for national wealth, they build an index using the Forbes list of billionaires by country, normalized by GDP and validated through correlation with a Gini coefficient for wealth and the share of wealth going to the top decile. They find that many countries generating low rates of economic growth are also characterized by a high level of wealth inequality, with wealth concentrated among a class of entrenched elites. They conclude that extreme inequality in the distribution of wealth globally, regionally and nationally, coupled with the negative effects of higher levels of income disparities, should make us question current economic development approaches and examine the need to place equity at the center of the development agenda.
Ostry, et al. (2014) reject the hypothesis that there is a major trade-off between a reduction of income inequality (through income redistribution) and economic growth. If that were the case, they hold, then redistribution that reduces income inequality would on average be bad for growth, taking into account both the direct effect of higher redistribution and the effect of the resulting lower inequality. Their research shows rather the opposite: increasing income inequality always has a significant and, in most cases, negative effect on economic growth while redistribution has an overall pro-growth effect (in one sample) or no growth effect. Their conclusion is that increasing inequality, particularly when inequality is already high, results in low growth, if any, and such growth may be unsustainable over long periods.
Piketty and Saez (2014) note that there are important differences between income and wealth inequality dynamics. First, wealth concentration is always much higher than income concentration. The top 10 percent wealth share typically falls in the 60 to 90 percent range of all wealth, whereas the top 10 percent income share is in the 30 to 50 percent range. The bottom 50 percent wealth share is always less than 5 percent, whereas the bottom 50 percent income share generally falls in the 20 to 30 percent range. The bottom half of the population hardly owns any wealth, but it does earn appreciable income: on average, members of the bottom half of the population, in terms of wealth, own less than one-tenth of the average wealth, while members of the bottom half of the population in income earn about half the average income. The inequality of labor income can be high, but it is usually much less extreme. In sum, the concentration of capital ownership is always extreme, so that the very notion of capital is fairly abstract for large segments—if not the majority—of the population. Piketty (2014) finds that wealth-income ratios today seem to be returning to very high levels in low economic growth countries, similar to what he calls the "classic patrimonial" wealth-based societies of the 19th century, wherein a minority lives off its wealth while the rest of the population works for subsistence living. He surmises that wealth accumulation is high because growth is low.
See also
Civil rights
Digital divide
Educational inequality
Gini coefficient
Global justice
Health equity
Horizontal inequality
List of countries by income inequality
List of countries by distribution of wealth
LGBT social movements
Meritocracy
Social apartheid
Racial discrimination
Social equality
Social exclusion
Social justice
Social mobility
Social stratification
Structural violence
Tax evasion
Triple oppression
References
Further reading
Bourdieu, Pierre. 1996. The State Nobility: Elite Schools in the Field of Power, translated by Lauretta C. Clough. Stanford: Stanford University Press.
Esping-Andersen, Gosta. 1999. "The Three Worlds of Welfare Capitalism." In The Welfare State Reader edited by Christopher Pierson and Francis G. Castles. Polity Press.
Ortiz, Isabel & Matthew Cummins. 2011. Global Inequality: Beyond the Bottom Billion – A Rapid Review of Income Distribution in 141 Countries. United Nations Children's Fund (UNICEF), New York.
Piketty, Thomas (2014). Capital in the Twenty-First Century. Belknap Press.
Sernau, Scott (2013). Social Inequality in a Global Age (4th edition). Thousand Oaks, CA: Sage.
Stiglitz, Joseph. 2012. The Price of Inequality. New York: Norton.
United Nations (UN) Inequality-adjusted Human Development Report (IHDR) 2013. United Nations Development Programme (UNDP).
Weber, Max. 1946. "Power." In Max Weber: Essays in Sociology. Translated and Edited by H.H. Gerth and C. Wright Mills. New York: Oxford University Press.
External links
Inequality watch
"Wealth Gap" – A Guide (January 2014), AP News
New Oxfam report says half of global wealth held by the 1% (2015-01-19). "Oxfam warns of widening inequality gap." The Guardian (Guardian.com/business/2015/jan/19/global-wealth-oxfam-inequality-davos-economic-summit-switzerland)
How Much More (Or Less) Would You Make If We Rolled Back Inequality? (January 2015). "How much more (or less) would families be earning today if inequality had remained flat since 1979?" National Public Radio
OECD – Education GPS: Gender differences in education
Social systems
Distribution of wealth
Hallstatt culture
The Hallstatt culture was the predominant Western and Central European archaeological culture of the Late Bronze Age (Hallstatt A, Hallstatt B) from the 12th to 8th centuries BC and Early Iron Age Europe (Hallstatt C, Hallstatt D) from the 8th to 6th centuries BC, developing out of the Urnfield culture of the 12th century BC (Late Bronze Age) and followed in much of its area by the La Tène culture. It is commonly associated with Proto-Celtic speaking populations.
It is named for its type site, Hallstatt, a lakeside village in the Austrian Salzkammergut southeast of Salzburg, where there was a rich salt mine, and some 1,300 burials are known, many with fine artifacts. Material from Hallstatt has been classified into four periods, designated "Hallstatt A" to "D". Hallstatt A and B are regarded as Late Bronze Age and the terms used for wider areas, such as "Hallstatt culture", or "period", "style" and so on, relate to the Iron Age Hallstatt C and D.
By the 6th century BC, it had expanded to include wide territories, falling into two zones, east and west, between them covering much of western and central Europe down to the Alps, and extending into northern Italy. Parts of Britain and Iberia are included in the ultimate expansion of the culture.
The culture was based on farming, but metal-working was considerably advanced, and by the end of the period long-range trade within the area and with Mediterranean cultures was economically significant. Social distinctions became increasingly important, with emerging elite classes of chieftains and warriors, and perhaps those with other skills. Society is thought to have been organized on a tribal basis, though very little is known about this. Settlement size was generally small, although a few of the largest settlements, like Heuneburg in the south of Germany, were towns rather than villages by modern standards. However, at the end of the period these seem to have been overthrown or abandoned.
Chronology
According to Paul Reinecke's time-scheme from 1902, the end of the Bronze Age and the Early Iron Age were divided into four periods:
Bronze Age Urnfield culture:
HaA (1200–1050 BC)
HaB (1050–800 BC)
Early Iron Age Hallstatt culture:
HaC (800–620 BC)
HaD (620–450 BC)
Paul Reinecke based his chronological divisions on finds from the south of Germany.
Already by 1881 Otto Tischler had made analogies to the Iron Age in the Northern Alps based on finds of brooches from graves in the south of Germany.
Absolute dating
It has proven difficult to use radiocarbon dating for the Early Iron Age due to the so-called "Hallstatt plateau", a phenomenon whereby radiocarbon dates cannot be distinguished between 750 and 400 BC. There are workarounds, however, such as the wiggle matching technique. Dating in this time period has therefore been based mainly on dendrochronology and relative dating.
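The idea behind wiggle matching can be sketched in a few lines of code. Real analyses use a published calibration curve such as IntCal and Bayesian software such as OxCal; the curve and measurements below are invented purely to illustrate the principle of sliding a fixed-spacing series of dates along the curve to find the best fit.

import numpy as np

# Hypothetical calibration curve: calendar year -> expected radiocarbon age (BP)
cal_years = np.arange(-900, -300)                # 900 BC to 301 BC
cal_c14 = 2500 + 30 * np.sin(cal_years / 40.0)   # invented "wiggles"

# Radiocarbon ages measured on tree rings whose spacing (in years) is
# fixed by ring counting, which is what makes the match well-constrained.
ring_offsets = np.array([0, 20, 40, 60])
measured = np.array([2522, 2535, 2510, 2490])
sigma = 25.0                                     # measurement error (14C years)

best_year, best_chi2 = None, np.inf
for start in range(len(cal_years) - ring_offsets[-1]):
    expected = cal_c14[start + ring_offsets]
    chi2 = np.sum(((measured - expected) / sigma) ** 2)
    if chi2 < best_chi2:
        best_year, best_chi2 = cal_years[start], chi2

print(f"Best-fit year of oldest ring: {best_year} (chi2 = {best_chi2:.1f})")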
For the beginning of HaC, wood pieces from the wagon grave of Wehringen (Landkreis Augsburg) provide a solid date of 778 ± 5 BC (Grave Barrow 8).
Despite the lack of an older dendro-date for HaC, the convention remains that the Hallstatt period begins together with the arrival of iron ore processing technology around 800 BC.
Relative dating
HaC is dated according to the presence of Mindelheim-type swords, binocular brooches, harp brooches, and arched brooches.
Based on the quickly changing fashions of brooches, it was possible to divide HaD into three stages (D1-D3). In HaD1 snake brooches are predominant, while in HaD2 drum brooches appear more often, and in HaD3 the double-drum and embellished foot brooches.
The transition to the La Tène period is often connected with the emergence of the first animal-shaped brooches, with Certosa-type and with Marzabotto-type brooches.
Hallstatt type site
The community at Hallstatt was untypical of the wider, mainly agricultural, culture, as its booming economy exploited the salt mines in the area. These had been worked from time to time since the Neolithic period, and in this period were extensively mined with a peak from the 8th to 5th centuries BC. The style and decoration of the grave goods found in the cemetery are very distinctive, and artifacts made in this style are widespread in Europe. In the mine workings themselves, the salt has preserved many organic materials such as textiles, wood and leather, and many abandoned artifacts such as shoes, pieces of cloth, and tools including miner's backpacks, have survived in good condition.
In 1846, Johann Georg Ramsauer (1795–1874) discovered a large prehistoric cemetery near Hallstatt, Austria, which he excavated during the second half of the 19th century. Eventually the excavation would yield 1,045 burials, although no settlement has yet been found. This may be covered by the later village, which has long occupied the whole narrow strip between the steep hillsides and the lake. Some 1,300 burials have been found, including around 2,000 individuals, with women and children but few infants. Nor is there a "princely" burial, as often found near large settlements. Instead, there are a large number of burials varying considerably in the number and richness of the grave goods, but with a high proportion containing goods suggesting a life well above subsistence level. It is now thought that at least most of these were not miners themselves, but from a richer class controlling the mines.
Finds at Hallstatt extend from about 1200 BC until around 500 BC, and are divided by archaeologists into four phases:
Hallstatt A–B (1200–800 BC) are part of the Bronze Age Urnfield culture. In this period, people were cremated and buried in simple graves. In phase B, tumulus (barrow or kurgan) burial becomes common, and cremation predominates. The "Hallstatt period" proper is restricted to HaC and HaD (800–450 BC), corresponding to the early European Iron Age. Hallstatt lies in the area where the western and eastern zones of the Hallstatt culture meet, which is reflected in the finds from there.
Hallstatt C is characterized by the first appearance of iron swords mixed amongst the bronze ones. Inhumation and cremation co-occur. For the final phase, Hallstatt D, daggers, almost to the exclusion of swords, are found in western zone graves ranging from –500 BC. There are also differences in the pottery and brooches. Burials were mostly inhumations. Hallstatt D has been further divided into the sub-phases D1–D3, relating only to the western zone, and mainly based on the form of brooches.
Hallstatt D is succeeded by the La Tène culture.
Major activity at the site appears to have finished about 500 BC, for reasons that are unclear. Many Hallstatt graves were robbed, probably at this time. There was widespread disruption throughout the western Hallstatt zone, and the salt workings had by then become very deep. By then the focus of salt mining had shifted to the nearby Hallein Salt Mine, with graves at Dürrnberg nearby where there are significant finds from the late Hallstatt and early La Tène periods, until the mid-4th century BC, when a major landslide destroyed the mineshafts and ended mining activity.
Much of the material from early excavations was dispersed, and is now found in many collections, especially German and Austrian museums, but the Hallstatt Museum in the town has the largest collection.
Culture and trade
Languages
It is probable that some if not all of the diffusion of Hallstatt culture took place in a Celtic-speaking context. In northern Italy the Golasecca culture developed with continuity from the Canegrate culture. Canegrate represented a completely new cultural dynamic to the area, expressed in pottery and bronzework, making it a typical example of the western Hallstatt culture.
The Lepontic Celtic language inscriptions of the area show the language of the Golasecca culture was clearly Celtic making it probable that the 13th-century BC precursor language of at least the western Hallstatt was also Celtic or a precursor to it. Lepontic inscriptions have also been found in Umbria, in the area which saw the emergence of the Terni culture, which had strong similarities with the Celtic cultures of Hallstatt and La Tène. The Umbrian necropolis of Terni, which dates back to the 10th century BC, was virtually identical in every aspect to the Celtic necropolis of the Golasecca culture.
Older assumptions of the early 20th century of Illyrians having been the bearers of especially the Eastern Hallstatt culture are indefensible and archeologically unsubstantiated.
Trade
Trade with Greece is attested by finds of Attic black-figure pottery in the elite graves of the late Hallstatt period. It was probably imported via Massilia (Marseilles). Other imported luxuries include amber, ivory (as found at the Grafenbühl Tomb) and probably wine. Red kermes dye was imported from the south as well; it was found at Hochdorf. Notable individual imports include the Greek Vix krater (the largest known metal vessel from Western classical antiquity), the Etruscan lebes from Sainte-Colombe-sur-Seine, the Greek hydria from Grächwil, the Greek cauldron from Hochdorf and the Greek or Etruscan cauldron from Lavau.
Settlements
The largest settlements were mostly fortified, situated on hilltops, and frequently included the workshops of bronze, silver and gold smiths. Major settlements are known as 'princely seats' (or Fürstensitze in German), and are characterized by elite residences, rich burials, monumental buildings and fortifications. Some of these central sites are described as urban or proto-urban, and as "the first cities north of the Alps". Typical sites of this type are the Heuneburg on the upper Danube surrounded by nine very large grave tumuli, and Mont Lassois in eastern France near Châtillon-sur-Seine with, at its foot, the very rich grave at Vix. The Heuneburg is thought to correspond to the Celtic city of 'Pyrene' mentioned by Herodotus in 450 BC.
Other important sites include the Glauberg, Hohenasperg and Ipf in Germany, the Burgstallkogel in Austria and Molpír in Slovakia. However, most settlements were much smaller villages. The large monumental site of Alte Burg may have had a religious or ceremonial function, and possibly served as a location for games and competitions.
At the end of the Hallstatt period many major centres were abandoned and there was a return to a more decentralized settlement pattern. Urban centres later re-emerged across temperate Europe in the 3rd and 2nd centuries BC during the La Tène period.
Burial rites
The burials at Hallstatt itself show a movement over the period from cremation to inhumation, with grave goods at all times (see above).
In the central Hallstatt regions toward the end of the period (Ha D), very rich graves of high-status individuals under large tumuli are found near the remains of fortified hilltop settlements. Tumuli graves had a chamber, rather large in some cases, lined with timber and with the body and grave goods set about the room. There are some chariot or wagon burials, including Býčí Skála and Brno-Holásky in the Czech Republic, Vix, Sainte-Colombe-sur-Seine and Lavau in France, Hochdorf, Hohmichele and Grafenbühl in Germany, and Mitterkirchen in Austria.
A model of a chariot made from lead has been found in Frög, Carinthia, and clay models of horses with riders are also found. Wooden "funerary carts", presumably used as hearses and then buried, are sometimes found in the grandest graves. Pottery and bronze vessels, weapons, elaborate jewellery made of bronze and gold, as well as a few stone stelae (especially the famous Warrior of Hirschlanden) are found at such burials. The daggers that largely replaced swords in chief's graves in the west were probably not serious weapons, but badges of rank, and used at the table.
Social structure
The material culture of Western Hallstatt culture was apparently sufficient to provide a stable social and economic equilibrium. The founding of Marseille and the penetration by Greek and Etruscan culture after , resulted in long-range trade relationships up the Rhone valley which triggered social and cultural transformations in the Hallstatt settlements north of the Alps. Powerful local chiefdoms emerged which controlled the redistribution of luxury goods from the Mediterranean world that is also characteristic of the La Tène culture.
The apparently largely peaceful and prosperous life of Hallstatt D culture was disrupted, perhaps even collapsed, right at the end of the period. There has been much speculation as to the causes of this, which remain uncertain. Large settlements such as Heuneburg and the Burgstallkogel were destroyed or abandoned, rich tumulus burials ended, and old ones were looted. There was probably a significant movement of population westwards, and the succeeding La Tène culture developed new centres to the west and north, their growth perhaps overlapping with the final years of the Hallstatt culture.
Technology
Occasional iron artefacts had been appearing in central and western Europe for some centuries before 800 BC (an iron knife or sickle from Ganovce in Slovakia, dating to the 18th century BC, is possibly the earliest evidence of smelted iron in Central Europe). By the later Urnfield (Hallstatt B) phase, some swords were already being made and embellished in iron in eastern Central Europe, and occasionally much further west.
Initially iron was rather exotic and expensive, and sometimes used as a prestige material for jewellery. Iron swords became more common after , and steel was also produced from 800 BC as part of the production of swords. The production of high-carbon steel is attested in Britain after .
The remarkable uniformity of spoked-wheel wagons from across the Hallstatt region indicates a certain standardisation of production methods, which included techniques such as lathe-turning. Iron tyres were developed and refined in this period, leading to the invention of shrunk-on tyres in the La Tène period. The potter's wheel also appeared in the Hallstatt period.
The extensive use of planking and massive squared beams indicates the use of long saw blades and possibly two-man sawing. The planks of the Hohmichele burial chamber (6th c. BC), which were over 6 m long and 35 cm wide, appear to have been sawn by a large timber-yard saw. The construction of monumental buildings such as the Vix palace further demonstrates a "mastery of geometry and carpentry capable of freeing up vast interior spaces."
Analyses of building remains in Silesia have found evidence for the use of a standard unit of length (equivalent to 0.785 m). Remarkably, this is almost identical to the length of a measuring stick found at Borum Eshøj in Denmark (0.7855 m), dating from the Bronze Age (c. 1350 BC). Pythagorean triangles were likely used in building construction to create right angles, and some buildings had ground plans with dimensions corresponding to Pythagorean rectangles.
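As a rough illustration of how a standard unit and a Pythagorean triple can be combined to lay out a right angle, the sketch below uses the 3-4-5 triple measured in the reconstructed 0.785 m unit. The dimensions are illustrative only and are not taken from any excavated ground plan.

import math

UNIT_M = 0.785  # reconstructed unit of length from Silesian buildings, in metres

# Ropes or rods of 3, 4 and 5 units stretched into a triangle force a
# right angle between the 3-unit and 4-unit sides (3^2 + 4^2 = 5^2).
a, b, c = (3 * UNIT_M, 4 * UNIT_M, 5 * UNIT_M)  # 2.355 m, 3.14 m, 3.925 m
assert math.isclose(a**2 + b**2, c**2)
print(f"sides {a:.3f} m and {b:.3f} m, diagonal {c:.3f} m -> right angle")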
Art
At least the later periods of Hallstatt art from the western zone are generally agreed to form the early period of Celtic art. Decoration is mostly geometric and linear, and best seen on fine metalwork finds from graves (see above). Styles differ, especially between the west and east, with more human figures and some narrative elements in the latter. Animals, with waterfowl a particular favourite, are often included as part of other objects, more often than humans, and in the west there is almost no narrative content such as scenes of combat depicted. These characteristics were continued into the succeeding La Tène style.
Imported luxury art is sometimes found in rich elite graves in the later phases, and certainly had some influence on local styles. The most spectacular objects, such as the Strettweg Cult Wagon, the Warrior of Hirschlanden and the bronze couch supported by "unicyclists" from the Hochdorf Chieftain's Grave are one of a kind in finds from the Hallstatt period, though they can be related to objects from other periods.
More common objects include weapons, in Ha D often with hilts terminating in curving forks ("antenna hilts"). Jewellery in metal includes fibulae, often with a row of disks hanging down on chains, armlets and some torcs. This is mostly in bronze, but "princely" burials include items in gold.
The origin of the narrative scenes of the eastern zone, from Hallstatt C onwards, is generally traced to influence from the Situla art of northern Italy and the northern Adriatic, where these bronze buckets began to be decorated in bands with figures in provincial Etruscan centres influenced by Etruscan and Greek art. The fashion for decorated situlae spread north across neighbouring cultures including the eastern Hallstatt zone, beginning around 600 BC and surviving until about 400 BC; the Vače situla is a Slovenian example from near the final period. The style is also found on bronze belt plates, and some of the vocabulary of motifs spread to influence the emerging La Tène style.
According to Ruth and Vincent Megaw, "Situla art depicts life as seen from a masculine viewpoint, in which women are servants or sex objects; most of the scenes which include humans are of the feasts in which the situlae themselves figure, of the hunt or of war". Similar scenes are found on other vessel shapes, as well as bronze belt-plaques. The processions of animals, typical of earlier examples, or humans derive from the Near East and Mediterranean, and Nancy Sandars finds the style shows "a gaucherie that betrays the artist working in a way that is uncongenial, too much at variance with the temper of the craftsmen and the craft". Compared to earlier styles that arose organically in Europe "situla art is weak and sometimes quaint", and "in essence not of Europe".
Except for the Italian Benvenuti Situla, men are hairless, with "funny hats, dumpy bodies and big heads", though often shown looking cheerful in an engaging way. The Benvenuti Situla is also unusual in that it seems to show a specific story.
The Strettweg cult wagon from Austria (c. 600 BC) has been interpreted as representing a deer goddess or 'Great Nature Goddess' similar to Artemis.
Hallstatt culture musical instruments included harps, lyres, zithers, woodwinds, panpipes, horns, drums and rattles.
Inscriptions
A small number of inscriptions have been recovered from Hallstatt culture sites. Markings or symbols inscribed on iron tools from Austria dating from the early Iron Age (Ha C, 800–650 BC) show continuity with symbols from the Bronze Age Urnfield culture, and are thought to be related to mining and the metal trade. Inscriptions engraved on situlae or cauldrons from the Hallstatt cemetery in Austria, dating from c. 800–500 BC, have been interpreted as numerals, letters and words, possibly related to Etruscan or Old Italic scripts. Weights from Bavaria dating from the 7th to early 6th century BC bear signs possibly resembling Greek or Etruscan letters. A single-word inscription (possibly a name) on a locally produced ceramic sherd from Montmorot in eastern France, dating from the late 7th to mid-6th century BC, has been identified as either Gaulish or Lepontic, written in either a 'proto-Lepontic' or Etruscan alphabet. A fragment of an inscription painted on local pottery has also been recovered from the late Hallstatt site of Bragny-sur-Saône in eastern France, dating from the 5th century BC. A gold cup inscribed with a single letter was deposited in a princely tomb at Apremont in eastern France, dating from c. 500 BC.
Another fragmentary inscription on pottery was found in a princely burial near Bergères-les-Vertus in north-eastern France, dating from late 5th century BC (at the beginning of La Tène A). The inscription has been identified as the Celtic word for "king", written in the Lepontic alphabet. According to Olivier (2010), "this graffito represents one of the earliest attested occurrences of the word rîx which designates the "king" in the Celtic languages. ... It would also seem to represent the first co-occurrence in the Celtic world of a funerary archaeological context and a contemporaneous linguistic qualification as ‘royal’.” According to Verger (1998) the 7th-6th century BC inscription from Montmorot "is at the beginning of a still limited series of documents attesting to the use of alphabetic signs and the use of writing in Eastern Gaul during the entire period characterised by the appearance, development and end of the Hallstattian 'princely phenomenon'. ... The first transmission of the alphabet north of the Alps, at the end of the 7th or in the first half of the 6th century, seems to be only the beginning of a process that was regularly renewed until the second half of the fifth century."
Calendar
The monumental burial mounds at Glauberg and Magdalenenberg in Germany featured structures aligned with the point of the major lunar standstill, which occurs every 18.6 years. At Glauberg this took the form of a 'processional avenue' lined by large ditches, whilst at Magdalenenberg the alignment was marked with a large timber palisade. The knowledge required to create these alignments would have required long-term observation of the skies, possibly over several generations. At Glauberg other ditches and postholes associated with the mound may have been used to observe astronomical phenomena such as the solstices, with the whole ensemble functioning as a calendar. According to the archaeologist Allard Mees, the numerous burials within the Magdalenenberg mound were positioned to mirror the constellations as they appeared at the time of the summer solstice in 618 BC. Mees argues that the Magdalenenberg represented a lunar calendar and that knowledge of the 18.6 year lunar standstill cycle would have enabled the prediction of lunar eclipses. According to Mees many other burial mounds in this period were also aligned with lunar phenomena. An analysis of Hallstatt period burials by Müller-Scheeßel (2005) similarly suggested that they were oriented towards specific constellations. According to Gaspani (1998) the diagonals of the rectangular Hochdorf burial chamber were also aligned with the major lunar standstill.
Further evidence for knowledge of lunar eclipse cycles in this period comes from Fiskerton in England, where an analysis of a wooden causeway used to make ritual depositions into the Witham river found that the causeway timbers were felled at intervals corresponding to Saros lunar eclipse cycles, indicating that the builders were able to predict these cycles in advance. According to the archaeologists who studied this site, similar evidence can be found at other locations in the British Isles and Central Europe dating from the Late Bronze Age through to the Late Iron Age, suggesting that knowledge of lunar eclipse cycles may have been widespread in this period. Knowledge of Saros cycles may also be numerically encoded on the Late Bronze Age Berlin Gold Hat.
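The arithmetic behind these cycles is straightforward to state. The figures below are standard modern astronomical values, quoted only to show what long-run observation would have had to approximate; they are not measurements from the sites.

SYNODIC_MONTH_DAYS = 29.530589  # mean interval from new moon to new moon
TROPICAL_YEAR_DAYS = 365.2422

# One Saros cycle = 223 synodic months: eclipses of near-identical
# geometry recur after this interval.
saros_days = 223 * SYNODIC_MONTH_DAYS
print(f"Saros cycle: {saros_days:.2f} days = "
      f"{saros_days / TROPICAL_YEAR_DAYS:.2f} years")  # ~6585.32 days, ~18.03 years

# The lunar standstill cycle (regression of the Moon's nodes) is ~18.61
# years, the period encoded by the Glauberg and Magdalenenberg alignments.
print("Major lunar standstill interval: ~18.61 years")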
The mountain peaks around Hallstatt may have also functioned as a large sundial and visual calendar. A similar system is thought to have existed in the Vosges mountains during the Iron Age (the Belchen System) which incorporated dates associated with major Celtic festivals such as Beltane.
Kruta (2010) interprets a large decorated brooch from Hallstatt, dating from the 6th century BC, as representing a 'symbolic summary of the course of the year'. Depictions on the brooch of waterfowl (representing night and winter) and horses (representing day and summer) are placed above a solar boat (representing the sun's journey from one winter solstice to the next), whilst twelve circular pendants represent the lunar months.
According to Garrett Olmsted (2001) the Celtic Coligny calendar, dating from the 1st-2nd centuries AD, has origins dating back to the Hallstatt period or Late Bronze Age, indicating that calendrical knowledge was transmitted through a long oral tradition.
Geography
Two culturally distinct areas, an eastern and a western zone, are generally recognised. There are distinctions in burial rites, the types of grave goods, and in artistic style. In the western zone, members of the elite were buried with a sword (HaC) or dagger (HaD); in the eastern zone, with an axe. The western zone has chariot burials. In the eastern zone, warriors are frequently buried with a helmet and a breastplate of plate armour. Artistic subjects with a narrative component are only found in the east, in both pottery and metalwork. In the east the settlements and cemeteries can be larger than in the west.
The approximate division line between the two subcultures runs from north to south through central Bohemia and Lower Austria at about 14 to 15 degrees eastern longitude, and then traces the eastern and southern rim of the Alps to Eastern and Southern Tyrol.
Western Hallstatt zone
Taken at its most generous extent, the western Hallstatt zone includes:
northeastern France: Burgundy, Franche-Comté, Champagne-Ardenne, Lorraine, Alsace
northern Switzerland: Swiss plateau
Southern Germany: much of Swabia and Bavaria
western Czech Republic: Bohemia
western Austria: Vorarlberg, Tyrol, Salzkammergut
West Hallstatt influence is also later perceptible in the rest of France, in England and Ireland, in northern Italy and in central, northern and western Iberia.
While Hallstatt is regarded as the dominant settlement of the western zone, a settlement at the Burgstallkogel in the central Sulm valley (southern Styria, west of Leibnitz, Austria) was a major centre during the Hallstatt C period. Parts of the huge necropolis (which originally consisted of more than 1,100 tumuli) surrounding this settlement can be seen today near Gleinstätten, and the chieftain's mounds were on the other side of the hill, near Kleinklein. The finds are mostly in the Landesmuseum Joanneum at Graz, which also holds the Strettweg Cult Wagon.
Eastern Hallstatt zone
The eastern Hallstatt zone includes:
eastern Austria: Lower Austria, Upper Styria
eastern Czech Republic: Moravia
southwestern Slovakia: Danubian Lowland
western Hungary: Little Hungarian Plain
eastern Slovenia: Hallstatt Archaeological Site in Vače (at the border between Lower Styria and Lower Carniola regions), Novo Mesto
western Slovenia: Svetolucijska Hallstatt culture Most na Soči, Notranjsko kraška Slovene Littoral
northern Croatia: Hrvatsko Zagorje, Istria
northern and central Serbia
parts of southwestern Poland
northern and western Bulgaria
Trade, cultural diffusion, and some population movements spread the Hallstatt cultural complex (western form) into Britain, and Ireland.
Genetics
Damgaard et al. (2018) analyzed the remains of a male and a female buried at a Hallstatt cemetery near Litoměřice, Czech Republic, between ca. 600 BC and 400 BC. The male carried the paternal haplogroup R1b and the maternal haplogroup H6a1a. The female was a carrier of the maternal haplogroup HV0. Damgaard further examined the remains of five individuals ascribed to either Hallstatt C or the early La Tène culture. The single male among them carried Y-haplogroup G2a, and the five individuals carried mt-haplogroups K1a2a, J1c2o, H7d, U5a1a1 and J1c-16261. The examined individuals of the Hallstatt culture and La Tène culture displayed genetic continuity with the earlier Bell Beaker culture, and carried about 50% steppe-related ancestry.
A later study, Fischer et al. (2022), examined 49 genomes from 27 sites in Bronze Age and Iron Age France. The study found strong genetic continuity between the two periods, particularly in southern France. The samples from northern and southern France were highly homogeneous; the northern French samples were distinguished from the southern ones by elevated levels of steppe-related ancestry. R1b was by far the most dominant paternal lineage, while H was the most common maternal lineage. The northern samples displayed links to contemporary samples from Great Britain and Sweden, while the southern samples displayed links to Celtiberians. The Iron Age samples resembled those of modern-day populations of France, Great Britain and Spain. The evidence suggested that the Celts of the Hallstatt culture largely evolved from local Bronze Age populations.
See also
Glauberg
Hohenasperg
Ipf (mountain)
Mont Lassois (commune of Vix)
Burgstallkogel
Alte Burg
Vorstengraf (Oss)
Grafenbühl grave
Bronze- and Iron-Age Poland
Celtic warfare
Iron Age sword
Iron Age France
Iron Age Switzerland
Noric steel
Irschen
Zollfeld
The Collection of Pre- and Protohistoric Artifacts at the University of Jena
Citations
Sources
Barth, F.E., J. Biel, et al. Vierrädrige Wagen der Hallstattzeit ("Four-wheeled wagons of the Hallstatt period"). Mainz: Römisch-Germanisches Zentralmuseum, 1987.
Laing, Lloyd and Jenifer. Art of the Celts, Thames and Hudson, London 1992
McIntosh, Jane, Handbook to Life in Prehistoric Europe, 2009, Oxford University Press (USA),
Megaw, Ruth and Vincent, Celtic Art, 2001, Thames and Hudson,
Sandars, Nancy K., Prehistoric Art in Europe, Penguin (Pelican, now Yale, History of Art), 1968 (nb 1st edn.)
Further reading
Documentary
Klaus T. Steindl: MYTHOS HALLSTATT - Dawn of the Celts. TV documentary featuring new findings and reporting on the state of archaeological research on the Celts (2018)
Archaeological cultures of Europe
Archaeological cultures of West Asia
Iron Age cultures of Europe
Celtic archaeological cultures
Iron Age of Slovenia
Archaeological cultures in Austria
Archaeological cultures in Belgium
Archaeological cultures in Bulgaria
Archaeological cultures in Croatia
Archaeological cultures in the Czech Republic
Archaeological cultures in England
Archaeological cultures in France
Archaeological cultures in Germany
Archaeological cultures in Hungary
Archaeological cultures in Portugal
Archaeological cultures in Romania
Archaeological cultures in Serbia
Archaeological cultures in Slovakia
Archaeological cultures in Slovenia
Archaeological cultures in Spain
Archaeological cultures in Switzerland
Archaeological cultures in Turkey
World Heritage Sites in Austria
8th-century BC establishments
6th-century BC disestablishments
States and territories established in the 8th century BC
States and territories disestablished in the 6th century BC
Constitutionalism
Constitutionalism is "a compound of ideas, attitudes, and patterns of behavior elaborating the principle that the authority of government derives from and is limited by a body of fundamental law".
Political organizations are constitutional to the extent that they "contain institutionalized mechanisms of power control for the protection of the interests and liberties of the citizenry, including those that may be in the minority". As described by political scientist and constitutional scholar David Fellman:
Definition
Constitutionalism has prescriptive and descriptive uses. Law professor Gerhard Casper captured this aspect of the term in noting, "Constitutionalism has both descriptive and prescriptive connotations. Used descriptively, it refers chiefly to the historical struggle for constitutional recognition of the people's right to 'consent' and certain other rights, freedoms, and privileges. Used prescriptively, its meaning incorporates those features of government seen as the essential elements of the... Constitution".
Descriptive
One example of constitutionalism's descriptive use is law professor Bernard Schwartz's five volume compilation of sources seeking to trace the origins of the U.S. Bill of Rights. Beginning with English antecedents going back to Magna Carta (1215), Schwartz explores the presence and development of ideas of individual freedoms and privileges through colonial charters and legal understandings. Then in carrying the story forward, he identifies revolutionary declarations and constitutions, documents and judicial decisions of the Confederation period and the formation of the federal Constitution. Finally, he turns to the debates over the federal Constitution's ratification that ultimately provided mounting pressure for a federal bill of rights. While hardly presenting a straight line, the account illustrates the historical struggle to recognize and enshrine constitutional rights and principles in a constitutional order.
Prescriptive
In contrast to describing what constitutions are, a prescriptive approach addresses what a constitution should be. As presented by the Canadian philosopher Wil Waluchow, constitutionalism embodies
One example of this prescriptive approach was the project of the National Municipal League to develop a model state constitution.
Constitutionalism vs. Constitution
The study of constitutions is not necessarily synonymous with the study of constitutionalism.
Legal historian Christian G. Fritz distinguishes between "constitutional questions", examining how the constitution was interpreted and applied to distribute power and authority as the new nation struggled with problems of war and peace, taxation and representation, and "questions of constitutionalism—how to identify the collective sovereign, what powers the sovereign possessed, and how one recognized when that sovereign acted." He noted that "questions of constitutionalism could not be answered by reference to given constitutional text or even judicial opinions" but were "open-ended questions drawing upon competing views".
A similar distinction was drawn by British constitutional scholar A.V. Dicey in assessing Britain's unwritten constitution. Dicey noted a difference between the "conventions of the constitution" and the "law of the constitution". The "essential distinction" between the two concepts was that the law of the constitution was made up of "rules enforced or recognised by the Courts", making up "a body of 'laws' in the proper sense of that term." In contrast, the conventions of the constitution consisted "of customs, practices, maxims, or precepts which are not enforced or recognised by the Courts" but "make up a body not of laws, but of constitutional or political ethics".
Core features
Fundamental law and legitimacy of government
One of the most salient features of constitutionalism is that it describes and prescribes both the source and the limits of government power derived from fundamental law. Walton H. Hamilton has captured this dual aspect by noting that constitutionalism "is the name given to the trust which men repose in the power of words engrossed on parchment to keep a government in order."
Moreover, whether reflecting a descriptive or prescriptive focus, treatments of the concept of constitutionalism all deal with the legitimacy of government. One recent assessment of American constitutionalism, for example, notes that the idea of constitutionalism serves to define what it is that "grants and guides the legitimate exercise of government authority". Similarly, historian Gordon S. Wood described the most "advanced thinking" on the nature of constitutions, wherein the constitution was conceived (according to Demophilis, who was possibly George Bryan) as a "sett of fundamental rules by which even the supreme power of the state shall be governed." Ultimately, American constitutionalism came to rest on the collective sovereignty of the people, the source that legitimized American governments.
Civil rights and liberties
Constitutionalism is not simply about the power structure of society. It also asks for a strong protection of the interests of citizens, civil rights as well as civil liberties, especially for the social minorities, and has a close relation with democracy. The United Kingdom has had basic laws limiting governmental power for centuries. Historically, there has been little political support for introducing a comprehensive written or codified constitution in the UK. However, several commentators and reformers have argued for a new British Bill of Rights to provide liberty, democracy and the rule of law with more effective constitutional protection.
Criticisms
Legal scholar Jeremy Waldron contends that constitutionalism is often undemocratic:
Constitutionalism has also been the subject of criticism by Murray Rothbard, who attacked constitutionalism as being incapable of restraining governments and not protecting the rights of citizens from their governments:
[i]t is true that, in the United States, at least, we have a constitution that imposes strict limits on some powers of government. But, as we have discovered in the past century, no constitution can interpret or enforce itself; it must be interpreted by men. And if the ultimate power to interpret a constitution is given to the government's own Supreme Court, then the inevitable tendency is for the Court to continue to place its imprimatur on ever-broader powers for its own government. Furthermore, the highly touted "checks and balances" and "separation of powers" in the American government are flimsy indeed, since in the final analysis all of these divisions are part of the same government and are governed by the same set of rulers.
Constitutionalism by nations
Used descriptively, the concept of constitutionalism can refer chiefly to the historical struggle for constitutional recognition of the people's right to "consent" and certain other rights, freedoms, and privileges. On the other hand, the prescriptive approach to constitutionalism addresses what a constitution should be. Two observations might be offered about its prescriptive use.
There is often confusion in equating the presence of a written constitution with the conclusion that a state or polity is one based upon constitutionalism. As noted by David Fellman, constitutionalism "should not be taken to mean that if a state has a constitution, it is necessarily committed to the idea of constitutionalism. In a very real sense... every state may be said to have a constitution, since every state has institutions which are at the very least expected to be permanent, and every state has established ways of doing things". But even with a "formal written document labelled 'constitution' which includes the provisions customarily found in such a document, it does not follow that it is committed to constitutionalism...."
Often the word "constitutionalism" is used in a rhetorical sense, as a political argument that equates the views of the speaker or writer with a preferred view of the constitution. For instance, University of Maryland Constitutional History Professor Herman Belz's critical assessment of expansive constitutional construction notes that "constitutionalism... ought to be recognized as a distinctive ideology and approach to political life.... Constitutionalism not only establishes the institutional and intellectual framework, but it also supplies much of the rhetorical currency with which political transactions are carried on." Similarly, Georgetown University Law Center Professor Louis Michael Seidman noted as well the confluence of political rhetoric with arguments supposedly rooted in constitutionalism. In assessing the "meaning that critical scholars attributed to constitutional law in the late twentieth century," Professor Seidman notes a "new order... characterized most prominently by extremely aggressive use of legal argument and rhetoric" and as a result "powerful legal actors are willing to advance arguments previously thought out-of-bounds. They have, in short, used legal reasoning to do exactly what crits claim legal reasoning always does—put the lipstick of disinterested constitutionalism on the pig of raw politics."
United States
Descriptive
Constitutionalism of the United States has been defined as a complex of ideas, attitudes and patterns elaborating the principle that the authority of government derives from the people, and is limited by a body of fundamental law. These ideas, attitudes and patterns, according to one analyst, derive from "a dynamic political and historical process rather than from a static body of thought laid down in the eighteenth century".
In U.S. history, constitutionalism, in both its descriptive and prescriptive sense, has traditionally focused on the federal constitution. Indeed, a routine assumption of many scholars has been that understanding "American constitutionalism" necessarily entails the thought that went into the drafting of the federal constitution and the American experience with that constitution since its ratification in 1789.
There is a rich tradition of state constitutionalism that offers broader insight into constitutionalism in the United States. While state constitutions and the federal constitution operate differently as a function of federalism, arising from the coexistence and interplay of governments at both the national and state level, they all rest on a shared assumption that their legitimacy comes from the sovereign authority of the people, or popular sovereignty. This underlying premise, embraced by the American revolutionaries with the Declaration of Independence, unites the American constitutional tradition.
Both experience with state constitutions before and after the federal constitution as well as the emergence and operation of the latter reflect an ongoing struggle over the idea that all governments in America rested on the sovereignty of the people for their legitimacy.
Prescriptive
Starting with the proposition that "'Constitutionalism' refers to the position or practice that government be limited by a constitution, usually written," analysts take a variety of positions on what the constitution means. For instance, they describe the document as a document that may specify its relation to statutes, treaties, executive and judicial actions, and the constitutions or laws of regional jurisdictions. This prescriptive use of Constitutionalism is also concerned with the principles of constitutional design, which includes the principle that the field of public action be partitioned between delegated powers to the government and the rights of individuals, each of which is a restriction of the other, and that no powers be delegated that are beyond the competence of government.
Two notable Chief Justices of the United States who played an important role in the development of American constitutionalism are John Marshall and Earl Warren. John Marshall, the 4th Chief Justice, upheld the principle of judicial review in the 1803 landmark case Marbury v. Madison, whereby the Supreme Court could strike down federal and state laws if they conflicted with the Constitution. By establishing the principle of judicial review, the Marshall Court helped implement the ideology of separation of powers and cement the position of the American judiciary as an independent and co-equal branch of government. On the other hand, Earl Warren, the 14th Chief Justice, greatly extended the civil rights and civil liberties of all Americans through a series of landmark rulings. The Warren Court started a liberal constitutional revolution by bringing "one man, one vote" to the United States, tearing apart racial segregation and state laws banning interracial marriage, extending the coverage of the Bill of Rights, providing defendants with the rights to an attorney and to silence (the Miranda warning), and so on.
United Kingdom
Descriptive
The United Kingdom is perhaps the best instance of constitutionalism in a country that has an uncodified constitution. A variety of developments in 17th century England, including the Constitutional Monarchy and "the protracted struggle for power between King and Parliament was accompanied by an efflorescence of political ideas in which the concept of countervailing powers was clearly defined," led to a well-developed polity with multiple governmental and private institutions that counter the power of the state.
Prescriptive
Constitutionalist was also a label used by some independent candidates in UK general elections in the early 1920s. Most of the candidates were former Liberal Party members, and many of them joined the Conservative Party soon after being elected. The best known Constitutionalist candidate was Winston Churchill in the 1924 UK general election.
Japan
Since May 3, 1947, the sovereign state of Japan has maintained a unitary parliamentary constitutional monarchy with an Emperor and an elected legislature called the National Diet.
Polish–Lithuanian Commonwealth
Descriptive
From the mid-sixteenth to the late eighteenth century, the Polish–Lithuanian Commonwealth utilized the liberum veto, a form of unanimity voting rule, in its parliamentary deliberations. The "principle of liberum veto played an important role in [the] emergence of the unique Polish form of constitutionalism." This constraint on the powers of the monarch was significant in making the "[r]ule of law, religious tolerance and limited constitutional government... the norm in Poland in times when the rest of Europe was being devastated by religious hatred and despotism."
Prescriptive
The Constitution of May 3, 1791, which historian Norman Davies calls "the first constitution of its kind in Europe", was in effect for only a year. It was designed to redress longstanding political defects of the Polish–Lithuanian Commonwealth and its traditional system of "Golden Liberty". The Constitution introduced political equality between townspeople and nobility (szlachta) and placed the peasants under the protection of the government, thus mitigating the worst abuses of serfdom.
Dominican Republic
After the democratically elected government of president Juan Bosch in the Dominican Republic was deposed, the Constitutionalist movement was born in the country, along with an opposing Anti-constitutionalist movement. Bosch had to depart to Puerto Rico after he was deposed. The Constitutionalists' first leader was Colonel Rafael Tomás Fernández Domínguez, who wanted Bosch to come back to power. Colonel Fernández Domínguez was exiled to Puerto Rico, where Bosch was, and the Constitutionalists gained a new leader: Colonel Francisco Alberto Caamaño Deñó.
Islamic states
The scope and limits of constitutionalism in Muslim countries have attracted growing interest in recent years. Authors such as Ann E. Mayer define Islamic constitutionalism as "constitutionalism that is in some form based on Islamic principles, as opposed to constitutionalism that has developed in countries that happen to be Muslim but that has not been informed by distinctively Islamic principles". However, the concrete meaning of the notion remains contested among Muslim as well as Western scholars. Influential thinkers like Mohammad Hashim Kamali and Khaled Abou El Fadl, as well as younger scholars like Asifa Quraishi and Nadirsyah Hosen, combine classic Islamic law with modern constitutionalism. The constitutional changes initiated by the Arab Spring movement have already produced several new hybrid models of Islamic constitutionalism.
See also
Classical liberalism
Constitutional liberalism
Constitution Party (disambiguation)
Constitutional law
Constitutionalism in the United States
Digital constitutionalism
English Constitution Party
Judicial interpretation
Libertarianism
Natural and legal rights
Philosophy of law
Rule according to higher law
Rule of law
Separation of powers
Social contract
References
Further reading
Gebeye, Berihun Adugna (2021). A Theory of African Constitutionalism. Oxford University Press.
Möller, Kai (2012). The Global Model of Constitutional Rights. Oxford University Press.
External links
Philip P. Wiener, ed., "Dictionary of the History of Ideas: Studies of Selected Pivotal Ideas", (David Fellman, "Constitutionalism"), vol 1, pp. 485, 491–492 (1973–74).
National Humanities Institute
Vile, M. J. C., Constitutionalism and the Separation of Powers (1967; 2nd ed., Indianapolis: Liberty Fund, 1998).
"Economics and the Rule of Law" The Economist (2008-03-13).
Middle East Constitutional Forum
The Social and Political Foundations of Constitutions Foundation for Law, Justice and Society programme
Comparative politics
Constitutional law
Philosophy of law
Theories of law
Western culture | 0.768245 | 0.995799 | 0.765018 |
Colon classification | Colon classification (CC) is a library catalogue system developed by Shiyali Ramamrita Ranganathan. It was an early faceted (or analytico-synthetic) classification system. The first edition of colon classification was published in 1933, followed by six more editions. It is especially used in libraries in India.
Its name originates from its use of colons to separate facets into classes. Many other classification schemes, some of which are unrelated, also use colons and other punctuation to perform various functions. Originally, CC used only the colon as a separator, but since the second edition, CC has used four other punctuation symbols to identify each facet type.
In CC, facets describe "personality" (the most specific subject), matter, energy, space, and time (PMEST). These facets are generally associated with every item in a library, and thus form a reasonably universal sorting system.
As an example, the subject "research in the cure of tuberculosis of lungs by x-ray conducted in India in 1950" would be categorized as:

Medicine,Lungs;Tuberculosis:Treatment;X-ray:Research.India'1950

This is summarized in a specific call number:

L,45;421:6;253:f.44'N5
Organization
The colon classification system uses 42 main classes that are combined with other letters, numbers, and marks in a manner resembling the Library of Congress Classification.
Facets
CC uses five primary categories, or facets, to specify the sorting of a publication. Collectively, they are called PMEST:

, Personality (comma)
; Matter (semicolon)
: Energy (colon)
. Space (period)
' Time (apostrophe)
Other symbols can be used to indicate components of facets called isolates, and to specify complex combinations or relationships between disciplines.
Classes
The following are the main classes of CC, with some subclasses, the main method used to sort the subclass using the PMEST scheme and examples showing application of PMEST.
z Generalia
1 Universe of Knowledge
2 Library Science
3 Book science
4 Journalism
A Natural science
B Mathematics
B2 Algebra
C Physics
D Engineering
E Chemistry
F Technology
G Biology
H Geology
HX Mining
I Botany
J Agriculture
J1 Horticulture
J2 Feed
J3 Food
J4 Stimulant
J5 Oil
J6 Drug
J7 Fabric
J8 Dye
K Zoology
KZ Animal Husbandry
L Medicine
LZ3 Pharmacology
LZ5 Pharmacopoeia
M Useful arts
M7 Textiles [material]:[work]
Δ Spiritual experience and mysticism [religion],[entity]:[problem]
N Fine arts
ND Sculpture
NN Engraving
NQ Painting
NR Music
O Literature
P Linguistics
Q Religion
R Philosophy
S Psychology
T Education
U Geography
V History
W Political science
X Economics
Y Sociology
YZ Social Work
Z Law
Example
A common example of the colon classification is:
"Research in the cure of the tuberculosis of lungs by x-ray conducted in India in 1950s":
The main classification is Medicine (L);
Within Medicine, the Lungs are the main concern;
The property of the Lungs is that they are afflicted with Tuberculosis;
An action (:) is being performed on the Tuberculosis; that is, the intent is to cure it (Treatment);
The matter that we are treating the Tuberculosis with is X-Rays;
And this discussion of treatment is regarding the Research phase;
This Research is performed within a geographical space (.), namely India;
During the time (') of 1950;
And finally, translating into the codes listed for each subject and facet, the classification becomes L,45;421:6;253:f.44'N5.
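The mechanics of facet synthesis can be made concrete in a short sketch. The following Python snippet is a minimal illustration, not part of Ranganathan's scheme: the facet codes are the ones from the worked example above, and the small dictionary stands in for the full CC schedules.

```python
# Minimal sketch of Colon Classification facet synthesis.
# Separator symbols follow PMEST usage; the facet codes below are the
# illustrative values from the worked example, not a full CC schedule.

FACET_SYMBOLS = {
    "personality": ",",
    "matter": ";",
    "energy": ":",
    "space": ".",
    "time": "'",
}

def build_call_number(main_class, facets):
    """Join a main class with (facet_type, code) pairs in citation order."""
    number = main_class
    for facet_type, code in facets:
        number += FACET_SYMBOLS[facet_type] + code
    return number

# "Research in the cure of tuberculosis of lungs by x-ray, India, 1950"
print(build_call_number("L", [
    ("personality", "45"),  # Lungs
    ("matter", "421"),      # Tuberculosis
    ("energy", "6"),        # Treatment
    ("matter", "253"),      # X-ray (second round)
    ("energy", "f"),        # Research (phase)
    ("space", "44"),        # India
    ("time", "N5"),         # 1950s
]))  # prints: L,45;421:6;253:f.44'N5
```

Because each facet carries its own punctuation symbol, the order of concatenation alone reproduces the call number shown above.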
References
Further reading
Colon Classification (6th Edition) by Shiyali Ramamrita Ranganathan, published by Ess Ess Publications, Delhi, India
Chan, Lois Mai. Cataloging and Classification: An Introduction. 2nd ed. New York: McGraw-Hill, c. 1994.
Knowledge representation
Library cataloging and classification | 0.775474 | 0.98649 | 0.764997 |
Nation-building | Nation-building is constructing or structuring a national identity using the power of the state. Nation-building aims at the unification of the people within the state so that it remains politically stable and viable in the long run. According to Harris Mylonas, "Legitimate authority in modern national states is connected to popular rule, to majorities. Nation-building is the process through which these majorities are constructed." In Harris Mylonas's framework, "state elites employ three nation-building policies: accommodation, assimilation, and exclusion."
Nation builders are those members of a state who take the initiative to develop the national community through government programs, including military conscription and national content mass schooling. Nation-building can involve the use of propaganda or major infrastructure development to foster social harmony and economic growth. According to Columbia University sociologist Andreas Wimmer, three factors tend to determine the success of nation-building over the long-run: "the early development of civil-society organisations, the rise of a state capable of providing public goods evenly across a territory, and the emergence of a shared medium of communication."
Overview
In the modern era, nation-building referred to the efforts of newly independent nations to establish trusted institutions of national government, education, military defence, elections, land registry, import customs, foreign trade, foreign diplomacy, banking, finance, taxation, company registration, police, law, courts, healthcare, citizenship, citizen rights and liberties, marriage registry, birth registry, immigration, transport infrastructure and/or municipal governance charters.
Nation-building can also include attempts to redefine the populace of territories that had been carved out by colonial powers or empires without regard to ethnic, religious, or other boundaries, as in Africa and the Balkans.
These reformed states could then become viable and coherent national entities.
Nation-building also includes the creation of national paraphernalia such as national flags, national coats of arms, national anthems, national days, national stadiums, national airlines, national languages, and national myths.
At a deeper level, national identity may be deliberately constructed by molding different ethnic groups into a nation, especially since in many newly established states colonial practices of divide and rule had resulted in ethnically heterogeneous populations.
In a functional understanding of nation-building, both economic and social factors are seen as influential. The development of nation-states in different times and places is influenced by differing conditions. It has been suggested that elites and masses in Great Britain, France, and the United States slowly grew to identify with each other as those states were established and that nationalism developed as more people were able to participate politically and to receive public goods in exchange for taxes. The more recent development of nation-states in geographically diverse, postcolonial areas may not be comparable due to differences in underlying conditions.
Many new states were plagued by cronyism (the exclusion of all but friends); corruption, which erodes trust; and tribalism (rivalry between tribes within the nation). This sometimes resulted in their near-disintegration, such as the attempted secession of Biafra from Nigeria (1967–1970), or the continuing demand of the Somali people in the Ogaden region of Ethiopia for complete independence. The Rwandan genocide, as well as the recurrent problems experienced by the Sudan, can also be related to a lack of ethnic, religious, or racial cohesion within the nation. It has often proved difficult to unite states with similar ethnic but different colonial backgrounds.
Differences in language may be particularly hard to overcome in the process of nation-building. Whereas some consider Cameroon to be an example of success, fractures are emerging in the form of the Anglophone problem. Failures like Senegambia Confederation demonstrate the problems of uniting Francophone and Anglophone territories.
Terminology: nation-building versus state-building
Traditionally, there has been some confusion between the use of the term nation-building and that of state-building (the terms are sometimes used interchangeably in North America). Both have fairly narrow and different definitions in political science, the former referring to national identity, the latter to infrastructure, and the institutions of the state. The debate has been clouded further by the existence of two very different schools of thought on state-building. The first (prevalent in the media) portrays state-building as an interventionist action by foreign countries. The second (more academic in origin and increasingly accepted by international institutions) sees state-building as an indigenous process. For a discussion of the definitional issues, see state-building, Carolyn Stephenson's essay, and the papers by Whaites, CPC/IPA or ODI cited below.
The confusion over terminology has meant that more recently, nation-building has come to be used in a completely different context, with reference to what has been succinctly described by its proponents as "the use of armed force in the aftermath of a conflict to underpin an enduring transition to democracy". In this sense nation-building, better referred to as state-building, describes deliberate efforts by a foreign power to construct or install the institutions of a national government, according to a model that may be more familiar to the foreign power but is often considered foreign and even destabilizing. In this sense, state-building is typically characterized by massive investment, military occupation, transitional government, and the use of propaganda to communicate governmental policy.
Role of education
The expansion of primary school provision is often believed to be a key driver in the process of nation-building. European rulers during the 19th century relied on state-controlled primary schooling to teach their subjects a common language, a shared identity, and a sense of duty and loyalty to the regime. In Prussia, mass primary education was introduced to foster "loyalty, obedience and devotion to the King". These beliefs about the power of education in forming loyalty to the sovereign were adopted by states in other parts of the world as well, in both non-democratic and democratic contexts. Reports on schools in the Soviet Union illustrate the fact that government-sponsored education programs emphasized not just academic content and skills but also taught "a love of country and mercilessness to the enemy, stubbornness in the overcoming of difficulties, an iron discipline, and love of oppressed peoples, the spirit of adventure and constant striving".
Foreign policy operations
Germany and Japan after World War II
After World War II, the Allied victors engaged in large-scale nation-building with considerable success in Germany. The United States, Britain, and France operated sectors that became West Germany. The Soviet Union operated a sector that became East Germany. In Japan, the victors were nominally in charge but in practice, the United States was in full control, again with considerable political, social, and economic impact.
NATO
After the collapse of communism in Yugoslavia in 1989, a series of civil wars broke out. Following the Dayton Agreement, also referred to as the Dayton Accords, NATO (the North Atlantic Treaty Organization) and the European Union engaged in stopping the civil wars, punishing war criminals, and operating nation-building programs, especially in Bosnia and Herzegovina as well as in Kosovo.
Afghanistan
Soviet efforts
Afghanistan was the target for Soviet-style nation-building during the Soviet–Afghan War. However, Soviet efforts bogged down due to Afghan resistance, in which foreign nations (primarily the United States) supported the mujahideen due to the geopolitics of the Cold War. The Soviet Union ultimately withdrew its forces in 1988–89, ending its involvement in the conflict.
NATO efforts
After the Soviets left, the Taliban established de facto control of much of Afghanistan. It tolerated the Al Qaeda forces that carried out the September 11, 2001 attacks on the United States. NATO responded under US leadership. In December 2001, after the Taliban government was overthrown, the Afghan Interim Administration under Hamid Karzai was formed. The International Security Assistance Force (ISAF) was established by the UN Security Council to assist the Karzai administration and provide basic security. By 2001, after two decades of civil war and famine, Afghanistan had one of the lowest life expectancies in the world, and much of the population was hungry. Many foreign donors—51 in all—started providing aid and assistance to rebuild the war-torn country. Norway, for example, had charge of the province of Faryab, where the Norwegian-led Provincial Reconstruction Team had the mission of providing security, good governance, and economic development from 2005 to 2012.
The initial invasion of Afghanistan, intended to disrupt Al Qaeda's networks, ballooned into a 20-year-long nation-building project. Frank McKenzie described it as "an attempt to impose a form of government, a state, that would be a state the way that we recognize a state." According to McKenzie, the US "lost track of why we were there". Afghanistan was not "ungovernable", according to the former Marine Corps general, but it was "ungovernable with the Western model that will be imposed on it". He says the gradual shift to nation-building put the US "far beyond the scope" of their original mission to disrupt Al Qaeda.
References
Further reading
Ahmed, Zahid Shahab. "Impact of the China–Pakistan Economic Corridor on nation-building in Pakistan." Journal of Contemporary China 28.117 (2019): 400–414.
Barkey, Karen. After empire: Multiethnic societies and nation-building: The Soviet Union and the Russian, Ottoman, and Habsburg empires (Routledge, 2018).
Bendix, Reinhard. Nation-building & citizenship: studies of our changing social order (1964), influential pioneer
Berdal, Mats, and Astri Suhrke. "A Good Ally: Norway and International Statebuilding in Afghanistan, 2001-2014." Journal of Strategic Studies 41.1-2 (2018): 61–88. online
Bereketeab, Redie. "Education as an Instrument of Nation‐Building in Postcolonial Africa." Studies in Ethnicity and Nationalism 20.1 (2020): 71–90. online
Bokat-Lindell, Spencer. "Is the United States Done Being the World's Cop?" The New York Times, July 20, 2021.
Dibb, Paul (2010) "The Soviet experience in Afghanistan: lessons to be learned?" Australian Journal of International Affairs 64.5 (2010): 495–509.
Dobbins, James. America's Role in Nation-Building: From Germany to Iraq (RAND, 2005).
Engin, Kenan (2013). "Nation-Building" – Theoretische Betrachtung und Fallbeispiel: Irak. Baden-Baden: Nomos Verlag.
Ergun, Ayça. "Citizenship, National Identity, and Nation-Building in Azerbaijan: Between the Legacy of the Past and the Spirit of Independence." Nationalities Papers (2021): 1–18. online
Eriksen, Thomas Hylland. Common denominators: Ethnicity, nation-building and compromise in Mauritius (Routledge, 2020).
Etzioni, Amitai. "The folly of nation building." National Interest 120 (2012): 60–68; on American misguided efforts online
Hodge, Nathan (2011), Armed Humanitarians: The Rise of the Nation Builders, New York: Bloomsbury USA.
Ignatieff, Michael. (2003) Empire lite: nation building in Bosnia, Kosovo, Afghanistan (Random House, 2003).
Junco, José Alvarez. "The nation-building process in nineteenth-century Spain." in Nationalism and the Nation in the Iberian Peninsula (Routledge, 2020) pp. 89–106.
Latham, Michael E. Modernization as Ideology: American Social Science and "Nation Building" in the Kennedy Era (U North Carolina Press, 2000)
Mylonas, Harris (2017), "Nation-building", Oxford Bibliographies in International Relations. Ed. Patrick James. New York: Oxford University Press.
Polese, Abel, et al., eds. Identity and nation building in everyday post-socialist life (Routledge, 2017).
Safdar, Ghulam, Ghulam Shabir, and Abdul Wajid Khan. "Media's Role in Nation Building: Social, Political, Religious and Educational Perspectives." Pakistan Journal of Social Sciences (PJSS) 38.2 (2018). online
Scott, James Wesley. "Border politics in Central Europe: Hungary and the role of national scale and nation-building." Geographia Polonica 91.1 (2018): 17–32. online
Seoighe, Rachel. War, denial and nation-building in Sri Lanka: after the end (Springer, 2017).
Smith, Anthony (1986), "State-Making and Nation-Building" in John Hall (ed.), States in History. Oxford: Basil Blackwell, 228–263.
Wimmer, Andreas. "Nation building: Why some countries come together while others fall apart." Survival 60.4 (2018): 151–164.
External links
Fritz V, Menocal AR, Understanding State-building from a Political Economy Perspective, ODI, London: 2007.
CIC/IPA, Concepts and Dilemmas of State-building in Fragile Situations, OECD-DAC, Paris: 2008.
Whaites, Alan, State in Development: Understanding State-building, DFID, London: 2008.
Political science terminology
International relations
Nation | 0.768825 | 0.99502 | 0.764996 |
Fennoscandia | Fennoscandia (Finnish, Swedish and ; ), or the Fennoscandian Peninsula, is a peninsula in Europe which includes the Scandinavian and Kola peninsulas, mainland Finland, and Karelia. Administratively, this roughly encompasses the mainlands of Finland, Norway and Sweden, as well as Murmansk Oblast, much of the Republic of Karelia, and parts of northern Leningrad Oblast in Russia.
Its name comes from the Latin words Fennia (Finland) and Scandia (Scandinavia). The term was first used by the Finnish geologist Wilhelm Ramsay in 1898.
Geologically, the area is distinct because its bedrock is Archean granite and gneiss with very little limestone, in contrast to adjacent areas in Europe.
The similar term Fenno-Scandinavia is sometimes used for Fennoscandia. Both terms are sometimes used in English to refer to a cultural or political grouping of Finland with Sweden, Norway and Denmark (the latter country is closely connected culturally and politically, but is not part of the Fennoscandian Peninsula), which is a subset of the Nordic countries.
See also
References
Further reading
Ramsay, W., 1898. Über die Geologische Entwicklung der Halbinsel Kola in der Quartärzeit. Fennia 16 (1), 151 p.
External links
Geological Map of the Fennoscandian Shield
The Fennoscandian Shield Within Fennoscandia
Archean geology
Geology of Finland
Landforms of Karelia
Geology of Norway
Geology of Sweden
Peninsulas of Europe
1890s neologisms | 0.768534 | 0.995372 | 0.764977 |
Tellurocracy | Tellurocracy (from and ) is a concept proposed by Aleksandr Dugin to describe a type of civilization or state system that is defined by the development of land territories and consistent penetration into inland territories. Tellurocratic states possess a set state-territory in which the state-forming ethnic majority lives, around this territory further land expansion occurs. Tellurocracy is conceived of as an antonym to thalassocracy.
Most states display an amalgam of tellurocratic and thalassocratic features. In political geography, geopolitics and geo-economics, the term is used to explain the power of a country through its control over land. For example, prior to their merger, the Sultanate of Muscat was thalassocratic, but the Imamate of Oman was landlocked and purely tellurocratic. It could be suggested that most or all landlocked states are tellurocracies.
Defining tellurocracy
Tellurocracies are generally not purely tellurocratic. In particular, most large tellurocracies have coastlines and not just inland territories, unlike thalassocracies, which historically would generally only have coastlines, and not inland territories. This makes it difficult to define what exactly a tellurocracy is.
For example, the Mongols attempted to conquer Japan on multiple occasions. As well, the Russian Empire conquered Russian America (now Alaska) after it reached a point where it could no longer expand eastward by land. Likewise, the United States acquired Alaska and incorporated many islands and the Panama Canal Zone after it could no longer expand westward. It is also worth noting that the largely tellurocratic, continental Australia, founded as a group of thalassocratic colonies, now holds its own island territories outside of its mainland, such as Christmas Island.
Historical tellurocracies
Many empires of antiquity are noted for being more tellurocratic than their rivals, such as the early Roman Republic in opposition to its rival Carthaginian Empire, which later as the Roman and Byzantine Empires became a rather thalassocratic, yet still quite tellurocratic rival to the quite purely tellurocratic Parthian and Sasanian Empires.
Dugin's theory
In Aleksandr Dugin's theory, the following civilizational characteristics are traditionally attributed to tellurocracies: a sedentary lifestyle (not excluding migratory colonization), conservatism, the permanence of legal norms, the presence of a powerful bureaucratic apparatus and central authority, strong infantry, but a weak fleet. Tellurocracy is traditionally attributed to Eurasian states such as the Qing Empire, the Mongol Empire, and the Mughal Empire, although some tellurocracies, such as the early United States and the Brazilian Empire, have come into being elsewhere.
In practice, not all of these qualities are always present. Moreover, certain peoples and states evolve over time in one direction or another. Russia before the Russian Empire was a typical tellurocratic state. After Emperor Peter I, there was a gradual increase in the thalassocratic characteristics of the Russian Empire and then the Soviet Union, which turned into one of the largest naval powers. The British Empire, on the contrary, was for a long time a small, largely thalassocratic state outside of its home islands, but during the nineteenth and twentieth centuries it increased its tellurocratic characteristics (expansion into the Australian Outback and inland Africa, etc.).
Dugin based his concept on the works of the crown jurist of the Third Reich and theorist of geopolitics, Carl Schmitt. He associates tellurocracy with Eurasianism, in contrast to a perceived association of thalassocracy with Atlanticism.
Notes
References
"The Sea Against the Earth" (in Russian)
"International tension between East and West and the confrontation of the Earth and the Sea" (in Russian)
Forms of government
Empires
Former empires
National Bolshevism
Eurasianism
Geopolitical terminology
Political theories
International relations theory
Landlocked countries | 0.776771 | 0.984786 | 0.764953 |
Regional organization | Regional organizations (ROs) are, in a sense, international organizations (IOs), as they incorporate international membership and encompass geopolitical entities that operationally transcend a single nation state. However, their membership is characterized by boundaries and demarcations characteristic to a defined and unique geography, such as continents, or geopolitics, such as economic blocs. They have been established to foster cooperation and political and economic integration or dialogue among states or entities within a restrictive geographical or geopolitical boundary. They both reflect common patterns of development and history that have been fostered since the end of World War II as well as the fragmentation inherent in globalization, which is why their institutional characteristics vary from loose cooperation to formal regional integration. Most ROs tend to work alongside well-established multilateral organizations such as the United Nations. While in many instances a regional organization is simply referred to as an international organization, in many others it makes sense to use the term regional organization to stress the more limited scope of a particular membership.
Examples of ROs include, amongst others, the African Union (AU), Association of Southeast Asian Nations (ASEAN), Arab League (AL), Arab Maghreb Union (AMU), Caribbean Community (CARICOM), Council of Europe (CoE), Eurasian Economic Union (EAEU), European Union (EU), South Asian Association for Regional Cooperation (SAARC), Shanghai Cooperation Organisation, Asian-African Legal Consultative Organization (AALCO), Union for the Mediterranean (UfM), Union of South American Nations (USAN).
See also
International organization
List of intergovernmental organizations
List of regional organizations by population
List of trade blocs
Regional Economic Communities
Regional integration
Supranational union
References
Further reading
Tanja A. Börzel and Thomas Risse (2016), The Oxford Handbook of Comparative Regionalism. Oxford: Oxford University Press.
Rodrigo Tavares (2009), Regional Security: The Capacity of International Organizations. London and New York: Routledge.
International relations
Organization | 0.770717 | 0.992518 | 0.76495 |
Anarcho-primitivism | Anarcho-primitivism, also known as anti-civilization anarchism, is an anarchist critique of civilization that advocates a return to non-civilized ways of life through deindustrialization, abolition of the division of labor or specialization, abandonment of large-scale organization and all technology other than prehistoric technology, and the dissolution of agriculture. Anarcho-primitivists critique the origins and alleged progress of the Industrial Revolution and industrial society. Most Anarcho-primitivists advocate for a tribal-like way of life while some see an even simpler lifestyle as beneficial. According to anarcho-primitivists, the shift from hunter-gatherer to agricultural subsistence during the Neolithic Revolution gave rise to coercion, social alienation, and social stratification.
Anarcho-primitivism argues that civilization is at the root of societal and environmental problems. Primitivists also consider domestication, technology and language to cause social alienation from "authentic reality". As a result, they propose the abolition of civilization and a return to a hunter-gatherer lifestyle.
History
Roots
The roots of primitivism lie in Enlightenment philosophy and the critical theory of the Frankfurt School. The early-modern philosopher Jean-Jacques Rousseau blamed agriculture and cooperation for the development of social inequality and for causing habitat destruction. In his Discourse on Inequality, Rousseau depicted the state of nature as a "primitivist utopia"; however, he stopped short of advocating a return to it. Instead, he called for political institutions to be recreated anew, in harmony with nature and without the artificiality of modern civilization. Later, the critical theorist Max Horkheimer argued that environmental degradation stemmed directly from social oppression, which had vested all value in labor and consequently caused widespread alienation.
Development
The modern school of anarcho-primitivism was primarily developed by John Zerzan, whose work was released at a time when green anarchist theories of social and deep ecology were beginning to attract interest. Primitivism, as outlined in Zerzan's work, first gained popularity as enthusiasm in deep ecology began to wane.
Zerzan claimed that pre-civilization societies were inherently superior to modern civilization and that the move towards agriculture and the increasing use of technology had resulted in the alienation and oppression of humankind. Zerzan argued that under civilization, humans and other species have undergone domestication, which stripped them of their agency and subjected them to control by capitalism. He also claimed that language, mathematics and art had caused alienation, as they replaced "authentic reality" with an abstracted representation of reality. In order to counteract such issues, Zerzan proposed that humanity return to a state of nature, which he believed would increase social equality and individual autonomy by abolishing private property, organized violence and the division of labour.
Primitivist thinker Paul Shepard also criticized domestication, which he believed had devalued non-human life and reduced human life to their labor and property. Other primitivist authors have drawn different conclusions to Zerzan on the origins of alienation, with John Fillis blaming technology and Richard Heinberg claiming it to be a result of addiction psychology.
Adoption and practice
Primitivist ideas were taken up by the eco-terrorist Ted Kaczynski, although he has been repeatedly criticised for his violent means by more pacifistic anarcho-primitivists, who instead advocate for non-violent forms of direct action. Primitivist concepts have also taken root within the philosophy of deep ecology, inspiring the direct actions of groups such as Earth First!. Another radical environmentalist group, the Earth Liberation Front (ELF), was directly influenced by anarcho-primitivism and its calls for rewilding.
Primitivists and green anarchists have adopted the concept of ecological rewilding as part of their practice, i.e., using reclaimed skills and methods to work towards a sustainable future while undoing institutions of civilization.
Anarcho-primitivist periodicals include Green Anarchy and Species Traitor. The former, self-described as an "anti-civilization journal of theory and action" and printed in Eugene, Oregon, was first published in 2000 and expanded from a 16-page newsprint tabloid to a 76-page magazine covering monkeywrenching topics such as pipeline sabotage and animal liberation. Species Traitor, edited by Kevin Tucker, is self-described as "an insurrectionary anarcho-primitivist journal", with essays against literacy and for hunter gatherer societies. Adjacent periodicals include the radical environmental journal Earth First!
Criticisms
A common criticism is of hypocrisy, i.e. that people rejecting civilization typically maintain a civilized lifestyle themselves, often while still using the very industrial technology that they oppose in order to spread their message. Activist writer Derrick Jensen counters that this criticism merely resorts to an ad hominem argument, attacking individuals but not the actual validity of their beliefs. He further responds that working to entirely avoid such hypocrisy is ineffective, self-serving, and a convenient misdirection of activist energies. Primitivist John Zerzan admits that living with this hypocrisy is a necessary evil for continuing to contribute to the larger intellectual conversation.
Wolfi Landstreicher and Jason McQuinn, post-leftists, have both criticized the romanticized exaggerations of indigenous societies and the pseudoscientific (and even mystical) appeal to nature they perceive in anarcho-primitivist ideology and deep ecology.
Ted Kaczynski also argued that anarcho-primitivists have exaggerated the short working week of primitive society, pointing out that they examine only the process of food extraction and not the processing of food, the creation of fire, and childcare, which together add up to over 40 hours a week.
See also
Abecedarians, opposed language on religious grounds
Agrarian socialism
Khmer Rouge
Anti-modernization
Back-to-the-land movement
Deep ecology
Degrowth
Deindustrialization
Doomer
Earth liberation
Eco-communalism
Ecofeminism
Ecofascism
Environmental ethics
Evolutionary psychology
Freedomites
Green anarchism
Green Anarchy
Hunter-gatherer
Idea of Progress
Jacques Camatte
Neo-Luddism
Ted Kaczynski, neo-Luddite and domestic terrorist
Neo-tribalism
Noble savage
Post-left anarchy
Primitive communism
Rewilding (conservation biology)
Romanticism
Solarpunk
State of nature
Survivalism
Year Zero (political notion)
National Anarchism
Individualists Tending to the Wild
Ecoterrorism
Notes
Bibliography
Further reading
External links
Primitivism
Criticism of science
Cultural anthropology
Political ideologies
Post-left anarchism
Simple living
Social philosophy
Syncretic political movements | 0.768094 | 0.9959 | 0.764945 |
Race (human categorization) | Race is a categorization of humans based on shared physical or social qualities into groups generally viewed as distinct within a given society. The term came into common usage during the 16th century, when it was used to refer to groups of various kinds, including those characterized by close kinship relations. By the 17th century, the term began to refer to physical (phenotypical) traits, and then later to national affiliations. Modern science regards race as a social construct, an identity which is assigned based on rules made by society. While partly based on physical similarities within groups, race does not have an inherent physical or biological meaning. The concept of race is foundational to racism, the belief that humans can be divided based on the superiority of one race over another.
Social conceptions and groupings of races have varied over time, often involving folk taxonomies that define essential types of individuals based on perceived traits. Modern scientists consider such biological essentialism obsolete, and generally discourage racial explanations for collective differentiation in both physical and behavioral traits.
Even though there is a broad scientific agreement that essentialist and typological conceptions of race are untenable, scientists around the world continue to conceptualize race in widely differing ways. While some researchers continue to use the concept of race to make distinctions among fuzzy sets of traits or observable differences in behavior, others in the scientific community suggest that the idea of race is inherently naive or simplistic. Still others argue that, among humans, race has no taxonomic significance because all living humans belong to the same subspecies, Homo sapiens sapiens.
Since the second half of the 20th century, race has been associated with discredited theories of scientific racism, and has become increasingly seen as a largely pseudoscientific system of classification. Although still used in general contexts, race has often been replaced by less ambiguous and/or loaded terms: populations, people(s), ethnic groups, or communities, depending on context. Its use in genetics was formally renounced by the U.S. National Academies of Sciences, Engineering, and Medicine in 2023.
Defining race
Modern scholarship views racial categories as socially constructed, that is, race is not intrinsic to human beings but rather an identity created, often by socially dominant groups, to establish meaning in a social context. Different cultures define different racial groups, often focused on the largest groups of social relevance, and these definitions can change over time.
Historical race concepts have included a wide variety of schemes to divide local or worldwide populations into races and sub-races. Across the world, different organizations and societies choose to disambiguate race to different extents:
In South Africa, the Population Registration Act, 1950 recognized only White, Black, and Coloured, with Indians added later.
The government of Myanmar recognizes eight "major national ethnic races".
The Brazilian census classifies people into brancos (Whites), pardos (multiracial), pretos (Blacks), amarelos (Asians), and indigenous (see Race and ethnicity in Brazil), though many people use different terms to identify themselves.
Legal definitions of whiteness in the United States used before the civil rights movement were often challenged for specific groups.
Furthermore, the United States Census Bureau proposed but then withdrew plans to add a new category to classify Middle Eastern and North African peoples in the 2020 U.S. census, due to a dispute over whether this classification should be considered a white ethnicity or a separate race.
The establishment of racial boundaries often involves the subjugation of groups defined as racially inferior, as in the one-drop rule used in the 19th-century United States to exclude those with any amount of African ancestry from the dominant racial grouping, defined as "white". Such racial identities reflect the cultural attitudes of imperial powers dominant during the age of European colonial expansion. This view rejects the notion that race is biologically defined.
According to geneticist David Reich, "while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today's racial constructs are real". In response to Reich, a group of 67 scientists from a broad range of disciplines wrote that his concept of race was "flawed" as "the meaning and significance of the groups is produced through social interventions".
Although commonalities in physical traits such as facial features, skin color, and hair texture comprise part of the race concept, this linkage is a social distinction rather than an inherently biological one. Other dimensions of racial groupings include shared history, traditions, and language. For instance, African-American English is a language spoken by many African Americans, especially in areas of the United States where racial segregation exists. Furthermore, people often self-identify as members of a race for political reasons.
When people define and talk about a particular conception of race, they create a social reality through which social categorization is achieved. In this sense, races are said to be social constructs. These constructs develop within various legal, economic, and sociopolitical contexts, and may be the effect, rather than the cause, of major social situations. While race is understood to be a social construct by many, most scholars agree that race has real material effects in the lives of people through institutionalized practices of preference and discrimination.
Socioeconomic factors, in combination with early but enduring views of race, have led to considerable suffering within disadvantaged racial groups. Racial discrimination often coincides with racist mindsets, whereby the individuals and ideologies of one group come to perceive the members of an outgroup as both racially defined and morally inferior. As a result, racial groups possessing relatively little power often find themselves excluded or oppressed, while hegemonic individuals and institutions are charged with holding racist attitudes. Racism has led to many instances of tragedy, including slavery and genocide.
In some countries, law enforcement uses race to profile suspects. This use of racial categories is frequently criticized for perpetuating an outmoded understanding of human biological variation, and promoting stereotypes. Because in some societies racial groupings correspond closely with patterns of social stratification, for social scientists studying social inequality, race can be a significant variable. As sociological factors, racial categories may in part reflect subjective attributions, self-identities, and social institutions.
Scholars continue to debate the degrees to which racial categories are biologically warranted and socially constructed. For example, in 2008, John Hartigan Jr. argued for a view of race that focused primarily on culture, but which does not ignore the potential relevance of biology or genetics. Accordingly, the racial paradigms employed in different disciplines vary in their emphasis on biological reduction as contrasted with societal construction.
In the social sciences, theoretical frameworks such as racial formation theory and critical race theory investigate implications of race as social construction by exploring how the images, ideas and assumptions of race are expressed in everyday life. A large body of scholarship has traced the relationships between the historical, social production of race in legal and criminal language, and their effects on the policing and disproportionate incarceration of certain groups.
Historical origins of racial classification
Groups of humans have always identified themselves as distinct from neighboring groups, but such differences have not always been understood to be natural, immutable and global. These features are the distinguishing features of how the concept of race is used today. In this way the idea of race as we understand it today came about during the historical process of exploration and conquest which brought Europeans into contact with groups from different continents, and of the ideology of classification and typology found in the natural sciences. The term race was often used in a general biological taxonomic sense, starting from the 19th century, to denote genetically differentiated human populations defined by phenotype.
The modern concept of race emerged as a product of the colonial enterprises of European powers from the 16th to 18th centuries which identified race in terms of skin color and physical differences. Author Rebecca F. Kennedy argues that the Greeks and Romans would have found such concepts confusing in relation to their own systems of classification. According to Bancel et al., the epistemological moment where the modern concept of race was invented and rationalized lies somewhere between 1730 and 1790.
Colonialism
According to Smedley and Marks the European concept of "race", along with many of the ideas now associated with the term, arose at the time of the scientific revolution, which introduced and privileged the study of natural kinds, and the age of European imperialism and colonization which established political relations between Europeans and peoples with distinct cultural and political traditions. As Europeans encountered people from different parts of the world, they speculated about the physical, social, and cultural differences among various human groups. The rise of the Atlantic slave trade, which gradually displaced an earlier trade in slaves from throughout the world, created a further incentive to categorize human groups in order to justify the subordination of African slaves.
Drawing on sources from classical antiquity and upon their own internal interactions – for example, the hostility between the English and Irish powerfully influenced early European thinking about the differences between people – Europeans began to sort themselves and others into groups based on physical appearance, and to attribute to individuals belonging to these groups behaviors and capacities which were claimed to be deeply ingrained. A set of folk beliefs took hold that linked inherited physical differences between groups to inherited intellectual, behavioral, and moral qualities. Similar ideas can be found in other cultures, for example in China, where a concept often translated as "race" was associated with supposed common descent from the Yellow Emperor, and used to stress the unity of ethnic groups in China. Brutal conflicts between ethnic groups have existed throughout history and across the world.
Early taxonomic models
The first post-Graeco-Roman published classification of humans into distinct races seems to be François Bernier's Nouvelle division de la terre par les différents espèces ou races qui l'habitent ("New division of Earth by the different species or races which inhabit it"), published in 1684. In the 18th century the differences among human groups became a focus of scientific investigation. But the scientific classification of phenotypic variation was frequently coupled with racist ideas about innate predispositions of different groups, always attributing the most desirable features to the White, European race and arranging the other races along a continuum of progressively undesirable attributes. The 1735 classification of Carl Linnaeus, inventor of zoological taxonomy, divided the human species Homo sapiens into continental varieties of europaeus, asiaticus, americanus, and afer, each associated with a different humour: sanguine, melancholic, choleric, and phlegmatic, respectively. Homo sapiens europaeus was described as active, acute, and adventurous, whereas Homo sapiens afer was said to be crafty, lazy, and careless.
The 1775 treatise "The Natural Varieties of Mankind", by Johann Friedrich Blumenbach proposed five major divisions: the Caucasoid race, the Mongoloid race, the Ethiopian race (later termed Negroid), the American Indian race, and the Malayan race, but he did not propose any hierarchy among the races. Blumenbach also noted the graded transition in appearances from one group to adjacent groups and suggested that "one variety of mankind does so sensibly pass into the other, that you cannot mark out the limits between them".
From the 17th through 19th centuries, the merging of folk beliefs about group differences with scientific explanations of those differences produced what Smedley has called an "ideology of race". According to this ideology, races are primordial, natural, enduring and distinct. It was further argued that some groups may be the result of mixture between formerly distinct populations, but that careful study could distinguish the ancestral races that had combined to produce admixed groups. Subsequent influential classifications by Georges Buffon, Petrus Camper and Christoph Meiners all classified "Negros" as inferior to Europeans. In the United States the racial theories of Thomas Jefferson were influential. He saw Africans as inferior to Whites especially in regards to their intellect, and imbued with unnatural sexual appetites, but described Native Americans as equals to whites.
Polygenism vs monogenism
In the last two decades of the 18th century, the theory of polygenism, the belief that different races had evolved separately in each continent and shared no common ancestor, was advocated in England by historian Edward Long and anatomist Charles White, in Germany by ethnographers Christoph Meiners and Georg Forster, and in France by Julien-Joseph Virey. In the US, Samuel George Morton, Josiah Nott and Louis Agassiz promoted this theory in the mid-19th century. Polygenism was popular and most widespread in the 19th century, culminating in the founding of the Anthropological Society of London (1863), which, during the period of the American Civil War, broke away from the Ethnological Society of London and its monogenic stance, their underlined difference lying, relevantly, in the so-called "Negro question": a substantial racist view by the former, and a more liberal view on race by the latter.
Modern scholarship
Models of human evolution
Today, all humans are classified as belonging to the species Homo sapiens. However, this is not the first species of the subfamily Homininae: the first species of the genus Homo, Homo habilis, evolved in East Africa at least 2 million years ago, and members of this species populated different parts of Africa in a relatively short time. Homo erectus evolved more than 1.8 million years ago, and by 1.5 million years ago had spread throughout Europe and Asia. Virtually all physical anthropologists agree that Archaic Homo sapiens (a group including the possible species H. heidelbergensis, H. rhodesiensis, and H. neanderthalensis) evolved out of African H. erectus or H. ergaster. Anthropologists support the idea that anatomically modern humans (Homo sapiens) evolved in North or East Africa from an archaic human species such as H. heidelbergensis and then migrated out of Africa, mixing with and replacing H. heidelbergensis and H. neanderthalensis populations throughout Europe and Asia, and H. rhodesiensis populations in Sub-Saharan Africa (a combination of the Out of Africa and Multiregional models).
Biological classification
In the early 20th century, many anthropologists taught that race was an entirely biological phenomenon and that this was core to a person's behavior and identity, a position commonly called racial essentialism. This, coupled with a belief that linguistic, cultural, and social groups fundamentally existed along racial lines, formed the basis of what is now called scientific racism. After the Nazi eugenics program, along with the rise of anti-colonial movements, racial essentialism lost widespread popularity. New studies of culture and the fledgling field of population genetics undermined the scientific standing of racial essentialism, leading race anthropologists to revise their conclusions about the sources of phenotypic variation. A significant number of modern anthropologists and biologists in the West came to view race as an invalid genetic or biological designation.
The first to challenge the concept of race on empirical grounds were the anthropologists Franz Boas, who provided evidence of phenotypic plasticity due to environmental factors, and Ashley Montagu, who relied on evidence from genetics. E. O. Wilson then challenged the concept from the perspective of general animal systematics, and further rejected the claim that "races" were equivalent to "subspecies".
Human genetic variation is predominantly within races, continuous, and complex in structure, which is inconsistent with the concept of genetic human races, a point stressed by the biological anthropologist Jonathan Marks.
Subspecies
The term race in biology is used with caution because it can be ambiguous. Generally, when it is used it is effectively a synonym of subspecies. (For animals, the only taxonomic unit below the species level is usually the subspecies; there are narrower infraspecific ranks in botany, and race does not correspond directly with any of them.) Traditionally, subspecies are seen as geographically isolated and genetically differentiated populations. Studies of human genetic variation show that human populations are not geographically isolated, and their genetic differences are far smaller than those among comparable subspecies.
In 1978, Sewall Wright suggested that human populations that have long inhabited separated parts of the world should, in general, be considered different subspecies by the criterion that most individuals of such populations can be allocated correctly by inspection. Wright argued: "It does not require a trained anthropologist to classify an array of Englishmen, West Africans, and Chinese with 100% accuracy by features, skin color, and type of hair despite so much variability within each of these groups that every individual can easily be distinguished from every other." While in practice subspecies are often defined by easily observable physical appearance, there is not necessarily any evolutionary significance to these observed differences, so this form of classification has become less acceptable to evolutionary biologists. Likewise this typological approach to race is generally regarded as discredited by biologists and anthropologists.
Ancestrally differentiated populations (clades)
In 2000, philosopher Robin Andreasen proposed that cladistics might be used to categorize human races biologically, and that races can be both biologically real and socially constructed. Andreasen cited tree diagrams of relative genetic distances among populations published by Luigi Cavalli-Sforza as the basis for a phylogenetic tree of human races (p. 661). Biological anthropologist Jonathan Marks (2008) responded by arguing that Andreasen had misinterpreted the genetic literature: "These trees are phenetic (based on similarity), rather than cladistic (based on monophyletic descent, that is from a series of unique ancestors)." Evolutionary biologist Alan Templeton (2013) argued that multiple lines of evidence falsify the idea of a phylogenetic tree structure to human genetic diversity, and confirm the presence of gene flow among populations. Marks, Templeton, and Cavalli-Sforza all conclude that genetics does not provide evidence of human races.
Previously, anthropologists Lieberman and Jackson (1995) had also critiqued the use of cladistics to support concepts of race. They argued that "the molecular and biochemical proponents of this model explicitly use racial categories in their initial grouping of samples". For example, the large and highly diverse macroethnic groups of East Indians, North Africans, and Europeans are presumptively grouped as Caucasians prior to the analysis of their DNA variation. They argued that this a priori grouping limits and skews interpretations, obscures other lineage relationships, deemphasizes the impact of more immediate clinal environmental factors on genomic diversity, and can cloud our understanding of the true patterns of affinity.
In 2015, Keith Hunley, Graciela Cabana, and Jeffrey Long analyzed the Human Genome Diversity Project sample of 1,037 individuals in 52 populations, finding that diversity among non-African populations is the result of a serial founder effect process, with non-African populations as a whole nested among African populations, that "some African populations are equally related to other African populations and to non-African populations", and that "outside of Africa, regional groupings of populations are nested inside one another, and many of them are not monophyletic". Earlier research had also suggested that there has always been considerable gene flow between human populations, meaning that human population groups are not monophyletic. Rachel Caspari has argued that, since no groups currently regarded as races are monophyletic, by definition none of these groups can be clades.
Clines
One crucial innovation in reconceptualizing genotypic and phenotypic variation was the anthropologist C. Loring Brace's observation that such variations, insofar as they are affected by natural selection, slow migration, or genetic drift, are distributed along geographic gradations or clines. For example, Brace observed that skin color in Europe and Africa grades continuously along geographic lines rather than breaking at discrete racial boundaries, in part due to isolation by distance. This point called attention to a problem common to phenotype-based descriptions of races (for example, those based on hair texture and skin color): they ignore a host of other similarities and differences (for example, blood type) that do not correlate highly with the markers for race. Thus, anthropologist Frank Livingstone's conclusion was that, since clines cross racial boundaries, "there are no races, only clines".
In a response to Livingstone, Theodore Dobzhansky argued that when talking about race one must be attentive to how the term is being used: "I agree with Dr. Livingstone that if races have to be 'discrete units', then there are no races, and if 'race' is used as an 'explanation' of the human variability, rather than vice versa, then the explanation is invalid." He further argued that one could use the term race if one distinguished between "race differences" and "the race concept". The former refers to any distinction in gene frequencies between populations; the latter is "a matter of judgment". He further observed that even when there is clinal variation: "Race differences are objectively ascertainable biological phenomena ... but it does not follow that racially distinct populations must be given racial (or subspecific) labels." In short, Livingstone and Dobzhansky agree that there are genetic differences among human beings; they also agree that the use of the race concept to classify people, and how the race concept is used, is a matter of social convention. They differ on whether the race concept remains a meaningful and useful social convention.
In 1964, the biologists Paul Ehrlich and Richard Holm pointed out cases where two or more clines are distributed discordantly: for example, melanin is distributed in a decreasing pattern from the equator north and south, while the frequency of the haplotype for beta-S hemoglobin radiates outward from specific geographical points in Africa. As the anthropologists Leonard Lieberman and Fatimah Linda Jackson observed, "Discordant patterns of heterogeneity falsify any description of a population as if it were genotypically or even phenotypically homogeneous".
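A small sketch can make the idea of discordance concrete. In the hypothetical example below (the population names, coordinates, and trait functions are all invented for illustration), one trait follows a latitude cline while the other decays from a single geographic origin, so the two traits rank the same populations differently.

```python
import math

# Hypothetical illustration of two discordant clines: one trait follows a
# latitude cline, the other decays with distance from a single origin point.
# All names, coordinates, and functional forms here are invented.

populations = {
    # name: (latitude, longitude), deliberately coarse
    "equatorial_west_africa": (0, 0),
    "north_africa": (30, 10),
    "northern_europe": (60, 20),
    "arabia": (25, 45),
    "south_asia": (20, 78),
}

HBS_ORIGIN = (10, 0)  # assumed focal point of a beta-S haplotype

def melanin_index(lat, lon):
    # Decreases smoothly with distance from the equator (latitude cline);
    # longitude is deliberately ignored.
    return max(0.0, 1.0 - abs(lat) / 70.0)

def hbs_frequency(lat, lon):
    # Decays with distance from a single geographic origin (radial cline).
    return 0.15 * math.exp(-math.dist((lat, lon), HBS_ORIGIN) / 20.0)

for name, (lat, lon) in populations.items():
    print(f"{name:24s} melanin={melanin_index(lat, lon):.2f} "
          f"HbS={hbs_frequency(lat, lon):.3f}")
# The two columns order the populations differently, so a partition based on
# pigmentation and one based on HbS frequency are discordant.
```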
Patterns such as those seen in human physical and genetic variation as described above have led to the consequence that the number and geographic location of any described races are highly dependent on the importance attributed to, and the number of, the traits considered. A skin-lightening mutation, estimated to have occurred 20,000 to 50,000 years ago, partially accounts for the appearance of light skin in people who migrated out of Africa northward into what is now Europe, while East Asians owe their relatively light skin to different mutations. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, since traits and gene frequencies do not always correspond to the same geographical location.
Genetically differentiated populations
Another way to look at differences between populations is to measure genetic differences rather than physical differences between groups. The mid-20th-century anthropologist William C. Boyd defined race as: "A population which differs significantly from other populations in regard to the frequency of one or more of the genes it possesses. It is an arbitrary matter which, and how many, gene loci we choose to consider as a significant 'constellation'". Leonard Lieberman and Rodney Kirk have pointed out that "the paramount weakness of this statement is that if one gene can distinguish races then the number of races is as numerous as the number of human couples reproducing". Moreover, the anthropologist Stephen Molnar has suggested that the discordance of clines inevitably results in a multiplication of races that renders the concept itself useless. The Human Genome Project states "People who have lived in the same geographic region for many generations may have some alleles in common, but no allele will be found in all members of one population and in no members of any other." Massimo Pigliucci and Jonathan Kaplan argue that human races do exist, and that they correspond to the genetic classification of ecotypes, but that real human races do not correspond very much, if at all, to folk racial categories. In contrast, Walsh & Yun reviewed the literature in 2011 and reported: "Genetic studies using very few chromosomal loci find that genetic polymorphisms divide human populations into clusters with almost 100 percent accuracy and that they correspond to the traditional anthropological categories."
Some biologists argue that racial categories correlate with biological traits (e.g. phenotype), and that certain genetic markers have varying frequencies among human populations, some of which correspond more or less to traditional racial groupings.
Distribution of genetic variation
The distribution of genetic variants within and among human populations is impossible to describe succinctly because of the difficulty of defining a population, the clinal nature of variation, and heterogeneity across the genome (Long and Kittles 2003). In general, however, an average of 85% of statistical genetic variation exists within local populations, ≈7% is between local populations within the same continent, and ≈8% of variation occurs between large groups living on different continents. The recent African origin theory for humans would predict that in Africa there exists a great deal more diversity than elsewhere, and that diversity should decrease the further from Africa a population is sampled. Hence, the 85% average figure is misleading: Long and Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 60% of human genetic diversity exists in the least diverse population they analyzed (the Surui, an indigenous Amazonian population). Statistical analysis that takes this difference into account confirms previous findings that "Western-based racial classifications have no taxonomic significance".
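The within/between apportionment quoted above rests on a decomposition of expected heterozygosity. The following single-locus sketch, using invented allele frequencies for three populations, shows the arithmetic behind an F_ST-style decomposition; figures like 85%/7%/8% come from averaging such decompositions over many loci and a hierarchy of groupings.

```python
# Minimal sketch of how a within/between apportionment is computed for one
# biallelic locus, via an F_ST-style decomposition of heterozygosity.
# The three population frequencies below are invented.

pop_freqs = [0.50, 0.60, 0.35]  # allele frequency in each of three populations
weights = [1 / 3] * 3           # equal population sizes, for simplicity

# H_S: average expected heterozygosity within populations.
h_s = sum(w * 2 * p * (1 - p) for w, p in zip(weights, pop_freqs))

# H_T: expected heterozygosity of the pooled (total) population.
p_bar = sum(w * p for w, p in zip(weights, pop_freqs))
h_t = 2 * p_bar * (1 - p_bar)

# F_ST: fraction of the variation attributable to between-group differences.
f_st = (h_t - h_s) / h_t
print(f"within-population share : {h_s / h_t:.1%}")
print(f"between-population share: {f_st:.1%}")
```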
Cluster analysis
A 2002 study of random biallelic genetic loci found little to no evidence that humans were divided into distinct biological groups.
In his 2003 paper, "Human Genetic Diversity: Lewontin's Fallacy", A. W. F. Edwards argued that rather than using a locus-by-locus analysis of variation to derive taxonomy, it is possible to construct a human classification system based on characteristic genetic patterns, or clusters inferred from multilocus genetic data. Subsequent geographically based human studies have shown that such genetic clusters can be derived from the analysis of a large number of loci, and that these clusters can assort sampled individuals into groups analogous to traditional continental racial groups. Joanna Mountain and Neil Risch cautioned that while genetic clusters may one day be shown to correspond to phenotypic variations between groups, such assumptions were premature as the relationship between genes and complex traits remains poorly understood. However, Risch denied that such limitations render the analysis useless: "Perhaps just using someone's actual birth year is not a very good way of measuring age. Does that mean we should throw it out? ... Any category you come up with is going to be imperfect, but that doesn't preclude you from using it or the fact that it has utility."
Early human genetic cluster analysis studies were conducted with samples taken from ancestral population groups living at extreme geographic distances from each other. It was thought that such large geographic distances would maximize the genetic variation between the groups sampled in the analysis, and thus maximize the probability of finding cluster patterns unique to each group. In light of the historically recent acceleration of human migration (and correspondingly, human gene flow) on a global scale, further studies were conducted to judge the degree to which genetic cluster analysis can pattern ancestrally identified groups as well as geographically separated groups. One such study looked at a large multiethnic population in the United States, and "detected only modest genetic differentiation between different current geographic locales within each race/ethnicity group. Thus, ancient geographic ancestry, which is highly correlated with self-identified race/ethnicity – as opposed to current residence – is the major determinant of genetic structure in the U.S. population."
Witherspoon et al. (2007) have argued that even when individuals can be reliably assigned to specific population groups, it may still be possible for two randomly chosen individuals from different populations/clusters to be more similar to each other than to a randomly chosen member of their own cluster. They found that many thousands of genetic markers had to be used in order for the answer to the question "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" to be "never". This assumed three population groups separated by large geographic ranges (European, African and East Asian). The entire world population is much more complex, and studying an increasing number of groups would require an increasing number of markers for the same answer. The authors conclude that "caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes", and further: "The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population."
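The dependence of this result on the number of markers can be reproduced with a small simulation. The sketch below (invented parameters: two populations whose allele frequencies differ modestly at every locus) estimates how often a between-population pair of individuals is genetically more similar than a within-population pair, as the number of loci grows.

```python
import random

# Sketch of the Witherspoon et al. question: how often is a between-population
# pair of individuals more similar than a within-population pair? Two
# populations with a modest per-locus frequency difference are simulated;
# all parameter values are illustrative.

random.seed(2)

def genotype(freqs):
    """Diploid allele counts (0, 1, or 2) at each locus."""
    return [sum(random.random() < p for _ in range(2)) for p in freqs]

def distance(g1, g2):
    return sum(abs(a - b) for a, b in zip(g1, g2))

def omega(n_loci, n_ind=30, trials=1000, delta=0.1):
    base = [random.uniform(0.3, 0.7) for _ in range(n_loci)]
    freqs_a = base
    freqs_b = [min(1.0, p + delta) for p in base]  # shifted by delta everywhere
    pop_a = [genotype(freqs_a) for _ in range(n_ind)]
    pop_b = [genotype(freqs_b) for _ in range(n_ind)]
    hits = 0
    for _ in range(trials):
        x, y = random.sample(pop_a, 2)       # within-population pair
        u = random.choice(pop_a)             # between-population pair
        v = random.choice(pop_b)
        if distance(u, v) < distance(x, y):  # between pair more similar
            hits += 1
    return hits / trials

for n_loci in (10, 100, 1000):
    print(f"{n_loci:5d} loci: between-pair more similar {omega(n_loci):.0%} of the time")
# With few loci this happens often; only with very many loci does the
# frequency approach zero, matching the pattern the authors report.
```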
Anthropologists such as C. Loring Brace, the philosophers Jonathan Kaplan and Rasmus Winther, and the geneticist Joseph Graves, have argued that the cluster structure of genetic data is dependent on the initial hypotheses of the researcher and the influence of these hypotheses on the choice of populations to sample. When one samples continental groups, the clusters become continental, but if one had chosen other sampling patterns, the clustering would be different. Weiss and Fullerton have noted that if one sampled only Icelanders, Mayans and Maoris, three distinct clusters would form and all other populations could be described as being clinally composed of admixtures of Maori, Icelandic and Mayan genetic materials. Kaplan and Winther therefore argue that, seen in this way, both Lewontin and Edwards are right in their arguments. They conclude that while racial groups are characterized by different allele frequencies, this does not mean that racial classification is a natural taxonomy of the human species, because multiple other genetic patterns can be found in human populations that crosscut racial distinctions. Moreover, the genomic data underdetermines whether one wishes to see subdivisions (i.e., splitters) or a continuum (i.e., lumpers). Under Kaplan and Winther's view, racial groupings are objective social constructions (see Mills 1998) that have conventional biological reality only insofar as the categories are chosen and constructed for pragmatic scientific reasons. In earlier work, Winther had identified "diversity partitioning" and "clustering analysis" as two separate methodologies, with distinct questions, assumptions, and protocols. Each is also associated with opposing ontological consequences vis-a-vis the metaphysics of race. Philosopher Lisa Gannett has argued that biogeographical ancestry, a concept devised by Mark Shriver and Tony Frudakis, is not an objective measure of the biological aspects of race as Shriver and Frudakis claim it is. She argues that it is actually just a "local category shaped by the U.S. context of its production, especially the forensic aim of being able to predict the race or ethnicity of an unknown suspect based on DNA found at the crime scene".
Clines and clusters in genetic variation
Recent studies of human genetic clustering have included a debate over how genetic variation is organized, with clusters and clines as the main possible orderings. Serre and Pääbo (2004) argued for smooth, clinal genetic variation in ancestral populations even in regions previously considered racially homogeneous, with the apparent gaps turning out to be artifacts of sampling techniques. Rosenberg et al. (2005) disputed this and offered an analysis of the Human Genome Diversity Panel showing that there were small discontinuities in the smooth genetic variation for ancestral populations at the location of geographic barriers such as the Sahara, the oceans, and the Himalayas. Nonetheless, they stated that their findings "should not be taken as evidence of our support of any particular concept of biological race ... Genetic differences among human populations derive mainly from gradations in allele frequencies rather than from distinctive 'diagnostic' genotypes." A later study, using a sample of 40 populations distributed roughly evenly across the Earth's land surface, found that "genetic diversity is distributed in a more clinal pattern when more geographically intermediate populations are sampled".
Guido Barbujani has written that human genetic variation is generally distributed continuously in gradients across much of Earth, and that there is no evidence that genetic boundaries between human populations exist as would be necessary for human races to exist.
Over time, human genetic variation has formed a nested structure that is inconsistent with the concept of races that have evolved independently of one another.
Social constructions
As anthropologists and other evolutionary scientists have shifted away from the language of race to the term population to talk about genetic differences, historians, cultural anthropologists and other social scientists re-conceptualized the term "race" as a cultural category or identity, i.e., a way among many possible ways in which a society chooses to divide its members into categories.
Many social scientists have replaced the word race with the word "ethnicity" to refer to self-identifying groups based on beliefs concerning shared culture, ancestry and history. Alongside empirical and conceptual problems with "race", following the Second World War, evolutionary and social scientists were acutely aware of how beliefs about race had been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum in the 1960s during the civil rights movement in the United States and the emergence of numerous anti-colonial movements worldwide. They thus came to believe that race itself is a social construct: a concept once thought to correspond to an objective reality, but one sustained chiefly by its social functions.
Craig Venter of Celera Genomics and Francis Collins of the National Institutes of Health jointly announced the mapping of the human genome in 2000. Upon examining the data from the genome mapping, Venter realized that although the genetic variation within the human species is on the order of 1–3% (instead of the previously assumed 1%), the types of variations do not support the notion of genetically defined races. Venter said, "Race is a social concept. It's not a scientific one. There are no bright lines (that would stand out), if we could compare all the sequenced genomes of everyone on the planet. ... When we try to apply science to try to sort out these social differences, it all falls apart."
Anthropologist Stephan Palmié has argued that race "is not a thing but a social relation"; or, in the words of Katya Gibel Mevorach, "a metonym", "a human invention whose criteria for differentiation are neither universal nor fixed but have always been used to manage difference". As such, the use of the term "race" itself must be analyzed. Moreover, they argue that biology will not explain why or how people use the idea of race; only history and social relationships will.
Imani Perry has argued that race "is produced by social arrangements and political decision making", and that "race is something that happens, rather than something that is. It is dynamic, but it holds no objective truth." Similarly, in Racial Culture: A Critique (2005), Richard T. Ford argued that while "there is no necessary correspondence between the ascribed identity of race and one's culture or personal sense of self" and "group difference is not intrinsic to members of social groups but rather contingent o[n] the social practices of group identification", the social practices of identity politics may coerce individuals into the "compulsory" enactment of "prewritten racial scripts".
Brazil
Compared to 19th-century United States, 20th-century Brazil was characterized by a perceived relative absence of sharply defined racial groups. According to anthropologist Marvin Harris, this pattern reflects a different history and different social relations.
Race in Brazil was "biologized", but in a way that recognized the difference between ancestry (which determines genotype) and phenotypic differences. There, racial identity was not governed by a rigid descent rule, such as the one-drop rule, as it was in the United States. A Brazilian child was never automatically identified with the racial type of one or both parents, nor were there only a very limited number of categories to choose from, to the extent that full siblings can belong to different racial groups.
Over a dozen racial categories would be recognized in conformity with all the possible combinations of hair color, hair texture, eye color, and skin color. These types grade into each other like the colors of the spectrum, and no one category stands significantly isolated from the rest. That is, race referred preferentially to appearance, not heredity, and appearance is a poor indication of ancestry, because only a few genes are responsible for someone's skin color and other visible traits: a person who is considered white may have more African ancestry than a person who is considered black, and the reverse can also be true about European ancestry. The complexity of racial classifications in Brazil reflects the extent of genetic mixing in Brazilian society, which remains highly, but not strictly, stratified along color lines. Socioeconomic factors also shape racial boundaries, because a minority of pardos, or brown people, are likely to begin declaring themselves white or black if they move upward socially, and to be seen as relatively "whiter" as their perceived social status increases (much as in other regions of Latin America).
Fluidity of racial categories aside, the "biologification" of race in Brazil described above would match contemporary concepts of race in the United States quite closely if Brazilians were required to choose one of the three IBGE census categories (setting aside Asian and Indigenous). Assimilated Amerindians and people with a very high proportion of Amerindian ancestry are usually grouped as caboclos, a subgroup of pardos which roughly translates as both mestizo and hillbilly; for people with a smaller proportion of Amerindian descent, a higher European genetic contribution is expected for them to be classified as pardo. In several genetic studies, people with less than 60–65% European and 5–10% Amerindian ancestry usually cluster with self-reported Afro-Brazilians (6.9% of the population), as do most people with roughly 45% or more sub-Saharan African contribution (on average, Afro-Brazilian DNA has been reported to be about 50% sub-Saharan African, 37% European and 13% Amerindian).
If self-reporting were more consistent with the actual gradation of genetic admixture (that is, if it did not cluster people with balanced degrees of African and non-African ancestry into the black group rather than the multiracial one, unlike elsewhere in Latin America, where people with a high proportion of African descent tend to classify themselves as mixed), even more people would report themselves as white and pardo in Brazil (47.7% and 42.4% of the population as of 2010, respectively), because research suggests the population has on average between 65% and 80% autosomal European ancestry (as well as >35% European mtDNA and >95% European Y-DNA).
From the last decades of the Empire until the 1950s, the proportion of the white population increased significantly as Brazil welcomed 5.5 million immigrants between 1821 and 1932, not far behind its neighbor Argentina with 6.4 million; Brazil also received more European immigrants during its colonial history than the United States, with 700,000 Europeans settling in Brazil between 1500 and 1760 compared with 530,000 in the United States over the same period. Thus, the historical construction of race in Brazilian society has dealt primarily with gradations between persons of majority European ancestry and small minority groups with a lower proportion of it.
European Union
The European Union uses the terms racial origin and ethnic origin synonymously in its documents; according to the Council of the European Union, "the use of the term 'racial origin' in this directive does not imply an acceptance of such [racial] theories". Haney López warns that using "race" as a category within the law tends to legitimize its existence in the popular imagination. In the diverse geographic context of Europe, ethnicity and ethnic origin are arguably more resonant and are less encumbered by the ideological baggage associated with "race". In the European context, the historical resonance of "race" underscores its problematic nature. In some states, it is strongly associated with laws promulgated by the Nazi and Fascist governments in Europe during the 1930s and 1940s. Indeed, in 1996, the European Parliament adopted a resolution stating that "the term should therefore be avoided in all official texts".
The concept of racial origin relies on the notion that human beings can be separated into biologically distinct "races", an idea generally rejected by the scientific community. Since all human beings belong to the same species, the European Commission against Racism and Intolerance (ECRI) rejects theories based on the existence of different "races". However, in its Recommendation, ECRI uses the term in order to ensure that those persons who are generally and erroneously perceived as belonging to "another race" are not excluded from the protection provided for by the legislation. The law thus claims to reject the existence of "race", yet penalizes situations where someone is treated less favourably on this ground.
United States
The immigrants to the United States came from every region of Europe, Africa, and Asia. They mixed among themselves and with the indigenous inhabitants of the continent. In the United States most people who self-identify as African American have some European ancestors, while many people who identify as European American have some African or Amerindian ancestors.
Since the early history of the United States, Amerindians, African Americans, and European Americans have been classified as belonging to different races. Efforts to track mixing between groups led to a proliferation of categories, such as mulatto and octoroon. The criteria for membership in these races diverged in the late 19th century. During the Reconstruction era, increasing numbers of Americans began to consider anyone with "one drop" of known "Black blood" to be Black, regardless of appearance. By the early 20th century, this notion was made statutory in many states. Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood quantum). To be White one had to have perceived "pure" White ancestry. The one-drop rule or hypodescent rule refers to the convention of defining a person as racially black if he or she has any known African ancestry; it meant that those who were mixed race but had some discernible African ancestry were defined as black. The one-drop rule is specific both to those with African ancestry and to the United States, making it a distinctively African-American experience.
The decennial censuses conducted since 1790 in the United States created an incentive to establish racial categories and fit people into these categories.
The term "Hispanic" as an ethnonym emerged in the 20th century with the rise of migration of laborers from the Spanish-speaking countries of Latin America to the United States. Today, the word "Latino" is often used as a synonym for "Hispanic". The definitions of both terms are non-race specific, and include people who consider themselves to be of distinct races (Black, White, Amerindian, Asian, and mixed groups). However, there is a common misconception in the US that Hispanic/Latino is a race or sometimes even that national origins such as Mexican, Cuban, Colombian, Salvadoran, etc. are races. In contrast to "Latino" or "Hispanic", "Anglo" refers to non-Hispanic White Americans or non-Hispanic European Americans, most of whom speak the English language but are not necessarily of English descent.
Views across disciplines over time
Anthropology
The concept of race classification in physical anthropology lost credibility around the 1960s and is now considered untenable. A 2019 statement by the American Association of Physical Anthropologists declares:

Race does not provide an accurate representation of human biological variation. It was never accurate in the past, and it remains inaccurate when referencing contemporary human populations. Humans are not divided biologically into distinct continental types or racial genetic clusters. Instead, the Western concept of race must be understood as a classification system that emerged from, and in support of, European colonialism, oppression, and discrimination.

Wagner et al. (2017) surveyed 3,286 American anthropologists' views on race and genetics, including both cultural and biological anthropologists. They found a consensus among them that biological races do not exist in humans, but that race does exist insofar as the social experiences of members of different races can have significant effects on health.
Wang, Štrkalj et al. (2003) examined the use of race as a biological concept in research papers published in China's only biological anthropology journal, Acta Anthropologica Sinica. The study showed that the race concept was widely used among Chinese anthropologists. In a 2007 review paper, Štrkalj suggested that the stark contrast of the racial approach between the United States and China was due to the fact that race is a factor for social cohesion among the ethnically diverse people of China, whereas "race" is a very sensitive issue in America and the racial approach is considered to undermine social cohesion – with the result that, in the socio-political context of US academia, scientists are encouraged not to use racial categories, whereas in China they are encouraged to use them.
Lieberman et al. in a 2004 study researched the acceptance of race as a concept among anthropologists in the United States, Canada, the Spanish-speaking areas, Europe, Russia and China. Rejection of race ranged from high to low, with the highest rejection rate in the United States and Canada, a moderate rejection rate in Europe, and the lowest rejection rate in Russia and China. Methods used in the studies reported included questionnaires and content analysis.
Kaszycka et al. (2009) in 2002–2003 surveyed European anthropologists' opinions toward the biological race concept. Three factors – country of academic education, discipline, and age – were found to be significant in differentiating the replies. Those educated in Western Europe, physical anthropologists, and middle-aged persons rejected race more frequently than those educated in Eastern Europe, people in other branches of science, and those from both younger and older generations. "The survey shows that the views on race are sociopolitically (ideologically) influenced and highly dependent on education."
United States
Since the second half of the 20th century, physical anthropology in the United States has moved away from a typological understanding of human biological diversity towards a genomic and population-based perspective. Anthropologists have tended to understand race as a social classification of humans based on phenotype and ancestry as well as cultural factors, as the concept is understood in the social sciences. Since 1932, an increasing number of college textbooks introducing physical anthropology have rejected race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to 1984, thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race. According to one academic journal entry, where 78 percent of the articles in the 1931 Journal of Physical Anthropology employed the term "race" or near synonyms reflecting a bio-race paradigm, only 36 percent did so in 1965, and just 28 percent did in 1996.
A 1998 "Statement on 'Race'" composed by a select committee of anthropologists and issued by the executive board of the American Anthropological Association, which they argue "represents generally the contemporary thinking and scholarly positions of a majority of anthropologists", declares:
An earlier survey, conducted in 1985 by Lieberman and colleagues, asked 1,200 American scientists whether they disagreed with the following proposition: "There are biological races in the species Homo sapiens." Among anthropologists, the percentages disagreeing were:
physical anthropologists: 41%
cultural anthropologists: 53%
Lieberman's study also showed that more women reject the concept of race than men.
The same survey, conducted again in 1999, showed that the number of anthropologists disagreeing with the idea of biological race had risen substantially. The results were as follows:
physical anthropologists: 69%
cultural anthropologists: 80%
A line of research conducted by Cartmill (1998), however, seemed to limit the scope of Lieberman's finding that there was "a significant degree of change in the status of the race concept". Goran Štrkalj has argued that this may be because Lieberman and collaborators had looked at all the members of the American Anthropological Association irrespective of their field of research interest, while Cartmill had looked specifically at biological anthropologists interested in human variation.
In 2007, Ann Morning interviewed over 40 American biologists and anthropologists and found significant disagreements over the nature of race, with no one viewpoint holding a majority among either group. Morning also argues that a third position, "antiessentialism", which holds that race is not a useful concept for biologists, should be introduced into this debate in addition to "constructionism" and "essentialism".
According to the 2000 edition of a popular physical anthropology textbook, forensic anthropologists are overwhelmingly in support of the idea of the basic biological reality of human races. George W. Gill, a forensic physical anthropologist and professor at the University of Wyoming, has said that the idea that race is only skin deep "is simply not true, as any experienced forensic anthropologist will affirm" and "Many morphological features tend to follow geographic boundaries coinciding often with climatic zones. This is not surprising since the selective forces of climate are probably the primary forces of nature that have shaped human races with regard not only to skin color and hair form but also the underlying bony structures of the nose, cheekbones, etc. (For example, more prominent noses humidify air better.)" While he can see good arguments for both sides, the complete denial of the opposing evidence "seems to stem largely from socio-political motivation and not science at all". He also states that many biological anthropologists see races as real, yet "not one introductory textbook of physical anthropology even presents that perspective as a possibility. In a case as flagrant as this, we are not dealing with science but rather with blatant, politically motivated censorship".
In partial response to Gill's statement, Professor of Biological Anthropology C. Loring Brace argues that the reason laymen and biological anthropologists can determine the geographic ancestry of an individual is that biological characteristics are clinally distributed across the planet, which does not translate into the concept of race. The concept of "race" is still sometimes used within forensic anthropology (when analyzing skeletal remains), biomedical research, and race-based medicine. Brace has criticized forensic anthropologists for this, arguing that they should in fact be talking about regional ancestry. He argues that while forensic anthropologists can determine that a skeletal remain comes from a person with ancestors in a specific region of Africa, categorizing those remains as "black" applies a socially constructed category that is only meaningful in the particular social context of the United States and is not itself scientifically valid.
Biology, anatomy, and medicine
In the same 1985 survey, 16% of the surveyed biologists and 36% of the surveyed developmental psychologists disagreed with the proposition: "There are biological races in the species Homo sapiens."
The authors of the study also examined 77 college textbooks in biology and 69 in physical anthropology published between 1932 and 1989. Physical anthropology texts argued that biological races exist until the 1970s, when they began to argue that races do not exist. In contrast, biology textbooks did not undergo such a reversal but many instead dropped their discussion of race altogether. The authors attributed this to biologists trying to avoid discussing the political implications of racial classifications, and to the ongoing discussions in biology about the validity of the idea of "subspecies". The authors concluded, "The concept of race, masking the overwhelming genetic similarity of all peoples and the mosaic patterns of variation that do not correspond to racial divisions, is not only socially dysfunctional but is biologically indefensible as well (pp. 518–519)."
A 1994 examination of 32 English-language sport/exercise science textbooks found that 7 (21.9%) claimed that there are biophysical differences due to race that might explain differences in sports performance, 24 (75%) neither mentioned nor refuted the concept, and 1 (3.1%) expressed caution about the idea.
In February 2001, the editors of Archives of Pediatrics and Adolescent Medicine asked "authors to not use race and ethnicity when there is no biological, scientific, or sociological reason for doing so". The editors also stated that "analysis by race and ethnicity has become an analytical knee-jerk reflex". Nature Genetics now asks authors to "explain why they make use of particular ethnic groups or populations, and how classification was achieved".
Morning (2008) looked at high school biology textbooks during the 1952–2002 period and initially found a similar pattern: direct discussion of race fell from 92% of textbooks to only 35% in the 1983–92 period, though it subsequently rose again to 43%. More indirect and brief discussions of race in the context of medical disorders have increased from none to 93% of textbooks. In general, the material on race has moved from surface traits to genetics and evolutionary history. The study argues that the textbooks' fundamental message about the existence of races has changed little.
Surveying views on race in the scientific community in 2008, Morning concluded that biologists had failed to come to a clear consensus, and they often split along cultural and demographic lines. She notes: "At best, one can conclude that biologists and anthropologists now appear equally divided in their beliefs about the nature of race."
Gissis (2008) examined several important American and British journals in genetics, epidemiology and medicine for their content during the 1946–2003 period, writing: "Based upon my findings I argue that the category of race only seemingly disappeared from scientific discourse after World War II and has had a fluctuating yet continuous use during the time span from 1946 to 2003, and has even become more pronounced from the early 1970s on".
In a 2008 study, 33 health services researchers from differing geographic regions were interviewed. The researchers recognized the problems with racial and ethnic variables but the majority still believed these variables were necessary and useful.
A 2010 examination of 18 widely used English anatomy textbooks found that they all represented human biological variation in superficial and outdated ways, many of them making use of the race concept in ways that were current in 1950s anthropology. The authors recommended that anatomical education should describe human anatomical variation in more detail and rely on newer research that demonstrates the inadequacies of simple racial typologies.
A 2021 study that examined over 11,000 papers published from 1949 to 2018 in the American Journal of Human Genetics found that "race" was used in only 5% of papers published in the last decade, down from 22% in the first. Together with an increase in the use of the terms "ethnicity", "ancestry", and location-based terms, this suggests that human geneticists have mostly abandoned the term "race".
The National Academies of Sciences, Engineering, and Medicine (NASEM), supported by the US National Institutes of Health, formally declared that "researchers should not use race as a proxy for describing human genetic variation". The report of its Committee on the Use of Race, Ethnicity, and Ancestry as Population Descriptors in Genomics Research, titled Using Population Descriptors in Genetics and Genomics Research, was released on 14 March 2023. The report stated: "In humans, race is a socially constructed designation, a misleading and harmful surrogate for population genetic differences, and has a long history of being incorrectly identified as the major genetic reason for phenotypic differences between groups." Committee co-chairs Charmaine D. Royal and Robert O. Keohane of Duke University agreed at the meeting: "Classifying people by race is a practice entangled with and rooted in racism."
Sociology
Lester Frank Ward (1841–1913), considered to be one of the founders of American sociology, rejected notions that there were fundamental differences that distinguished one race from another, although he acknowledged that social conditions differed dramatically by race. At the turn of the 20th century, sociologists viewed the concept of race in ways that were shaped by the scientific racism of the 19th and early 20th centuries. Many sociologists focused on African Americans, called Negroes at that time, and claimed that they were inferior to whites. White sociologist Charlotte Perkins Gilman (1860–1935), for example, used biological arguments to claim the inferiority of African Americans. American sociologist Charles H. Cooley (1864–1929) theorized that differences among races were "natural", and that biological differences result in differences in intellectual abilities. Edward Alsworth Ross (1866–1951), also an important figure in the founding of American sociology, and a eugenicist, believed that whites were the superior race, and that there were essential differences in "temperament" among races. In 1910, the American Journal of Sociology published an article by Ulysses G. Weatherly (1865–1940) that called for white supremacy and segregation of the races to protect racial purity.
W. E. B. Du Bois (1868–1963), one of the first African-American sociologists, was the first sociologist to use sociological concepts and empirical research methods to analyze race as a social construct instead of a biological reality. Beginning in 1899 with his book The Philadelphia Negro, Du Bois studied and wrote about race and racism throughout his career. In his work, he contended that social class, colonialism, and capitalism shaped ideas about race and racial categories. Social scientists largely abandoned scientific racism and biological reasons for racial categorization schemes by the 1930s. Other early sociologists, especially those associated with the Chicago School, joined Du Bois in theorizing race as a socially constructed fact. In 1978, William Julius Wilson argued that race and racial classification systems were declining in significance, and that instead, social class more accurately described what sociologists had earlier understood as race. In 1986, sociologists Michael Omi and Howard Winant introduced the concept of racial formation to describe the process by which racial categories are created. Omi and Winant assert that "there is no biological basis for distinguishing among human groups along the lines of race".
Eduardo Bonilla-Silva, professor of sociology at Duke University, remarks: "I contend that racism is, more than anything else, a matter of group power; it is about a dominant racial group (whites) striving to maintain its systemic advantages and minorities fighting to subvert the racial status quo." The practices that take place under this new "color-blind racism" are subtle, institutionalized, and supposedly not racial. Color-blind racism thrives on the idea that race is no longer an issue in the United States, despite the contradictions between the alleged color-blindness of most whites and the persistence of a color-coded system of inequality.
Today, sociologists generally understand race and racial categories as socially constructed, and reject racial categorization schemes that depend on biological differences.
Political and practical uses
Biomedicine
In the United States, federal government policy promotes the use of racially categorized data to identify and address health disparities between racial or ethnic groups. In clinical settings, race has sometimes been considered in the diagnosis and treatment of medical conditions. Doctors have noted that some medical conditions are more prevalent in certain racial or ethnic groups than in others, without being sure of the cause of those differences. Recent interest in race-based medicine, or race-targeted pharmacogenomics, has been fueled by the proliferation of human genetic data which followed the decoding of the human genome in the first decade of the twenty-first century. There is an active debate among biomedical researchers about the meaning and importance of race in their research. Proponents of the use of racial categories in biomedicine argue that continued use of racial categorizations in biomedical research and clinical practice makes possible the application of new genetic findings, and provides a clue to diagnosis. Biomedical researchers' positions on race fall into two main camps: those who consider the concept of race to have no biological basis and those who consider it to have the potential to be biologically meaningful. Members of the latter camp often base their arguments around the potential to create genome-based personalized medicine.
Other researchers point out that finding a difference in disease prevalence between two socially defined groups does not necessarily imply genetic causation of the difference. They suggest that medical practices should maintain their focus on the individual rather than an individual's membership to any group. They argue that overemphasizing genetic contributions to health disparities carries various risks such as reinforcing stereotypes, promoting racism or ignoring the contribution of non-genetic factors to health disparities. International epidemiological data show that living conditions rather than race make the biggest difference in health outcomes even for diseases that have "race-specific" treatments. Some studies have found that patients are reluctant to accept racial categorization in medical practice.
Law enforcement
In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to apprehend. From the perspective of law enforcement officers, it is generally more important to arrive at a description that will readily suggest the general appearance of an individual than to make a scientifically valid categorization by DNA or other such means. Thus, in addition to assigning a wanted individual to a racial category, such a description will include: height, weight, eye color, scars and other distinguishing characteristics.
Criminal justice agencies in England and Wales use at least two separate racial/ethnic classification systems when reporting crime, as of 2010. One is the system used in the 2001 Census when individuals identify themselves as belonging to a particular ethnic group: W1 (White-British), W2 (White-Irish), W9 (Any other white background); M1 (White and black Caribbean), M2 (White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other black background); O1 (Chinese), O9 (Any other). The other is categories used by the police when they visually identify someone as belonging to an ethnic group, e.g. at the time of a stop and search or an arrest: White – North European (IC1), White – South European (IC2), Black (IC3), Asian (IC4), Chinese, Japanese, or South East Asian (IC5), Middle Eastern (IC6), and Unknown (IC0). "IC" stands for "Identification Code;" these items are also referred to as Phoenix classifications. Officers are instructed to "record the response that has been given" even if the person gives an answer which may be incorrect; their own perception of the person's ethnic background is recorded separately. Comparability of the information being recorded by officers was brought into question by the Office for National Statistics (ONS) in September 2007, as part of its Equality Data Review; one problem cited was the number of reports that contained an ethnicity of "Not Stated".
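For illustration, the two coding systems described above can be represented as simple lookup tables. The sketch below follows the code values given in the text, but the record layout and field names are hypothetical; it merely mirrors the instruction that the self-reported answer and the officer's perception are recorded separately.

```python
# Illustrative encoding of the two classification systems described above:
# the 2001 Census self-identification codes and the officer-assigned Phoenix
# "IC" codes. The dict representation is just one possible choice.

CENSUS_2001 = {
    "W1": "White-British", "W2": "White-Irish", "W9": "Any other white background",
    "M1": "White and black Caribbean", "M2": "White and black African",
    "M3": "White and Asian", "M9": "Any other mixed background",
    "A1": "Asian-Indian", "A2": "Asian-Pakistani", "A3": "Asian-Bangladeshi",
    "A9": "Any other Asian background",
    "B1": "Black Caribbean", "B2": "Black African", "B3": "Any other black background",
    "O1": "Chinese", "O9": "Any other",
}

PHOENIX_IC = {
    "IC1": "White – North European", "IC2": "White – South European",
    "IC3": "Black", "IC4": "Asian",
    "IC5": "Chinese, Japanese, or South East Asian",
    "IC6": "Middle Eastern", "IC0": "Unknown",
}

# A single record keeps the two codings separate, since the self-reported
# answer and the officer's perception are recorded independently.
# (Field names here are hypothetical.)
record = {"self_identified": "A2", "officer_perceived": "IC4"}
print(CENSUS_2001[record["self_identified"]], "/",
      PHOENIX_IC[record["officer_perceived"]])
```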
In many countries, such as France, the state is legally banned from maintaining data based on race.
In the United States, the practice of racial profiling has been ruled to be both unconstitutional and a violation of civil rights. There is active debate regarding the causes of a marked correlation between recorded crimes, the punishments meted out, and the racial makeup of the country's population. Many consider de facto racial profiling an example of institutional racism in law enforcement.
Mass incarceration in the United States disproportionately impacts African American and Latino communities. Michelle Alexander, author of The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2010), argues that mass incarceration is best understood as not only a system of overcrowded prisons. Mass incarceration is also, "the larger web of laws, rules, policies, and customs that control those labeled criminals both in and out of prison". She defines it further as "a system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls", illustrating the second-class citizenship that is imposed on a disproportionate number of people of color, specifically African-Americans. She compares mass incarceration to Jim Crow laws, stating that both work as racial caste systems.
Many research findings appear to agree that the impact of victim race in the intimate partner violence (IPV) arrest decision might include a racial bias in favor of white victims. A 2011 study of a national sample of IPV arrests found that arrest of the female partner was more likely if the male victim was white and the female offender was black, while arrest of the male partner was more likely if the female victim was white. For both female and male arrests in IPV cases, situations involving married couples were more likely to lead to arrest than those involving dating or divorced couples. More research is needed to understand the agency and community factors that influence police behavior and how discrepancies in IPV interventions and tools of justice can be addressed.
Recent work using DNA cluster analysis to determine racial background has been used by some criminal investigators to narrow their search for the identity of both suspects and victims. Proponents of DNA profiling in criminal investigations cite cases where leads based on DNA analysis proved useful, but the practice remains controversial among medical ethicists, defense lawyers and some in law enforcement.
The Constitution of Australia contains a line about 'people of any race for whom it is deemed necessary to make special laws', despite there being no agreed definition of race described in the document.
Forensic anthropology
Similarly, forensic anthropologists draw on highly heritable morphological features of human remains (e.g. cranial measurements) to aid in the identification of the body, including in terms of race. In a 1992 article, anthropologist Norman Sauer noted that anthropologists had generally abandoned the concept of race as a valid representation of human biological diversity, except for forensic anthropologists. He asked, "If races don't exist, why are forensic anthropologists so good at identifying them?" He concluded that the forensic assignment of remains to a racial category reflects the correlation between skeletal traits, geographic ancestry, and the social categories built upon it, rather than a vindication of the biological race concept itself.
Identification of the ancestry of an individual is dependent upon knowledge of the frequency and distribution of phenotypic traits in a population. This does not necessitate the use of a racial classification scheme based on unrelated traits, although the race concept is widely used in medical and legal contexts in the United States. Some studies have reported that races can be identified with a high degree of accuracy using certain methods, such as that developed by Giles and Elliot. However, this method sometimes fails to be replicated in other times and places; for instance, when the method was re-tested to identify Native Americans, the average rate of accuracy dropped from 85% to 33%. Prior information about the individual (e.g. Census data) is also important in allowing the accurate identification of the individual's "race".
Anthropologist C. Loring Brace has taken a different approach, opposing use of the term altogether; in association with a NOVA program about race in 2000, he wrote an essay arguing against use of the concept.
A 2002 study found that about 13% of human craniometric variation existed between regions, while 6% existed between local populations within regions and 81% within local populations. In contrast, the opposite pattern of genetic variation was observed for skin color (which is often used to define race), with 88% of variation between regions. The study concluded: "The apportionment of genetic diversity in skin color is atypical, and cannot be used for purposes of classification."
Similarly, a 2009 study found that craniometrics could be used accurately to determine what part of the world someone was from based on their cranium; however, this study also found that there were no abrupt boundaries that separated craniometric variation into distinct racial groups. Another 2009 study showed that American blacks and whites had different skeletal morphologies, and that significant patterning in variation in these traits exists within continents. This suggests that classifying humans into races based on skeletal characteristics would necessitate many different "races" being defined.
In 2010, philosopher Neven Sesardić argued that when several traits are analyzed at the same time, forensic anthropologists can classify a person's race with an accuracy of close to 100% based on only skeletal remains. Sesardić's claim has been disputed by philosopher Massimo Pigliucci, who accused Sesardić of "cherry pick[ing] the scientific evidence and reach[ing] conclusions that are contradicted by it". Specifically, Pigliucci argued that Sesardić misrepresented a paper by Ousley et al. (2009), and neglected to mention that they identified differentiation not just between individuals from different races, but also between individuals from different tribes, local environments, and time periods.
See also
Biological anthropology
Casta
Clan
Cultural identity
Environmental racism
Epicanthic fold
Ethnic nationalism
Ethnic stereotype
Genetic distance
Human skin color
Hypatia transracialism controversy
Interracial marriage
List of contemporary ethnic groups
Melanism
Minzu (anthropology)
Multiracial
Nationalism
Nomen dubium – a scientific name that is of unknown or doubtful application.
Pre-Adamite
Race and ethnicity in censuses (US)
Race and ethnicity in Latin America
Race and genetics
Race and health
Race of the future
Racialization
Raciolinguistics
Racism
Supremacism
Races of Mankind, a Field Museum of Natural History exhibition by sculptor Malvina Hoffman
The Race Question
Structural functionalism

Structural functionalism, or simply functionalism, is "a framework for building theory that sees society as a complex system whose parts work together to promote solidarity and stability".
This approach looks at society through a macro-level orientation, which is a broad focus on the social structures that shape society as a whole, and believes that society has evolved like organisms. This approach looks at both social structure and social functions. Functionalism addresses society as a whole in terms of the function of its constituent elements; namely norms, customs, traditions, and institutions.
A common analogy called the organic or biological analogy, popularized by Herbert Spencer, presents these parts of society as human body "organs" that work toward the proper functioning of the "body" as a whole. In the most basic terms, it simply emphasizes "the effort to impute, as rigorously as possible, to each feature, custom, or practice, its effect on the functioning of a supposedly stable, cohesive system". For Talcott Parsons, "structural-functionalism" came to describe a particular stage in the methodological development of social science, rather than a specific school of thought.
Theory
In sociology, classical theories are defined by a tendency towards biological analogy and notions of social evolutionism:
While one may regard functionalism as a logical extension of the organic analogies for societies presented by political philosophers such as Rousseau, sociology draws firmer attention to those institutions unique to industrialized capitalist society (or modernity).
Auguste Comte believed that society constitutes a separate "level" of reality, distinct from both biological and inorganic matter. Explanations of social phenomena had therefore to be constructed within this level, individuals being merely transient occupants of comparatively stable social roles. In this view, Comte was followed by Émile Durkheim.
A central concern for Durkheim was the question of how certain societies maintain internal stability and survive over time. He proposed that such societies tend to be segmented, with equivalent parts held together by shared values, common symbols or (as his nephew Marcel Mauss held), systems of exchanges. Durkheim used the term "mechanical solidarity" to refer to these types of "social bonds, based on common sentiments and shared moral values, that are strong among members of pre-industrial societies". In modern, complex societies, members perform very different tasks, resulting in a strong interdependence. Based on the metaphor above of an organism in which many parts function together to sustain the whole, Durkheim argued that complex societies are held together by "organic solidarity", i.e. "social bonds, based on specialization and interdependence, that are strong among members of industrial societies".
The central concern of structural functionalism may be regarded as a continuation of the Durkheimian task of explaining the apparent stability and internal cohesion needed by societies to endure over time. Societies are seen as coherent, bounded and fundamentally relational constructs that function like organisms, with their various parts (social institutions) working together in an unconscious, quasi-automatic fashion toward achieving an overall social equilibrium. All social and cultural phenomena are therefore seen as functional in the sense of working together, and are effectively deemed to have "lives" of their own. They are primarily analyzed in terms of this function. The individual is significant not in and of themselves, but rather in terms of their status, their position in patterns of social relations, and the behaviours associated with their status. Therefore, the social structure is the network of statuses connected by associated roles.
Functionalism also has an anthropological basis in the work of theorists such as Marcel Mauss, Bronisław Malinowski and Radcliffe-Brown. The prefix 'structural' emerged in Radcliffe-Brown's specific usage. Radcliffe-Brown proposed that most stateless, "primitive" societies, lacking strong centralized institutions, are based on an association of corporate-descent groups, i.e. the respective society's recognised kinship groups. Structural functionalism also took on Malinowski's argument that the basic building block of society is the nuclear family, and that the clan is an outgrowth, not vice versa.
It is simplistic to equate the perspective directly with political conservatism. The tendency to emphasize "cohesive systems", however, leads functionalist theories to be contrasted with "conflict theories" which instead emphasize social problems and inequalities.
Prominent theorists
Auguste Comte
Auguste Comte, the "Father of Positivism", pointed out the need to keep society unified as many traditions were diminishing. He coined the term sociology. Comte suggests that sociology is the product of a three-stage development:
Theological stage: From the beginning of human history until the end of the European Middle Ages, people took a religious view that society expressed God's will. In the theological state, the human mind, seeking the essential nature of beings, the first and final causes (the origin and purpose) of all effects—in short, absolute knowledge—supposes all phenomena to be produced by the immediate action of supernatural beings.
Metaphysical stage: People began seeing society as a natural system as opposed to the supernatural. This began with the Enlightenment and the ideas of Hobbes, Locke, and Rousseau. Perceptions of society reflected the failings of a selfish human nature rather than the perfection of God.
Positive or scientific stage: Describing society through the application of the scientific approach, which draws on the work of scientists.
Herbert Spencer
Herbert Spencer (1820–1903) was a British philosopher famous for applying the theory of natural selection to society. He was in many ways the first true sociological functionalist. In fact, while Durkheim is widely considered the most important functionalist among positivist theorists, it is known that much of his analysis was culled from reading Spencer's work, especially his Principles of Sociology (1874–96). In describing society, Spencer alludes to the analogy of a human body. Just as the structural parts of the human body—the skeleton, muscles, and various internal organs—function independently to help the entire organism survive, social structures work together to preserve society.
While reading Spencer's massive volumes can be tedious (long passages explicating the organic analogy, with reference to cells, simple organisms, animals, humans and society), there are some important insights that have quietly influenced many contemporary theorists, including Talcott Parsons, in his early work The Structure of Social Action (1937). Cultural anthropology also consistently uses functionalism.
This evolutionary model, unlike most 19th century evolutionary theories, is cyclical, beginning with the differentiation and increasing complication of an organic or "super-organic" (Spencer's term for a social system) body, followed by a fluctuating state of equilibrium and disequilibrium (or a state of adjustment and adaptation), and, finally, the stage of disintegration or dissolution. Following Thomas Malthus' population principles, Spencer concluded that society is constantly facing selection pressures (internal and external) that force it to adapt its internal structure through differentiation.
Every solution, however, causes a new set of selection pressures that threaten society's viability. Spencer was not a determinist in the sense that he never said that
Selection pressures will be felt in time to change them;
They will be felt and reacted to; or
The solutions will always work.
In fact, he was in many ways a political sociologist, and recognized that the degree of centralized and consolidated authority in a given polity could make or break its ability to adapt. In other words, he saw a general trend towards the centralization of power as leading to stagnation and ultimately, pressures to decentralize.
More specifically, Spencer recognized three functional needs or prerequisites that produce selection pressures: they are regulatory, operative (production) and distributive. He argued that all societies need to solve problems of control and coordination, production of goods, services and ideas, and, finally, to find ways of distributing these resources.
Initially, in tribal societies, these three needs are inseparable, and the kinship system is the dominant structure that satisfies them. As many scholars have noted, all institutions are subsumed under kinship organization, but, with increasing population (both in terms of sheer numbers and density), problems emerge with regard to feeding individuals, creating new forms of organization—consider the emergent division of labour—coordinating and controlling various differentiated social units, and developing systems of resource distribution.
The solution, as Spencer sees it, is to differentiate structures to fulfill more specialized functions; thus, a chief or "big man" emerges, soon followed by a group of lieutenants, and later kings and administrators. The structural parts of society (e.g. families, work) function interdependently to help society function. Therefore, social structures work together to preserve society.
Talcott Parsons
Talcott Parsons began writing in the 1930s and contributed to sociology, political science, anthropology, and psychology. Structural functionalism and Parsons have received much criticism. Numerous critics have pointed out Parsons' underemphasis of political and economic struggle, the basics of social change, and the generally "manipulative" conduct unregulated by values and norms. Structural functionalism, and a large portion of Parsons' works, appear to be insufficiently defined with regard to the connections between institutionalized and non-institutionalized conduct, and the processes by which institutionalization takes place.
Parsons was heavily influenced by Durkheim and Max Weber, synthesizing much of their work into his action theory, which he based on the system-theoretical concept and the methodological principle of voluntary action. He held that "the social system is made up of the actions of individuals". His starting point, accordingly, is the interaction between two individuals faced with a variety of choices about how they might act, choices that are influenced and constrained by a number of physical and social factors.
Parsons determined that each individual has expectations of the other's action and reaction to their own behavior, and that these expectations would (if successful) be "derived" from the accepted norms and values of the society they inhabit. As Parsons himself emphasized, in a general context there would never exist any perfect "fit" between behaviors and norms, so such a relation is never complete or "perfect".
Social norms were always problematic for Parsons, who never claimed (as has often been alleged) that social norms were generally accepted and agreed upon as some kind of universal law. Whether social norms were accepted or not was for Parsons simply a historical question.
As behaviors are repeated in more interactions, and these expectations are entrenched or institutionalized, a role is created. Parsons defines a "role" as the normatively-regulated participation "of a person in a concrete process of social interaction with specific, concrete role-partners". Although any individual, theoretically, can fulfill any role, the individual is expected to conform to the norms governing the nature of the role they fulfill.
Furthermore, one person can and does fulfill many different roles at the same time. In one sense, an individual can be seen to be a "composition" of the roles he inhabits. Certainly, today, when asked to describe themselves, most people would answer with reference to their societal roles.
Parsons later developed the idea of roles into collectivities of roles that complement each other in fulfilling functions for society. Some roles are bound up in institutions and social structures (economic, educational, legal and even gender-based). These are functional in the sense that they assist society in operating and fulfilling its functional needs so that society runs smoothly.
Contrary to prevailing myth, Parsons never spoke about a society where there was no conflict or some kind of "perfect" equilibrium. A society's cultural value-system was in the typical case never completely integrated, never static, and most of the time, as in the case of American society, in a complex state of transformation relative to its historical point of departure. Reaching a "perfect" equilibrium was not a serious theoretical question in Parsons' analysis of social systems; indeed, the most dynamic societies, such as the US and India, generally had cultural systems with important inner tensions. According to Parsons, these tensions were a source of their strength rather than the opposite. Parsons never thought about system-institutionalization and the level of strains (tensions, conflict) in the system as opposite forces per se.
The key processes for Parsons for system reproduction are socialization and social control. Socialization is important because it is the mechanism for transferring the accepted norms and values of society to the individuals within the system. Parsons never spoke about "perfect socialization"; in any society, socialization was only partial and "incomplete" from an integral point of view.
Parsons states that "this point ... is independent of the sense in which [the] individual is concretely autonomous or creative rather than 'passive' or 'conforming', for individuality and creativity, are to a considerable extent, phenomena of the institutionalization of expectations"; they are culturally constructed.
Socialization is supported by the positive and negative sanctioning of role behaviours that do or do not meet these expectations. A punishment could be informal, like a snigger or gossip, or more formalized, through institutions such as prisons and mental homes. If these two processes were perfect, society would become static and unchanging, but in reality, this is unlikely to occur for long.
Parsons recognizes this, stating that he treats "the structure of the system as problematic and subject to change", and that his concept of the tendency towards equilibrium "does not imply the empirical dominance of stability over change". He does, however, believe that these changes occur in a relatively smooth way.
Individuals in interaction with changing situations adapt through a process of "role bargaining". Once the roles are established, they create norms that guide further action and are thus institutionalized, creating stability across social interactions. Where the adaptation process cannot adjust, due to sharp shocks or immediate radical change, structural dissolution occurs and either new structures (or therefore a new system) are formed, or society dies. This model of social change has been described as a "moving equilibrium", and emphasizes a desire for social order.
Davis and Moore
Kingsley Davis and Wilbert E. Moore (1945) gave an argument for social stratification based on the idea of "functional necessity" (also known as the Davis-Moore hypothesis). They argue that the most difficult jobs in any society have the highest incomes in order to motivate individuals to fill the roles needed by the division of labour. Thus, inequality serves social stability.
This argument has been criticized as fallacious from a number of different angles: the argument is both that the individuals who are the most deserving are the highest rewarded, and that a system of unequal rewards is necessary, otherwise no individuals would perform as needed for the society to function. The problem is that these rewards are supposed to be based upon objective merit, rather than subjective "motivations." The argument also does not clearly establish why some positions are worth more than others, even when they benefit more people in society, e.g., teachers compared to athletes and movie stars. Critics have suggested that structural inequality (inherited wealth, family power, etc.) is itself a cause of individual success or failure, not a consequence of it.
Robert Merton
Robert K. Merton made important refinements to functionalist thought. He fundamentally agreed with Parsons' theory but acknowledged that it could be questioned, believing that it was overgeneralized. Merton tended to emphasize middle-range theory rather than grand theory, meaning that he was able to deal specifically with some of the limitations in Parsons' thinking. Merton believed that any social structure probably has many functions, some more obvious than others. He identified three main limitations: functional unity, universal functionalism and indispensability. He also developed the concept of deviance and made the distinction between manifest and latent functions. Manifest functions referred to the recognized and intended consequences of any social pattern. Latent functions referred to unrecognized and unintended consequences of any social pattern.
Merton criticized functional unity, saying that not all parts of a modern complex society work for the functional unity of society. Consequently, there is a social dysfunction referred to as any social pattern that may disrupt the operation of society. Some institutions and structures may have other functions, and some may even be generally dysfunctional, or be functional for some while being dysfunctional for others. This is because not all structures are functional for society as a whole. Some practices are only functional for a dominant individual or a group.
There are two types of functions that Merton discusses. The first is "manifest functions", in which a social pattern triggers a recognized and intended consequence. The manifest functions of education include preparing for a career by getting good grades, graduating and finding a good job. The second type is "latent functions", where a social pattern results in an unrecognized or unintended consequence. The latent functions of education include meeting new people, extra-curricular activities and school trips.
Another type of social function is "social dysfunction", which is any undesirable consequence that disrupts the operation of society. The social dysfunctions of education include not getting good grades or a job. Merton states that by recognizing and examining the dysfunctional aspects of society we can explain the development and persistence of alternatives. Thus, as Holmwood states, "Merton explicitly made power and conflict central issues for research within a functionalist paradigm."
Merton also noted that there may be functional alternatives to the institutions and structures currently fulfilling the functions of society. This means that the institutions that currently exist are not indispensable to society. Merton states "just as the same item may have multiple functions, so may the same function be diversely fulfilled by alternative items." This notion of functional alternatives is important because it reduces the tendency of functionalism to imply approval of the status quo.
Merton's theory of deviance is derived from Durkheim's idea of anomie. It is central in explaining how internal changes can occur in a system. For Merton, anomie means a discontinuity between cultural goals and the accepted methods available for reaching them.
Merton believes that there are five situations facing an actor:
Conformity occurs when an individual has the means and desire to achieve the cultural goals socialized into them.
Innovation occurs when an individual strives to attain the accepted cultural goals but chooses to do so by novel or unaccepted methods.
Ritualism occurs when an individual continues to do things as prescribed by society but forfeits the achievement of the goals.
Retreatism is the rejection of both the means and the goals of society.
Rebellion is a combination of the rejection of societal goals and means and a substitution of other goals and means.
Thus it can be seen that change can occur internally in society through either innovation or rebellion. It is true that society will attempt to control these individuals and negate the changes, but as the innovation or rebellion builds momentum, society will eventually adapt or face dissolution.
Almond and Powell
In the 1970s, political scientists Gabriel Almond and Bingham Powell introduced a structural-functionalist approach to comparing political systems. They argued that, in order to understand a political system, it is necessary to understand not only its institutions (or structures) but also their respective functions. They also insisted that these institutions, to be properly understood, must be placed in a meaningful and dynamic historical context.
This idea stood in marked contrast to prevalent approaches in the field of comparative politics—the state-society theory and the dependency theory. These were the descendants of David Easton's system theory in international relations, a mechanistic view that saw all political systems as essentially the same, subject to the same laws of "stimulus and response"—or inputs and outputs—while paying little attention to unique characteristics. The structural-functional approach is based on the view that a political system is made up of several key components, including interest groups, political parties and branches of government.
In addition to structures, Almond and Powell showed that a political system consists of various functions, chief among them political socialization, recruitment and communication: socialization refers to the way in which societies pass along their values and beliefs to succeeding generations, and in political terms describe the process by which a society inculcates civic virtues, or the habits of effective citizenship; recruitment denotes the process by which a political system generates interest, engagement and participation from citizens; and communication refers to the way that a system promulgates its values and information.
Unilineal descent
In their attempt to explain the social stability of African "primitive" stateless societies where they undertook their fieldwork, Evans-Pritchard (1940) and Meyer Fortes (1945) argued that the Tallensi and the Nuer were primarily organized around unilineal descent groups. Such groups are characterized by common purposes, such as administering property or defending against attacks; they form a permanent social structure that persists well beyond the lifespan of their members. In the case of the Tallensi and the Nuer, these corporate groups were based on kinship, which in turn fitted into the larger structures of unilineal descent; because of this strong emphasis on unilineal descent, Evans-Pritchard's and Fortes' model came to be called "descent theory". Moreover, in this African context territorial divisions were aligned with lineages; descent theory therefore synthesized both blood and soil as the same. Affinal ties with the parent through whom descent is not reckoned, however, are considered to be merely complementary or secondary (Fortes created the concept of "complementary filiation"), with the reckoning of kinship through descent being considered the primary organizing force of social systems.
Descent theory soon found its critics. Many African tribal societies seemed to fit this neat model rather well, although Africanists, such as Paul Richards, also argued that Fortes and Evans-Pritchard had deliberately downplayed internal contradictions and overemphasized the stability of the local lineage systems and their significance for the organization of society. In many Asian settings, however, the problems were even more obvious. In Papua New Guinea, the local patrilineal descent groups were fragmented and contained large numbers of non-agnates. Status distinctions did not depend on descent, and genealogies were too short to account for social solidarity through identification with a common ancestor. In particular, the phenomenon of cognatic (or bilateral) kinship posed a serious problem to the proposition that descent groups are the primary element behind the social structures of "primitive" societies.
Leach's (1966) critique came in the form of the classical Malinowskian argument, pointing out that "in Evans-Pritchard's studies of the Nuer and also in Fortes's studies of the Tallensi unilineal descent turns out to be largely an ideal concept to which the empirical facts are only adapted by means of fictions". People's self-interest, manoeuvring, manipulation and competition had been ignored. Moreover, descent theory neglected the significance of marriage and affinal ties, which were emphasized by Lévi-Strauss's structural anthropology, at the expense of overemphasizing the role of descent. To quote Leach: "The evident importance attached to matrilateral and affinal kinship connections is not so much explained as explained away."
Biological
Biological functionalism is an anthropological paradigm asserting that all social institutions, beliefs, values and practices serve to address pragmatic concerns. In many ways, the paradigm derives from the longer-established structural functionalism, yet the two diverge from one another significantly. While both maintain the fundamental belief that a social structure is composed of many interdependent frames of reference, biological functionalists criticise the structural view that social solidarity and a collective conscience are required in a functioning system. Biological functionalism instead maintains that individual survival and health are the driving motivations of action, and that the importance of social rigidity is negligible.
Everyday application
Although human actions undoubtedly do not always engender positive results for the individual, a biological functionalist would argue that the intention was still self-preservation, albeit unsuccessful. An example of this is the belief in luck as an entity; while a disproportionately strong belief in good luck may lead to undesirable results, such as a huge loss of money from gambling, biological functionalism maintains that the newly created ability of the gambler to condemn luck will allow them to be free of individual blame, thus serving a practical and individual purpose. In this sense, biological functionalism maintains that while bad results, which do not serve any pragmatic concerns, often occur in life, an entrenched cognitive psychological motivation was attempting to create a positive result, in spite of its eventual failure.
Decline
Structural functionalism reached the peak of its influence in the 1940s and 1950s, and by the 1960s was in rapid decline. By the 1980s, its place was taken in Europe by more conflict-oriented approaches, and more recently by structuralism. While some of the critical approaches also gained popularity in the United States, the mainstream of the discipline has instead shifted to a myriad of empirically oriented middle-range theories with no overarching theoretical orientation. To most sociologists, functionalism is now "as dead as a dodo".
As the influence of functionalism in the 1960s began to wane, the linguistic and cultural turns led to a myriad of new movements in the social sciences: "According to Giddens, the orthodox consensus terminated in the late 1960s and 1970s as the middle ground shared by otherwise competing perspectives gave way and was replaced by a baffling variety of competing perspectives. This third generation of social theory includes phenomenologically inspired approaches, critical theory, ethnomethodology, symbolic interactionism, structuralism, post-structuralism, and theories written in the tradition of hermeneutics and ordinary language philosophy."
While absent from empirical sociology, functionalist themes remained detectable in sociological theory, most notably in the works of Luhmann and Giddens. There are, however, signs of an incipient revival, as functionalist claims have recently been bolstered by developments in multilevel selection theory and in empirical research on how groups solve social dilemmas. Recent developments in evolutionary theory—especially by biologist David Sloan Wilson and anthropologists Robert Boyd and Peter Richerson—have provided strong support for structural functionalism in the form of multilevel selection theory. In this theory, culture and social structure are seen as a Darwinian (biological or cultural) adaptation at the group level.
Criticisms
In the 1960s, functionalism was criticized for being unable to account for social change, or for structural contradictions and conflict (and thus was often called "consensus theory"). It also ignores inequalities, including those of race, gender and class, which cause tension and conflict. The refutation of the second criticism of functionalism, that it is static and has no concept of change, has already been articulated above, concluding that while Parsons' theory allows for change, it is an orderly process of change [Parsons, 1961:38], a moving equilibrium. Therefore, referring to Parsons' theory of society as static is inaccurate. It is true that it does place emphasis on equilibrium and the maintenance or quick return to social order, but this is a product of the time in which Parsons was writing (post-World War II and the start of the Cold War). Society was in upheaval and fear abounded. At the time social order was crucial, and this is reflected in Parsons' tendency to promote equilibrium and social order rather than social change.
Furthermore, Durkheim favoured a radical form of guild socialism along with functionalist explanations. Also, Marxism, while acknowledging social contradictions, still uses functionalist explanations. Parsons' evolutionary theory describes the differentiation and reintegration of systems and subsystems, and thus at least temporary conflict before reintegration (ibid). "The fact that functional analysis can be seen by some as inherently conservative and by others as inherently radical suggests that it may be inherently neither one nor the other."
Stronger criticisms include the epistemological argument that functionalism is tautologous, that is, it attempts to account for the development of social institutions solely through recourse to the effects that are attributed to them, and thereby explains the two circularly. However, Parsons drew directly on many of Durkheim's concepts in creating his theory. Certainly Durkheim was one of the first theorists to explain a phenomenon with reference to the function it served for society. He said, "the determination of function is…necessary for the complete explanation of the phenomena." However Durkheim made a clear distinction between historical and functional analysis, saying, "When ... the explanation of a social phenomenon is undertaken, we must seek separately the efficient cause which produces it and the function it fulfills." If Durkheim made this distinction, then it is unlikely that Parsons did not.
However Merton does explicitly state that functional analysis does not seek to explain why the action happened in the first instance, but why it continues or is reproduced. By this particular logic, it can be argued that functionalists do not necessarily explain the original cause of a phenomenon with reference to its effect. Yet the logic stated in reverse, that social phenomena are (re)produced because they serve ends, is unoriginal to functionalist thought. Thus functionalism is either undefinable or it can be defined by the teleological arguments which functionalist theorists normatively produced before Merton.
Another criticism describes the ontological argument that society cannot have "needs" as a human being does, and even if society does have needs they need not be met. Anthony Giddens argues that functionalist explanations may all be rewritten as historical accounts of individual human actions and consequences (see Structuration).
A further criticism directed at functionalism is that it contains no sense of agency, that individuals are seen as puppets, acting as their role requires. Yet Holmwood states that the most sophisticated forms of functionalism are based on "a highly developed concept of action," and as was explained above, Parsons took as his starting point the individual and their actions. His theory did not however articulate how these actors exercise their agency in opposition to the socialization and inculcation of accepted norms. As has been shown above, Merton addressed this limitation through his concept of deviance, and so it can be seen that functionalism allows for agency. It cannot, however, explain why individuals choose to accept or reject the accepted norms, why and in what circumstances they choose to exercise their agency, and this does remain a considerable limitation of the theory.
Further criticisms have been levelled at functionalism by proponents of other social theories, particularly conflict theorists, Marxists, feminists and postmodernists. Conflict theorists criticized functionalism's concept of systems as giving far too much weight to integration and consensus, and neglecting independence and conflict. Lockwood, in line with conflict theory, suggested that Parsons' theory missed the concept of system contradiction. He did not account for those parts of the system that might have tendencies to mal-integration. According to Lockwood, it was these tendencies that come to the surface as opposition and conflict among actors. However, Parsons thought that the issues of conflict and cooperation were very much intertwined and sought to account for both in his model. In this, however, he was limited by his analysis of an "ideal type" of society which was characterized by consensus. Merton, through his critique of functional unity, introduced into functionalism an explicit analysis of tension and conflict. Yet Merton's functionalist explanations of social phenomena continued to rest on the idea that society is primarily co-operative rather than conflicted, which differentiates Merton from conflict theorists.
Marxism, which was revived soon after the emergence of conflict theory, criticized professional sociology (functionalism and conflict theory alike) for being partisan to advanced welfare capitalism. Gouldner thought that Parsons' theory specifically was an expression of the dominant interests of welfare capitalism, that it justified institutions with reference to the function they fulfill for society. It may be that Parsons' work implied or articulated that certain institutions were necessary to fulfill the functional prerequisites of society, but whether or not this is the case, Merton explicitly states that institutions are not indispensable and that there are functional alternatives. That he does not identify any alternatives to the current institutions does reflect a conservative bias, which as has been stated before is a product of the specific time that he was writing in.
As functionalism's prominence was ending, feminism was on the rise, and it attempted a radical criticism of functionalism. Feminists argued that functionalism neglected the suppression of women within the family structure. Holmwood shows, however, that Parsons did in fact describe the situations where tensions and conflict existed or were about to take place, even if he did not articulate those conflicts. Some feminists agree, suggesting that Parsons provided accurate descriptions of these situations. On the other hand, Parsons recognized that he had oversimplified his functional analysis of women in relation to work and the family, and focused on the positive functions of the family for society and not on its dysfunctions for women. Merton, too, although addressing situations where function and dysfunction occurred simultaneously, lacked a "feminist sensibility".
Postmodernism, as a theory, is critical of claims of objectivity. Therefore, the idea of grand theory and grand narrative that can explain society in all its forms is treated with skepticism. This critique focuses on exposing the danger that grand theory can pose when not seen as a limited perspective, as one way of understanding society.
Jeffrey Alexander (1985) sees functionalism as a broad school rather than a specific method or system, such as that of Parsons, which is capable of taking equilibrium (stability) as a reference point rather than an assumption and which treats structural differentiation as a major form of social change. The name "functionalism" implies a difference of method or interpretation that does not exist. This removes the determinism criticized above. Cohen argues that rather than needs, a society has dispositional facts: features of the social environment that support the existence of particular social institutions but do not cause them.
Influential theorists
Kingsley Davis
Michael Denton
Émile Durkheim
David Keen
Niklas Luhmann
Bronisław Malinowski
Robert K. Merton
Wilbert E. Moore
George Murdock
Talcott Parsons
Alfred Reginald Radcliffe-Brown
Herbert Spencer
Fei Xiaotong
See also
Causation (sociology)
Functional structuralism
Historicism
Neofunctionalism (sociology)
New institutional economics
Pure sociology
Sociotechnical system
Systems theory
Vacancy chain
Dennis Wrong (critic of structural functionalism)
References
Barnard, A. 2000. History and Theory in Anthropology. Cambridge: CUP.
Barnard, A., and Good, A. 1984. Research Practices in the Study of Kinship. London: Academic Press.
Barnes, J. 1971. Three Styles in the Study of Kinship. London: Butler & Tanner.
Elster, J., (1990), “Merton's Functionalism and the Unintended Consequences of Action”, in Clark, J., Modgil, C. & Modgil, S., (eds) Robert Merton: Consensus and Controversy, Falmer Press, London, pp. 129–35
Gingrich, P., (1999) “Functionalism and Parsons” in Sociology 250 Subject Notes, University of Regina, accessed 24/5/06, uregina.ca
Holy, L. 1996. Anthropological Perspectives on Kinship. London: Pluto Press.
Homans, George Casper (1962). Sentiments and Activities. New York: The Free Press of Glencoe.
Hoult, Thomas Ford (1969). Dictionary of Modern Sociology.
Kuper, A. 1996. Anthropology and Anthropologists. London: Routledge.
Layton, R. 1997. An Introduction to Theory in Anthropology. Cambridge: CUP.
Leach, E. 1954. Political Systems of Highland Burma. London: Bell.
Leach, E. 1966. Rethinking Anthropology. Northampton: Dickens.
Lenski, Gerhard (1966). Power and Privilege: A Theory of Social Stratification. New York: McGraw-Hill.
Lenski, Gerhard (2005). Evolutionary-Ecological Theory. Boulder, CO: Paradigm.
Lévi-Strauss, C. 1969. The Elementary Structures of Kinship. London: Eyre and Spottiswoode.
Maryanski, Alexandra (1998). "Evolutionary Sociology." Advances in Human Ecology. 7:1–56.
Maryanski, Alexandra and Jonathan Turner (1992). The Social Cage: Human Nature and the Evolution of Society. Stanford: Stanford University Press.
Marshall, Gordon (1994). The Concise Oxford Dictionary of Sociology.
Parsons, T., (1961) Theories of Society: foundations of modern sociological theory, Free Press, New York
Perey, Arnold (2005) "Malinowski, His Diary, and Men Today (with a note on the nature of Malinowskian functionalism)"
Ritzer, George and Douglas J. Goodman (2004). Sociological Theory, 6th ed. New York: McGraw-Hill.
Sanderson, Stephen K. (1999). Social Transformations: A General Theory of Historical Development. Lanham, MD: Rowman & Littlefield.
Turner, Jonathan (1995). Macrodynamics: Toward a Theory on the Organization of Human Populations. New Brunswick: Rutgers University Press.
Turner, Jonathan and Jan Stets (2005). The Sociology of Emotions. Cambridge: Cambridge University Press.
Comparative politics
Functionalism (social theory)
History of sociology
Sociological theories
Anthropology
Cognition
Aurignacian

The Aurignacian is an archaeological industry of the Upper Paleolithic associated with Early European modern humans (EEMH), lasting from 43,000 to 26,000 years ago. The Upper Paleolithic developed in Europe some time after the Levant, where the Emiran period and the Ahmarian period form the first periods of the Upper Paleolithic, corresponding to the first stages of the expansion of Homo sapiens out of Africa. Modern humans then migrated to Europe and created the first European culture of modern humans, the Aurignacian.
The Proto-Aurignacian and the Early Aurignacian stages are dated between about 43,000 and 37,000 years ago. The Aurignacian proper lasted from about 37,000 to 33,000 years ago. A Late Aurignacian phase transitional with the Gravettian dates to about 33,000 to 26,000 years ago.
The type site is the Cave of Aurignac, Haute-Garonne, south-west France. The main preceding period is the Mousterian of the Neanderthals.
One of the oldest examples of figurative art, the Venus of Hohle Fels, comes from the Aurignacian or Proto-Gravettian and is dated to between 40,000 and 35,000 years ago (though earlier figurative art may now be known; see Lubang Jeriji Saléh). It was discovered in September 2008 in a cave at Schelklingen in Baden-Württemberg in western Germany. The German Lion-man figure is given a similar date range.
A "Levantine Aurignacian" culture is known from the Levant, with a type of blade technology very similar to the European Aurignacian, following chronologically the Emiran and Early Ahmarian in the same area of the Near East, and also closely related to them. The Levantine Aurignacian may have preceded European Aurignacian, but there is a possibility that the Levantine Aurignacian was rather the result of reverse influence from the European Aurignacian: this remains unsettled.
Main characteristics
The Aurignacians are part of the wave of anatomically modern humans thought to have spread from Africa through the Near East into Paleolithic Europe, and became known as European early modern humans, or Cro-Magnons. This wave of anatomically modern humans includes fossils of the Ahmarian, Bohunician, Aurignacian, Gravettian, Solutrean and Magdalenian cultures, extending throughout the Last Glacial Maximum (LGM), covering the period of roughly 48,000 to 15,000 years ago. In terms of population, the Aurignacian cultural complex is chronologically associated with the human remains of Goyet Q116-1, while the subsequent Gravettian is associated with the Vestonice cluster.
The Aurignacian tool industry is characterized by worked bone or antler points with grooves cut in the bottom. Their flint tools include fine blades and bladelets struck from prepared cores rather than using crude flakes. The people of this culture also produced some of the earliest known cave art, such as the animal engravings at Trois Freres and the paintings at Chauvet cave in southern France. They also made pendants, bracelets, and ivory beads, as well as three-dimensional figurines. Perforated rods, thought to be spear throwers or shaft wrenches, also are found at their sites.
Art
Aurignacian figurines have been found depicting faunal representations of the time period associated with now-extinct mammals, including mammoths, rhinoceros, and tarpan, along with anthropomorphized depictions that may be interpreted as some of the earliest evidence of religion.
Many 35,000-year-old animal figurines were discovered in the Vogelherd Cave in Germany. One of the horses, amongst six tiny mammoth and horse ivory figures found previously at Vogelherd, was sculpted as skillfully as any piece found throughout the Upper Paleolithic. The production of ivory beads for body ornamentation was also important during the Aurignacian. The famous paintings in Chauvet cave date from this period.
Typical statuettes consist of women that are called Venus figurines. They emphasize the hips, breasts, and other body parts associated with fertility. Feet and arms are lacking or minimized. One of the most ancient figurines is the Venus of Hohle Fels, discovered in 2008 in the Hohle Fels cave in Germany. The figurine has been dated to 35,000 years ago and is the earliest known, undisputed example of a depiction of a human being in prehistoric art. The Lion-man of Hohlenstein-Stadel, found in the Hohlenstein-Stadel cave of Germany's Swabian Alb and dated to 40,000 years ago, is the oldest known anthropomorphic animal figurine in the world.
Aurignacian finds include bone flutes. The oldest undisputed musical instrument was the Hohle Fels Flute discovered in the Hohle Fels cave in Germany's Swabian Alb in 2008. The flute is made from a vulture's wing bone perforated with five finger holes, and dates to approximately 35,000-40,000 years ago. A flute was also found at the Abri Blanchard in southwestern France.
Tools
Stone tools from the Aurignacian culture are known as Mode 4, characterized by blades (rather than the flakes typical of Mode 2 Acheulean and Mode 3 Mousterian industries) struck from prepared cores. Also seen throughout the Upper Paleolithic is a greater degree of tool standardization and the use of bone and antler for tools. Based on research on scraper reduction and the paleoenvironment, the early Aurignacian group moved seasonally over greater distances than earlier tool cultures to procure reindeer herds within cold and open environments.
Population
A 2019 demographic analysis estimated a mean population of 1,500 persons (upper limit: 3,300; lower limit: 800) for western and central Europe during the Aurignacian period (~42,000 to 33,000 y cal BP).
A 2005 study estimated the population of Upper Palaeolithic Europe from 40–30 thousand years ago was 1,738–28,359 (average 4,424).
Association with modern humans
The sophistication and self-awareness demonstrated in the work led archaeologists to consider the makers of Aurignacian artifacts the first modern humans in Europe. Human remains and Late Aurignacian artifacts found in juxtaposition support this inference. Although finds of human skeletal remains in direct association with Proto-Aurignacian technologies are scarce in Europe, the few available are also probably modern human. The best dated association between Aurignacian industries and human remains are those of at least five individuals from the Mladeč caves in the Czech Republic, dated by direct radiocarbon measurements of the skeletal remains to at least 31,000–32,000 years old.
At least three robust, but typically anatomically-modern, individuals from the Peștera cu Oase cave in Romania, were dated directly from the bones to ca. 35,000–36,000 BP. Although not associated directly with archaeological material, these finds are within the chronological and geographical range of the Early Aurignacian in southeastern Europe. On genetic evidence it has been argued that both Aurignacian and the Dabba culture of North Africa came from an earlier big game hunting Levantine Aurignacian culture of the Levant.
Genetics
In a genetic study published in Nature in May 2016, the remains of an early Aurignacian individual, Goyet Q116-1 from modern-day Belgium, were examined. He belonged to the paternal haplogroup C1a and the maternal haplogroup M. Haplogroups identified in other Aurignacian samples are the paternal haplogroups C1b and K2a, and the mt-DNA haplogroups N, R, and U.
The Aurignacian material culture is associated with the expansion of 'early West Eurasians' during the Upper-Paleolithic, replacing or merging with previous Initial Upper Paleolithic cultures to which possibly relates the European Châtelperronian.
Location
Europe
Cave of Aurignac
Bacho Kiro cave
Chauvet Cave
Hohle Fels
Potok Cave
Near East
Ksar Akil
HaYonim Cave
Asia
Lebanon/Palestine/Israel region
Contained within a stratigraphic column, along with other cultures.
Siberia
Many sites in Siberia including around Lake Baikal, the Ob River valley, and Minusinsk.
See also
Cave of Aurignac
Ksar Akil
Venus figurine
Bacho Kiro cave
External links
Picture Gallery of the Paleolithic (reconstructional palaeoethnology), Libor Balák at the Czech Academy of Sciences, the Institute of Archaeology in Brno, The Center for Paleolithic and Paleoethnological Research
Upper Paleolithic cultures of Europe
Peopling of Europe
Industries (archaeology)
Early European modern humans
Androcentrism

Androcentrism (from Ancient Greek ἀνήρ, "man, male") is the practice, conscious or otherwise, of placing a masculine point of view at the center of one's world view, culture, and history, thereby culturally marginalizing femininity. The related adjective is androcentric, while the practice of placing the feminine point of view at the center is gynocentric.
Androcentrism has been described as a pervasive form of sexism. However, it has also been described as a movement centered on, emphasizing, or dominated by males or masculine interests.
Etymology
The term androcentrism was introduced as an analytic concept by Charlotte Perkins Gilman in a scientific debate. Perkins Gilman described androcentric practices in society and the resulting problems they created in her investigation The Man-Made World; or, Our Androcentric Culture, published in 1911. Androcentrism can thus be understood as a societal fixation on masculinity, from which all things originate. Under androcentrism, masculinity is normative and all things outside of masculinity are defined as other. According to Perkins Gilman, masculine patterns of life and masculine mindsets claimed universality while female patterns were considered as deviance.
Science
Until the 19th century, women were effectively barred from higher education in Western countries. For over 300 years, Harvard admitted only white men from prominent families. Many universities, such as the University of Oxford, consciously practiced a numerus clausus and restricted the number of female undergraduates they accepted. Because women gained access to universities and academic life only later, the participation of women in fundamental research has been marginal. The basic principles in the sciences, even the human sciences, have hence been predominantly formed by men.
Medicine
There is a gender health data gap, and women are systematically discriminated against and misdiagnosed in medicine. Early medical research was carried out nearly exclusively on male corpses. Women were considered "small men" and not investigated. To this day, the results of clinical studies are frequently generalized to both sexes even though only men have participated, and the female body is often not considered in animal tests, even when "women's diseases" are concerned. However, female and male bodies differ, all the way down to the cell level. The same diseases can have different symptoms in the sexes, calling for different treatment, and medicines can work completely differently, including different side effects. Since male symptoms are much more prominent, women are symptomatically under- and misdiagnosed, and have, for example, a 50% higher risk of dying from a heart attack. Here, the male and better-known symptoms are chest and shoulder pain, while the female symptoms are upper abdominal pain and nausea.
Literature
Research by Dr. David Anderson and Dr. Mykol Hamilton has documented the under-representation of female characters in a sample of 200 books that included top-selling children's books from 2001 and a seven-year sample of Caldecott award-winning books. There were nearly twice as many male main characters as female main characters, and male characters appeared in illustrations 53 percent more than female characters. Most of the plot-lines centered on the male characters and their experiences of life.
The arts
In 1985, a group of female artists from New York, the Guerrilla Girls, began to protest the under-representation of female artists. According to them, male artists and the male viewpoint continued to dominate the visual art world. In a 1989 poster (displayed on NYC buses) titled "Do women have to be naked to get into the Met. Museum?" they reported that less than 5% of the artists in the Modern Art sections of the Met Museum were women, but 85% of the nudes were female.
Over 20 years later, women were still under-represented in the art world. In 2007, Jerry Saltz (journalist from the New York Times) criticized the Museum of Modern Art for undervaluing work by female artists. Of the 400 works of art he counted in the Museum of Modern Art, only 14 were by women (3.5%). Saltz also found a significant under-representation of female artists in the six other art institutions he studied.
Generic male language
In literature, the use of masculine language to refer to men, women, intersex, and non-binary people may indicate a male or androcentric bias in society where men are seen as the 'norm', and women, intersex, and non-binary people are seen as the 'other'. Philosophy scholar Jennifer Saul argues that the use of male generic language marginalizes women, intersex, and non-binary people in society. In recent years, some writers have started to use more gender-inclusive language (for instance, using the pronouns they/them and using gender-inclusive words like humankind, person, partner, spouse, businessperson, firefighter, chairperson, and police officer).
Many studies have shown that male generic language is not interpreted as truly gender-inclusive. Psychological research has shown that, in comparison to unbiased terms such as "they" and "humankind", masculine terms lead to male-biased mental imagery in the mind of both the listener and the communicator.
Three studies by Mykol Hamilton show that there is not only a male → people bias but also a people → male bias. In other words, a masculine bias remains even when people are exposed to only gender-neutral language (although the bias is lessened). In two of her studies, half of the participants (after exposure to gender-neutral language) had male-biased imagery, but the rest of the participants displayed no gender bias at all. In her third study, only males showed a masculine bias (after exposure to gender-neutral language); females showed no gender bias. Hamilton asserted that this may be because males have grown up able to think of "any person" as the generic "he" more easily than females can, since "he" applies to them. Further, of the two options for neutral language, neutral language that explicitly names women (e.g., "he or she") reduces androcentrism more effectively than neutral language that makes no mention of gender whatsoever (e.g., "human").
Feminist anthropologist Sally Slocum argues that there has been a longstanding male bias in anthropological thought as evidenced by terminology used when referring to society, culture, and humankind. According to Slocum, "All too often the word 'man' is used in such an ambiguous fashion that it is impossible to decide whether it refers to males or just the human species in general, including both males and females."
Men's language will be judged as the 'norm' and anything that women do linguistically will be judged negatively against this. The speech of a socially subordinate group will be interpreted as linguistically inadequate against that used by socially dominant groups. It has been found that women use more hedges and qualifiers than men. Feminine speech has been viewed as more tentative and has been deemed powerless speech. This is based on the view that masculine speech is the standard.
Generic male symbols
On the Internet, many avatars are gender-neutral (such as an image of a smiley face). However, when an avatar is human and discernibly gendered, it usually appears to be a man.
Depictions of skeletons typically have male anatomy rather than female, even when the character of the skeleton is meant to be female.
Impacts
Men are more severely impacted by androcentric thinking. However, the ideology has substantial effects on the way of thinking of everyone within it. In a 2022 study, in which 3,815 people were shown a selection of 256 images containing illusory faces (objects in which humans see faces), 90% of the objects were on average identified as male by the participants.
See also
Honorary male
Male as norm
Male supremacy
Manosphere
Patriarchy
Phallocentrism
Trophy wife
Social epistemology
Feminist philosophy
Feminist terminology
Patriarchy
Philosophy of science
Feminism and society
Corporatism

Corporatism is a political system of interest representation and policymaking whereby corporate groups, such as agricultural, labour, military, business, scientific, or guild associations, come together and negotiate contracts or policy (collective bargaining) on the basis of their common interests. The term is derived from the Latin corpus, or "body".
Corporatism does not refer to a political system dominated by large business interests, even though the latter are commonly referred to as "corporations" in modern American vernacular and legal parlance; the correct term for that theoretical system is corporatocracy. Nor is corporatism government corruption in politics or the use of bribery by corporate interest groups. The terms "corporatocracy" and "corporatism" are often confused due to their similar names and the use of corporations as organs of the state.
Corporatism developed during the 1850s in response to the rise of classical liberalism and Marxism, as it advocated cooperation between the classes instead of class conflict. Adherents of diverse ideologies, including communism, economic liberalism, fascism, and socialism have advocated for corporatist models. Corporatism became one of the main tenets of Italian fascism, and Benito Mussolini's Fascist regime in Italy advocated the total integration of divergent interests into the state for the common good; however, the more democratic neo-corporatism often embraced tripartism.
Corporatist ideas have been expressed since ancient Greek and Roman societies, with integration into Catholic social teaching and Christian democratic political parties. They have been advanced by various advocates and implemented in various societies with a wide variety of political systems, including authoritarianism, absolutism, fascism, liberalism, and social democracy.
Kinship corporatism
Kinship-based corporatism emphasizing clan, ethnic and family identification has been a common phenomenon in Africa, Asia, and Latin America. Confucian societies based upon families and clans in Eastern and Southeast Asia have been considered types of corporatism. Islamic societies often feature strong clans which form the basis for a community-based corporatist society. Family businesses are common worldwide in capitalist societies.
Politics and political economy
Communitarian corporatism
Early concepts of corporatism evolved in Classical Greece. Plato developed the concept of a totalitarian and communitarian corporatist system of natural-based classes and natural social hierarchies that would be organized based on function, such that groups would cooperate to achieve social harmony by emphasizing collective interests while rejecting individual interests.
In Politics, Aristotle described society as being divided between natural classes and functional purposes: those of priests, rulers, slaves and warriors. Ancient Rome adopted Greek concepts of corporatism into its own version of corporatism, adding the concept of political representation on the basis of function that divided representatives into military, professional and religious groups and set up institutions for each group known as collegia.
After the 5th-century fall of Rome and the beginning of the Early Middle Ages, corporatist organizations in western Europe became largely limited to religious orders and to the idea of Christian brotherhood, especially within the context of economic transactions. From the High Middle Ages onward, corporatist organizations became increasingly common in Europe, including such groups as religious orders, monasteries, fraternities, military orders such as the Knights Templar and the Teutonic Order, educational organizations such as the emerging European universities and learned societies, the chartered towns and cities, and most notably the guild system which dominated the economies of population centers in Europe. The military orders notably gained prominence during the period of the Crusades. These corporatist systems co-existed with the governing medieval estates system, and members of the first estate (the clergy), the second estate (the aristocracy), and the third estate (the common people) could also participate in various corporatist bodies. The development of the guild system involved the guilds gaining the power to regulate trade and prices, and guild members included artisans, tradesmen, and other professionals. This diffusion of power is an important aspect of corporatist economic models of economic management and class collaboration.

However, from the 16th century onward, absolute monarchies began to conflict with the diffuse, decentralized powers of the medieval corporatist bodies. Absolute monarchies during the Renaissance and Enlightenment gradually subordinated corporatist systems and corporate groups to the authority of centralized and absolutist governments, removing any checks on royal power these corporatist bodies had previously exercised.
After the outbreak of the French Revolution (1789), the existing absolutist corporatist system in France was abolished due to its endorsement of social hierarchy and special "corporate privilege". The new French government considered corporatism's emphasis on group rights as inconsistent with the government's promotion of individual rights. Subsequently, corporatist systems and corporate privilege throughout Europe were abolished in response to the French Revolution. From 1789 to the 1850s, most supporters of corporatism were reactionaries. A number of reactionary corporatists favoured corporatism in order to end liberal capitalism and to restore the feudal system. Countering the reactionaries were the ideas of Henri de Saint-Simon (1760–1825), whose proposed "industrial class" would have had the representatives of various economic groups sit in the political chambers, in contrast to the popular representation of liberal democracy.
Progressive corporatism
From the 1850s onward, progressive corporatism developed in response to classical liberalism and to Marxism. Progressive corporatists supported providing group rights to members of the middle classes and working classes in order to secure co-operation among the classes. This was in opposition to the Marxist conception of class conflict. By the 1870s and 1880s, corporatism experienced a revival in Europe with the formation of workers' unions that were committed to negotiations with employers.
In his 1887 work Gemeinschaft und Gesellschaft ("Community and Society"), Ferdinand Tönnies began a major revival of corporatist philosophy associated with the development of neo-medievalism, increasing promotion of guild socialism and causing major changes to theoretical sociology. Tönnies claims that organic communities based upon clans, communes, families and professional groups are disrupted by the mechanical society of economic classes imposed by capitalism. The German Nazi Party used Tönnies' theory to promote their notion of Volksgemeinschaft ("people's community"). However, Tönnies opposed Nazism: he joined the Social Democratic Party of Germany in 1932 to oppose fascism in Germany and was deprived of his honorary professorship by Adolf Hitler in 1933.
Corporatism in the Roman Catholic Church
In 1881, Pope Leo XIII commissioned theologians and social thinkers to study corporatism and to provide a definition for it. In 1884 in Freiburg, the commission declared that corporatism was a "system of social organization that has at its base the grouping of men according to the community of their natural interests and social functions, and as true and proper organs of the state they direct and coordinate labor and capital in matters of common interest". Corporatism is related to the sociological concept of structural functionalism.
Corporatism's popularity increased in the late 19th century and a corporatist internationale was formed in 1890, followed by the 1891 publishing of Rerum novarum by the Catholic Church that for the first time declared the Church's blessing to trade unions and recommended that politicians recognize organized labour. Many corporatist unions in Europe were endorsed by the Catholic Church to challenge the anarchist, Marxist and other radical unions, with the corporatist unions being fairly conservative in comparison to their radical rivals. Some Catholic corporatist states include Austria under the 1932–1934 leadership of Federal Chancellor Engelbert Dollfuss and Ecuador under the leadership of García Moreno (1861–1865 and 1869–1875). The economic vision outlined in Rerum novarum and Quadragesimo anno (1931) also influenced the régime (1946–1955 and 1973–1974) of Juan Perón and Justicialism in Argentina and influenced the drafting of the 1937 Constitution of Ireland. In response to the Roman Catholic corporatism of the 1890s, Protestant corporatism developed, especially in Germany, the Netherlands and Scandinavia. However, Protestant corporatism has been much less successful in obtaining assistance from governments than its Roman Catholic counterpart.
Corporate solidarism
Sociologist Émile Durkheim (1858–1917) advocated a form of corporatism termed "solidarism", which called for creating an organic social solidarity of society through functional representation. Solidarism built on Durkheim's view that the dynamic of human society as a collective is distinct from the dynamic of an individual, in that society is what places upon individuals their cultural and social attributes.
Durkheim posited that solidarism would alter the division of labour by evolving it from mechanical solidarity to organic solidarity. He believed that the existing industrial capitalist division of labour caused "juridical and moral anomie", which had no norms or agreed procedures to resolve conflicts and resulted in chronic confrontation between employers and trade unions. Durkheim believed that this anomie caused social dislocation and felt that by this "it is the law of the strongest which rules, and there is inevitably a chronic state of war, latent or acute". As a result, Durkheim believed it is a moral obligation of the members of society to end this situation by creating a moral organic solidarity based upon professions as organized into a single public institution.
Corporate solidarism is a form of corporatism that advocates creating solidarity, instead of collectivism, in society through functional representation, holding that it is up to the people to end the chronic confrontation between employers and labor unions by creating a single public institution. Solidarism rejects a materialistic approach to social, economic, and political problems, while also rejecting class conflict. Like other forms of corporatism, it embraces tripartism as its economic system.
Liberal corporatism
John Stuart Mill argued that corporatist-like economic associations needed to "predominate" in society in order to create equality for labourers and to give them influence with management through economic democracy. Unlike some other types of corporatism, liberal corporatism does not reject capitalism or individualism, but believes that capitalist companies are social institutions that should require their managers to do more than maximize net income, by recognizing the needs of their employees.
This liberal corporatist ethic is similar to Taylorism but endorses democratization of capitalist companies. Liberal corporatists believe that inclusion of all members in the election of management in effect reconciles "ethics and efficiency, freedom and order, liberty and rationality".
Liberal corporatism began to gain disciples in the United States during the late 19th century. Economic liberal corporatism involving capital-labour cooperation was influential in Fordism. Liberal corporatism has also been an influential component of the liberalism in the United States that has been referred to as "interest group liberalism".
Fascist corporatism
A fascist corporation can be defined as a governmental entity incorporating workers' and employers' syndicates affiliated with the same profession and sector, with the aim of overseeing production in a comprehensive manner. Theoretically, each corporation within this structure assumes the responsibility of advocating for the interests of its respective profession, particularly through the negotiation of labor agreements and similar measures. Fascists theorized that this method could result in harmony amongst social classes.
In Italy, from 1922 until 1943, corporatism became influential amongst Italian nationalists led by Benito Mussolini. The 1920 Charter of Carnaro gained much popularity as the prototype of a "corporative state", displaying within its tenets a guild system that combined the concepts of autonomy and authority in a special synthesis. Alfredo Rocco spoke of a corporative state and set out corporatist ideology in detail; Rocco would later become a member of the Italian fascist régime.
Subsequently, the Labour Charter of 1927 was implemented, thus establishing a collective agreement system between employers and employees, becoming the main form of class collaboration in the fascist government.
Italian Fascism involved a corporatist political system in which the economy was collectively managed by employers, workers and state officials through formal mechanisms at the national level. Its supporters claimed that corporatism could better recognize or "incorporate" every divergent interest into the state organically, unlike majority-rules democracy, which (they said) could marginalize specific interests. This total consideration was the inspiration for their use of the term "totalitarian", which the 1932 Doctrine of Fascism described without the coercion connoted by the modern meaning of the word.
A popular slogan of the Italian Fascists under Mussolini was "Tutto nello Stato, niente al di fuori dello Stato, nulla contro lo Stato" ("everything within the state, nothing outside the state, nothing against the state").
Within the corporative model of Italian fascism each corporate interest was supposed to be resolved and incorporated under the state. Much of the corporatist influence upon Italian fascism was partly due to the Fascists' attempts to gain endorsement by the Roman Catholic Church, which itself sponsored corporatism. However, the Roman Catholic Church's corporatism favored a bottom-up corporatism, whereby groups such as families and professional groups would voluntarily work together, whereas fascist corporatism was a top-down model of state control managed primarily by government officials.
The fascist state corporatism of Roman Catholic Italy influenced the governments and economies — not only of other Roman Catholic-majority countries, such as the governments of Engelbert Dollfuss in Austria, António de Oliveira Salazar in Portugal and Getúlio Vargas in Brazil — but also of Konstantin Päts and Kārlis Ulmanis in non-Catholic Estonia and Latvia.
Fascists in non-Catholic countries also supported Italian Fascist corporatism, including Oswald Mosley of the British Union of Fascists, who commended corporatism and said that "it means a nation organized as the human body, with each organ performing its individual function but working in harmony with the whole". Mosley also regarded corporatism as an attack on laissez-faire economics and "international finance".
The corporatist state of Portugal had similarities to Benito Mussolini's Italian fascist corporatism, but also differences in its moral approach to governing. Although Salazar admired Mussolini and was influenced by his Labour Charter of 1927, he distanced himself from fascist dictatorship, which he considered a pagan Caesarist political system that recognised neither legal nor moral limits. Salazar also had a strong dislike of Marxism and liberalism.
In 1933, Salazar stated: "Our Dictatorship clearly resembles a fascist dictatorship in the reinforcement of authority, in the war declared against certain principles of democracy, in its accentuated nationalist character, in its preoccupation of social order. However, it differs from it in its process of renovation. The fascist dictatorship tends towards a pagan Caesarism, towards a state that knows no limits of a legal or moral order, which marches towards its goal without meeting complications or obstacles. The Portuguese New State, on the contrary, cannot avoid, nor think of avoiding, certain limits of a moral order which it may consider indispensable to maintain in favour of its reforming action".
The Patriotic People's Movement (IKL) in Finland envisioned a system with elements of direct democracy and a professional parliament. The president would be elected by direct vote and would then appoint the government from among professionals in their respective fields. All parties would be banned, and members of parliament would be elected from corporate groups representing different sectors: agriculture, industry, public servants, the free trades, and so on. Every law passed in the parliament would be either ratified or overturned by referendum.
Neo-corporatism
During the post-World War II reconstruction period in Europe, corporatism was favored by Christian democrats (often under the influence of Catholic social teaching), national conservatives and social democrats in opposition to liberal capitalism. This type of corporatism became unfashionable but revived again in the 1960s and 1970s as "neo-corporatism" in response to the new economic threat of recession-inflation.
Neo-corporatism is a democratic form of corporatism which favors economic tripartism, which involves strong labour unions, employers' associations and governments that cooperate as "social partners" to negotiate and manage a national economy. Social corporatist systems instituted in Europe after World War II include the ordoliberal system of the social market economy in Germany, the social partnership in Ireland, the polder model in the Netherlands (although arguably the polder model was already present at the end of World War I, it was not until after World War II that a social-service system gained a foothold there), the concertation system in Italy, the Rhine model in Switzerland and the Benelux countries, and the Nordic model in the Nordic countries.
Attempts in the United States to create neo-corporatist capital-labor arrangements were unsuccessfully advocated by Gary Hart and Michael Dukakis in the 1980s. As secretary of labor during the Clinton administration, Robert Reich promoted neo-corporatist reforms.
Contemporary examples by country
China
Jonathan Unger and Anita Chan in their essay "China, Corporatism, and the East Asian Model" describe Chinese corporatism as follows: [A]t the national level the state recognizes one and only one organization (say, a national labour union, a business association, a farmers' association) as the sole representative of the sectoral interests of the individuals, enterprises or institutions that comprise that organization's assigned constituency. The state determines which organizations will be recognized as legitimate and forms an unequal partnership of sorts with such organizations. The associations sometimes even get channelled into the policy-making processes and often help implement state policy on the government's behalf.
By establishing itself as the arbiter of legitimacy and assigning responsibility for a particular constituency with one sole organization, the state limits the number of players with which it must negotiate its policies and co-opts their leadership into policing their own members. This arrangement is not limited to economic organizations such as business groups and social organizations.
The political scientist Jean C. Oi coined the term "local state corporatism" to describe China's distinctive type of state-led growth, in which a communist party-state with Leninist roots commits itself to policies which are friendly to the market and to growth.
The use of corporatism as a framework to understand the central state's behaviour in China has been criticized by authors such as Bruce Gilley and William Hurst.
Hong Kong and Macau
In the two special administrative regions, some legislators are chosen either by functional constituencies (in the Legislative Council of Hong Kong), where the voters are a mix of individuals, associations, and corporations, or by indirect election (in the Legislative Assembly of Macau), where a single association is designated to appoint legislators.
Ireland
Most members of the Seanad Éireann, the upper house of the Oireachtas (parliament) of Ireland, are elected as part of vocational panels nominated partly by current Oireachtas members and partly by vocational and special interest associations. The Seanad also includes two university constituencies.
The Constitution of Ireland of 1937 was influenced by Roman Catholic Corporatism as expressed in the papal encyclical, Quadragesimo anno (1931).
The Netherlands
Under the Dutch Polder Model, the Social and Economic Council of the Netherlands (Sociaal-Economische Raad, SER) was established by the 1950 Industrial Organisation Act (Wet op de bedrijfsorganisatie). It is led by representatives of unions, employer organizations, and government appointed experts. It advises the government and has administrative and regulatory power. It oversees Sectoral Organisation Under Public Law (Publiekrechtelijke Bedrijfsorganisatie, PBO) which are similarly organized by union and industry representatives, but for specific industries or commodities.
Slovenia
The Slovene National Council, the upper house of the Slovene Parliament, has 18 members elected on a corporatist basis.
Western Europe
Generally supported by nationalist and/or social-democratic political parties, social corporatism developed in the post-World War II period, influenced by Christian democrats and social democrats in Western European countries such as Austria, Germany, the Netherlands, Denmark, Finland, Norway and Sweden. Social corporatism has also been adopted in different configurations and to varying degrees in various Western European countries.
The Nordic countries have the most comprehensive form of collective bargaining, where trade unions are represented at the national level by official organizations alongside employers' associations. Together with the welfare state policies of these countries, this forms what is termed the Nordic model. Less extensive models exist in Austria and Germany which are components of Rhine capitalism.
See also
Class collaboration
Co-determination
Conflict theories
Corporate nationalism
Corporate statism
Cooperative
Distributism
Fascism
Gemeinschaft and Gesellschaft
Gremialismo
Guild
Guild socialism
Holacracy
Managerialism
Mutualism (movement)
Integralism
National syndicalism
Paritarian Institutions
Pillarisation
Solidarism (disambiguation)
Third Position
Proprietary corporation
Notes
References
Black, Antony (1984). Guilds and Civil Society in European Political Thought from the Twelfth Century to the Present. Cambridge, United Kingdom: Cambridge University Press.
Further reading
Acocella, N. and Di Bartolomeo, G. (2007). "Is corporatism feasible?", Metroeconomica, 58(2): 340–59.
Jones, Eric. 2008. Economic Adjustment and Political Transformation in Small States. Oxford University Press.
Jones, R. J. Barry. Routledge Encyclopedia of International Political Economy: Entries A–F. Taylor & Francis, 2001.
Schmitter, P. (1974). "Still the Century of Corporatism?" The Review of Politics, 36(1), 85–131.
Parla, Taha and Andrew Davison. Corporatist Ideology in Kemalist Turkey: Progress or Order? Syracuse University Press, 2004.
On Italian corporatism
Constitution of Fiume
Rerum novarum: encyclical of Pope Leo XIII on capital and labor
Quadragesimo Anno: encyclical of Pope Pius XI on reconstruction of the social order
On fascist corporatism and its ramifications
Baker, David, "The political economy of fascism: Myth or reality, or myth and reality?", New Political Economy, Volume 11, Issue 2 June 2006, pages 227–250.
Marra, Realino, "Aspetti dell'esperienza corporativa nel periodo fascista", Annali della Facoltà di Giurisprudenza di Genova, XXIV-1.2, 1991–92, pages 366–79.
There is an essay on "The Doctrine of Fascism" credited to Benito Mussolini that appeared in the 1932 edition of the Enciclopedia Italiana; excerpts and the complete text are available under Doctrine of Fascism.
Mussolini, Benito. My Rise and Fall, Volumes 1–2. Two autobiographies of Mussolini, edited by Richard Washburn Child, Max Ascoli, and Richard Lamb. Da Capo Press, 1998.
Mussolini, Benito. My Autobiography. Charles Scribner's Sons, 1928. The 1928 autobiography of Benito Mussolini, available online.
On neo-corporatism
Katzenstein, Peter. Small States in World Markets: Industrial Policy in Europe. Ithaca: Cornell University Press, 1985.
Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Harvard University Press, 1965, 1971.
Schmitter, P. C. and Lehmbruch, G. (eds.). Trends toward Corporatist Intermediation. London, 1979.
Rodrigues, Lucia Lima. "Corporatism, liberalism and the accounting profession in Portugal since 1755." Accounting Historians Journal, June 2003.
External links
Encyclopedias
Corporatism – Encyclopædia Britannica
Corporatism – The Canadian Encyclopedia
Articles
Professor Thayer Watkins, The economic system of corporatism, San Jose State University, Department of Economics.
Chip Berlet, "Mussolini on the Corporate State", 2005, Political Research Associates.
"Economic Fascism" by Thomas J. DiLorenzo, The Freeman, Vol. 44, No. 6, June 1994, Foundation for Economic Education; Irvington-on-Hudson, New York.
Gender and religion | Gender, defined as the range of characteristics pertaining to, and differentiating between, masculinity and femininity, and religion, a system of beliefs and practices followed by a community, share a multifaceted relationship that influences both individual and collective identities. The manner in which individuals express and experience their religious convictions is profoundly shaped by gender. Experts from diverse disciplines such as theology, sociology, anthropology, and gender studies have delved into the effects of gender on religious politics and societal standards. At times, the interplay between gender and religion can confine gender roles, but in other instances, it can empower and uphold them. Such insights shed light on the ways religious doctrines and rituals can simultaneously uphold specific gender expectations and offer avenues for gender expression.
Investigating the relationship between gender and religion entails evaluating sacred texts as well as religious institutions' practices. This investigation is part of a greater interest in the phenomenon of religion and is strongly tied to the larger study of gender and sexuality. Scholars can better comprehend the complex dynamics of gender within religious contexts by researching how societies and cultures develop gender roles and identities, as well as how gender connects with other societal and cultural categories.
Sex differences in religion can be classified as either "internal" or "external". Internal religious issues are studied from the perspective of a given religion, and might include religious beliefs and practices about the roles and rights of men and women in government, education and worship; beliefs about the sex or gender of deities and religious figures; and beliefs about the origin and meaning of human gender. External religious issues can be broadly defined as an examination of a given religion from an outsider's perspective, including possible clashes between religious leaders and laity; and the influence of, and differences between, religious perspectives on social issues.
Gender of deities
The earliest documented religions, and some contemporary animist religions, involve the deification of characteristics of the natural world. These spirits are typically, but not always, gendered. It has been proposed, since the 19th century, that polytheism arose out of animism, as religious epic provided personalities to autochthonous animist spirits in various parts of the world, notably in the development of ancient near eastern and Indo-European literature. Polytheistic gods are also typically gendered. The earliest evidence of monotheism is the worship of the goddess Eurynome, Aten in Egypt, the teaching of Moses in the Torah and Zoroastrianism in Persia. Aten, Yahweh and Ahura Mazda are all masculine deities, embodied only in metaphor, so masculine rather than reproductively male.
Hinduism
Kali, the Hindu goddess of both the life cycle and destructive war, breaks the gender role of women as representing love, sex, fertility, and beauty. This Hindu deity reflects the modern view of feminism through her depiction of female strength.
Christianity
In Christianity, one entity of the Trinity, the Son, is believed to have become incarnate as a human male. Christians have traditionally believed that God the Father has masculine gender rather than male sex because the Father has never been incarnated. By contrast, there is less historical consensus on the gender of the Holy Spirit.
In Christianity, the gender of God is referenced several times in the King James Version of the Bible. One point of reference for God being male is found in the Gospel of John, when Jesus Christ says to Mary Magdalene, "Touch me not; for I am not yet ascended to my Father: but go to my brethren, and say unto them, I ascend unto my Father, and your Father; and to my God, and your God."
Islam
In Islam, God is not gendered literally or metaphorically. God is referred to with the masculine pronoun in Arabic [Huwa or 'He'], as there is no neuter in the Arabic language. Ascribing natural gender to God is considered heretical because God is described as incomparable to creation. The Quran says: "There is nothing like Him, and He ˹alone˺ is the All-Hearing, All-Seeing."
In contrast to Christian theology, Jesus is viewed as a prophet rather than a human male incarnation of God, and the primary sources of Islam [the Quran and Sunnah] do not refer to God as the 'Father'.
Creation myths about human gender
In many stories, man and woman are created at the same time, with equal standing. One example is the creation story in Genesis 1: "And God created the man in his image. In the image of God he created him. Male and female he created them." Some commentators interpret the parallelism to be deliberately stressing that mankind is, in some sense, a "unity in diversity" from a divine perspective (compare e pluribus unum), and that women as well as men are included in God's image. The first man, Adam, has been viewed as a spiritual being or an ideal who can be distinguished as both male and female; an androgynous being with no sex. Pierre Chaunu argues that Genesis' gender-inclusive conception of humanity contrasts sharply with the views of gender found in older literature from surrounding cultures, and suggests a higher status of women in western society due to Judæo-Christian influence, and based on this verse. Some scholars, such as Philo, argue that the "sexes" were developed through an accidental division of the "true self" which existed prior to being assigned with gender.
In other accounts, man is created first, followed by woman. This is the case in the creation account of Genesis 2, where the first woman (Eve) is created from the rib of the first man (Adam), as a companion and helper. In this story, God creates two genders, with woman made from man. God also tells the woman that her desire will be for the man and that he shall rule over her. This is an early idea in the Catholic tradition that women should be attracted and loyal only to men, supporting the claim that woman was made for man. These two gender creation stories imagine the ideal of the unitary self. However, the unitary self is either androgynous or physically male, both of which are masculine in configuration.
In Plato's Symposium, Aristophanes provides an account to explain gender and romantic attraction.
There were originally three sexes: the all-male, the all-female, and the "androgynous", who was half man, half woman. As punishment for attacking the gods, each was split in half. The halves of the androgynous being became heterosexual men and women, while the halves of the all-male and all-female became gay men and lesbians, respectively.
Gendered clothing in religion
Among many religions, there are traditional clothing standards unique to each culture. One aspect many religions have in common is an emphasis on modesty among women. There can be many reasons why women specifically are taught to cover up or to dress according to a standard, but most reasons are sourced through official religious publications.
God said, in the book of Deuteronomy, "A woman shall not wear a man's garment, nor shall a man put on a woman's cloak, for whoever does these things is an abomination to the Lord your God." The book intends to set a specific idea of what men and women should, and should not, wear based on their gender, lest they disappoint the Lord. In religion, this is a way to clearly show a gender divide through the idea that individuals should wear only the clothing specific to their gender in order to meet the religion's standards. God's response to the idea of his creations cross-dressing has influenced negative opinions towards transgender individuals.
Leadership roles
Some religions restrict leadership to males. The ordination of women has been a controversial issue in some religions where either the rite of ordination, or the role that an ordained person fulfills, has traditionally been restricted to men because of cultural or theological prohibitions.
The journey to religious leadership presents distinct challenges for women across various faith traditions. Historically, many religious institutions have been dominated by male leadership, and this has influenced the roles and opportunities available to women. For instance, women are not permitted to become priests in the Catholic Church, and in Orthodox Judaism, the role of a rabbi is traditionally reserved for men. While Islam does permit women to serve as religious leaders, they are typically not allowed to lead mixed-gender prayers. In Hinduism, some sects or regions allow women to become priests, though this remains a rarity.
While many significant religious organizations in the U.S. ordain women and permit them to hold leadership positions, few women have served at the highest levels. For instance, the Episcopal Church had a woman, Katharine Jefferts Schori, serving as presiding bishop from 2006 to 2015. However, many of the largest denominations in the U.S., such as the Roman Catholic Church and the Southern Baptist Convention, do not ordain women or allow them to hold top church leadership roles. Efforts have been made in recent decades to challenge these norms. For example, while roughly six-in-ten American Catholics (59%) in a 2015 Pew Research Center survey expressed support for ordaining women in their church, the official stance remains unchanged.
This underrepresentation underscores the broader societal challenges women face in asserting their leadership in traditionally male-dominated spheres. As the discourse around gender equality continues to evolve, it is crucial to understand and address the systemic barriers that women encounter in religious leadership.
Beginning in the 19th century, some Christian denominations have ordained women. Among those who do not, many believe the ordination of women is forbidden by certain New Testament passages. Some of those denominations ordain women to the diaconate, believing this is encouraged by other passages.
Some Islamic communities (mainly outside the Middle East) have recently appointed women as imams, normally with ministries restricted to leading women in prayer and other charitable ministries.
Indian religions
Both masculine and feminine deities feature prominently in Hinduism. The identity of the Vedic writers is not known, but the first hymn of the Rigveda is addressed to the masculine deity Agni, and the pantheon of the Vedas is dominated by masculine gods. The most prominent Avatars of Vishnu are men.
The traditional religious leaders of Jainism are mostly men. The 19th tirthankara (traditional leader) in this half cycle, Māllīnātha, was female.
Siddhartha Gautama (the Buddha) was a man, but the female Buddha Vajrayogini also plays a role in Buddhism. In some East Asian Buddhist communities, a number of women are ordained as monks as well.
Abrahamic religions
In Abrahamic religions, Abraham himself, Moses, David and Elijah are among the most significant leaders documented according to the traditions of the Hebrew Bible. John the Baptist, Jesus and his apostles, and Saul of Tarsus again give the New Testament an impression of the founders and key figures of Christianity being male dominated. They were followed by a millennium of theologians known as the Church Fathers. Islam was founded by Muhammad, and his successors (Abu Bakr, Umar, Uthman ibn Affan and Ali for Sunnis; Ali ibn Abi Talib and the Twelve Imams for those of the Shia faith) were also men. On the other hand, the Virgin Mary, the mother of Jesus of Nazareth, is not associated with leadership or teaching, but is nonetheless a key figure in Catholicism. Fatimah, daughter of Muhammad, is regarded by Muslims as an exemplar for men and women.
The Baháʼí Faith, a fast growing religion, teaches that men and women are equal. Prominent women celebrated in Baháʼí history include Bahíyyih Khánum, who acted as head of the faith for several periods during the ministries of `Abdu'l-Bahá and Shoghi Effendi, and Táhirih, who is also held by Baháʼís as a penultimate leader. Women serve in higher percentages of leadership in appointed and elected national and international institutions of the religion than in the general population. However, only men are allowed to be members of the religion's highest governing body, the Universal House of Justice.
Nakayama Miki was the founder of Tenrikyo, which may be the largest religion to have a woman founder. Ellen G. White was instrumental to the founding of the Seventh-day Adventist Church and is officially considered a prophet by Seventh-day Adventists.
Segregation
Many religions have traditionally practiced sex segregation.
In traditional Jewish synagogues, the women's section is separated from the men's section by a wall or curtain called a mechitza. Men are not permitted to pray in the presence of women, to prevent distraction. In synagogues affiliated with the 'left wing' (more modern side) of Modern Orthodox Judaism, the mechitza is required only to reach a certain minimum height. More traditional or 'right wing' Modern Orthodox Judaism, and all forms of Haredi Judaism, require the mechitza to be of a type which absolutely prevents the men from seeing the women.
Enclosed religious orders are usually segregated by gender.
Sex segregation in Islam includes guidelines on interaction between men and women. Men and women also worship in different areas in most mosques. Both men and women cover their awra when in the presence of members of the opposite sex (who are not close relations).
Roles in marriage
Nearly all religions recognize marriage, and many religions also promote views on appropriate gender roles within marriage.
Christianity
Within Christianity, the two notable views on gender roles in a marriage are complementarianism and egalitarianism. The complementarian view of marriage is widely accepted in Christianity, where the husband is viewed as the leader and the wife is viewed as the follower. Essentially, the man is given more of a headship role and the woman is viewed as a supporting partner. In Genesis 3, Adam named his wife Eve ("life") because she "was the mother of all living" (Genesis 3:20).
In mainstream Christian tradition, the relationship between a husband and wife is believed to mirror the relationship between Christ and the Church. This can be seen in Ephesians 5:25: "Husbands, love your wives, just as Christ loved the church and gave himself up for her." Christian traditionalists believe that men are meant to be living martyrs for their wives, "giving himself up for her" daily and through acts of unselfish love. The women, on the other hand, are meant to be their helpers.
While complementarianism has been the norm for years, some Christians have moved toward egalitarian views. As the nature of gender roles within societies changes, religious views on gender roles in marriage change as well.
In complementarianism, the relationship between man and woman is compared to the one between Christ and the Church. In the way Christ loved and cared for the Church, a man is expected to do the same for his wife. Brown says that this is what makes a marriage a holy union, rather than a simple contract between the two.
According to Paul, husbands and wives had rights that they could expect from each other. After the death of her husband, a woman was expected not to marry again because, on the day of resurrection, no one would 'claim' her as a wife.
Marriage, according to Augustine, is a second resort for those unable to remain celibate and a virgin. The role of virginity is one that heavily impacts marriage in general according to Augustine. Virginity and celibacy are extremely important ideals that Christians must carry, but they can marry if they cannot uphold them. Within a marriage, women were considered more likely to be unfaithful, which left the husband with the task of ensuring that the wife remained faithful. Augustine states that although men have dominance over women, they must exercise it with compassion and love.
In relation to Augustine's views on marriage and virginity, some women preferred the celibate lifestyle in order to gain freedom from male control. Dennis R. Macdonald estimated that groups of widows and unmarried virgins had an impact on the patriarchal society. Researchers believe that, from a sex and gender view, remaining a virgin was a form of rebellion against male domination. The most direct way for men to dominate women was through marriage, and in remaining celibate, these women did not have that domination over them.
Islam
In Islam, a woman's primary responsibility is usually interpreted as fulfilling her role as a wife and mother, whereas women still have the right and are free to work. A man's role is to work and be able to protect and financially support his wife and family.
In regards to guidelines in marriage, a man is allowed to marry a Muslim, Jewish, Sabaean, or Christian woman, whereas a woman is only allowed to marry a Muslim man. Neither gender may marry nonbelievers or polytheists.
The matter of divorce is discussed in verse 2:228 of the Quran. The Quran instructs women to wait at least three menstrual periods, called Iddah, before committing to a second marriage. The purpose of the Iddah is to ensure that a woman's pregnancy will be linked to the correct biological father. In the case of a Talāq, which is a divorce initiated by the man, the man is supposed to announce the words "I divorce you" aloud three times, each separated by a three-month waiting period. Certain practices of the Talāq divorce allow the "I divorce you" utterance to be completed in one sitting; however, the concept of "Triple Divorce" in one sitting is considered wrong in some branches of Islam, such as with the Shia Muslims. During the three-month waiting period, only the man has the right to initiate a marital reunion if both sides desire to reconcile. This yields a gender equality perspective in the sense that women have power over men in regards to finance, parallel to how men have power over women in regards to obedience, both of which are only valid to a reasonable extent. While a Talāq can be completed easily, a divorce that is initiated by the woman, called a Khula, is harder to obtain due to a woman's requirement to repay her dowry and give up child custody. More specifically, a woman is to give up custody of her child if the child is over the age of seven. If a woman gains custody of her child who is under the age of seven, she must still forfeit custody upon the child's seventh birthday. Although the Islamic religion requires the woman to repay her dowry, she is also entitled to receive financial support from her former husband if needed. This cycle of financial matters protects the woman's property from being taken advantage of during or after marriage.
Gender and religious expressions
Gender plays a notable role in patterns of religious conversion. According to the Pew Research Center, an estimated 83.4% of women worldwide identify with a faith group, compared to 79.9% of men. While specific conversion trends, such as women's inclination towards Christianity or men's propensity for conversion in Islam, can vary, it is essential to approach these patterns with an understanding that individual choices are influenced by a myriad of personal, cultural, and societal factors.
Cultural effects on religious practice
Religious worship may vary by individual due to differing cultural experiences of gender.
Greco-Roman Paganism
Both men and women who practiced paganism in ancient civilizations such as Rome and Greece worshiped male and female deities, but goddesses played a much larger role in women's lives. Roman and Greek goddesses' domains often aligned with culturally specific gender expectations at the time, which served to perpetuate them in many cases. One such expectation of women was to marry at a relatively young age. The quadrennial Bear Festival, known as Arkteia, was held on the outskirts of Athens in honor of Artemis and involved girls ages seven to fourteen. The girls would compete in public athletic events as Greek men sat as onlookers, observing potential wives.
Demeter, the goddess of fertility, was a prominent deity due to women's ability to relate to her. The myth surrounding Demeter involves her losing her daughter, Persephone, against her will to Hades and the grief she experiences after the event. Mother-daughter relationships were very important to ancient Greeks. The severance of this relationship by fathers and husbands created much strain in young women who were forced to leave their mothers, submit to their husbands, and yield to the patriarchal society. Demeter was honored through female-exclusive ceremonies in various rituals due to her general disdain for the behaviors of men. Aphrodite, too, was honored by similar means. To women during this time period, the thought of Aphrodite's attitude toward males was comforting as she refused to answer to any mortal man, exhibiting the control that mortal women desired to have in their own lives.
Religious teachings on gender-related issues
Abortion
Whether or not women choose to have an abortion is one of many gender-related issues among different religions. In many religions, abortion is considered immoral.
The Catholic Church recognizes conception as the beginning of a human life, thus abortion is prohibited under all circumstances. However, according to the Second Vatican Council, women who have had an abortion but are willing to commit to the right to life are assured forgiveness.
The Catholic Church has vigorously fought the legalization of abortion and has expressed its position on the issue, taking a strictly pro-life stance through protests, presentations, and proposals. It declines to take a pluralistic stance that weighs the value of life against the value of liberty. Critics argue that, in the Church's media and political advocacy, theological problems in the area of individual conscience confuse the right to religious freedom: conscience is something only an individual can possess, and yet the Church thinks less of the people who choose abortion.
Regular church attendance has been shown to correlate with stronger opposition to abortion, suggesting that most church-goers are pro-life and believe that life begins at conception.
The pastoral message also has to be observed, as each member of a church can interpret a message differently. The context of the church has to be considered as well, such as whether it is in an urban or rural environment. Religious messages, and how they are presented in different cultural contexts, can determine the effect they have on listeners. Women in particular, who are more inclined to be religious, tend to be more passionate about the idea of not getting an abortion.
In Hinduism, it is a woman's human duty to produce offspring, thus having an abortion is a violation of that duty. The Vedas, the age-old sacred Sanskrit texts, suggest that abortion is more sinful than killing a priest or one's own parents. The practice of a woman having an abortion is deemed unacceptable in the Hindu community, both socially and morally.
On the other hand, some religions recognize that abortion is acceptable only in some circumstances. Mormons believe the act of having an abortion is troublesome and destructive; however, health risks and complications, rape, and incest are the only situations in which abortion is not considered a sin.
Homosexuality
Homosexuality is expressly forbidden in many religions, but typically in casuistic rather than apodictic laws. As such, the rationale for such proscriptions is not clearly evident, though avoidance of procreation and contribution to society via establishing families are sometimes offered as pragmatic considerations.
In general, homosexuality is perceived as sinful in conservative movements, while fully accepted in liberal movements. For example, the Southern Baptist Christian denomination and Islamic community consider homosexuality a sin, whereas the American Baptist denomination perceives homosexuality on an inclusive scale.
Transgender identities
Paganism and Neo-paganism
Many Pagan religions place an emphasis on female divine energy, which is manifested as the Goddess. The consensus is unclear on what is considered female and male. During PantheaCon in 2011, a group of Dianic Wiccans performing an all-female ritual turned away trans women from joining, due to their concept of women as those capable of experiencing menstruation and childbirth.
Other pagans, however, have embraced a multitude of gender identities by worshiping transgender, intersex, and queer gods from antiquity, such as the Greek god Hermaphroditus.
Religious support for gender equality
Some religions, religious scholars and religious persons have argued that "gender inequality" exists either generally or in certain instances, and have supported a variety of remedies.
Discrimination based on gender and religion is frequently the result of laws and practices that are justified by religious beliefs. Certain religions, for example, forbid women from acting as clergy. The priesthood is reserved for men in the Catholic Church. While men and women pray separately in Islam, women frequently have restricted room in mosques. Such traditions demonstrate the complicated interplay of prejudice at the intersection of gender and religion.
Sikhs believe in equality of men and women. Gender equality in Sikhism is exemplified by the following quote from Sikh holy scriptures: "From woman, man is born; within woman, man is conceived; to a woman he is engaged and married. Woman becomes his friend; through woman, the future generations come. When his woman dies, he seeks another woman; to woman he is bound. So why call her bad from whom kings are born. From woman, woman is born; without woman, there would be no one at all." – Translated into English from Gurmukhi, Siri Guru Nanak Sahib in Raag Aasaa, Siri Guru Granth Sahib, p. 473
Pierre Chaunu has argued that the influence of Christianity promotes equality for women.
Priyamvada Gopal, of Churchill College, Cambridge, argues that increased gender equality is indeed a product of Judeo-Christian doctrine, but not exclusive to it. She expresses concern that gender equality is used by western countries as a rationale for "neocolonialism". Jamaine Abidogun argues that Judeo-Christian influence has indeed shaped gender roles in Nigeria (a strongly Christianised country); however, she does not consider feminism to be a product of Judeo-Christian doctrine, but rather a preferable form of "neocolonialism".
Gender patterns in religious observance
In studies pertaining to gender patterns in religions, it has been widely accepted that females are more likely to be religious than males. In 1997, drawing on the statistics they had gathered, Beit-Hallahmi and Argyle proposed three primary explanations for this phenomenon. The first explanation is that women feel emotions at greater heights than men do, thus women tend to turn to religion more in times of high emotions such as gratitude or guilt. The second explanation is that female socialization is more likely to align with values that are commonly found in religion such as conflict mediation, tenderness, and humility. In contrast, male socialization is more likely to emphasize rebellion, thus making the guideline aspects of religion less appealing. The third explanation, which is also the most recent theory, is that females are more likely to be able to identify with religion as a natural consequence of societal structures. For example, since a majority of religions emphasize women as caretakers of the home, the societal expectation of women to take greater responsibility than men for the upbringing of a child makes religion an appealing commitment. Another example is that traditionally, men tend to work outside the home whereas women tend to work inside the home, which corresponds to studies that have shown that people are more likely to be religious when working inside their homes.
The Pew Research Center studied the effects of gender on religiosity throughout the world, finding that women are generally more religious than men, yet the gender gap is greater for Christians than Muslims.
Specific religions
More information on the role of gender in specific religions can be found on the following pages:
Baháʼí Faith – Baháʼí Faith and gender equality
Buddhism – Women in Buddhism
Christianity – Women in Christianity
Mormonism – Women and Mormonism
Hinduism – Women in Hinduism
Islam – Women in Islam
Judaism – Gender and Judaism, Women in Judaism
Sikhism – Women in Sikhism
See also
Thealogy (feminine divine)
Religion and sexuality
Sex segregation
Transgender people and religion
Anti-gender movement
References
External links
Is Christianity an Inherently Feminine Religion? by Brett and Kate McKay
Women in the Bible by the Atheist Foundation of Australia Inc.
Feminism and spirituality | 0.778381 | 0.982565 | 0.76481 |
Palearctic realm | The Palearctic or Palaearctic is the largest of the eight biogeographic realms of the Earth. It stretches across all of Eurasia north of the foothills of the Himalayas, and North Africa.
The realm consists of several bioregions: the Euro-Siberian region; the Mediterranean Basin; North Africa; North Arabia; and Western, Central and East Asia. The Palaearctic realm also has numerous rivers and lakes, forming several freshwater ecoregions.
The term 'Palearctic' was first used in the 19th century, and is still in use as the basis for zoogeographic classification.
History
In an 1858 paper for the Proceedings of the Linnean Society, British zoologist Philip Sclater first identified six terrestrial zoogeographic realms of the world: Palaearctic, Aethiopian/Afrotropic, Indian/Indomalayan, Australasian, Nearctic, and Neotropical. The six indicated general groupings of fauna, based on shared biogeography and large-scale geographic barriers to migration.
Alfred Wallace adopted Sclater's scheme for his book The Geographical Distribution of Animals, published in 1876. This is the same scheme that persists today, with relatively minor revisions, and the addition of two more realms: Oceania and the Antarctic realm.
Major ecological regions
The Palearctic realm includes mostly boreal/subarctic-climate and temperate-climate ecoregions, which run across Eurasia from western Europe to the Bering Sea.
Euro-Siberian region
The boreal and temperate Euro-Siberian region is the Palearctic's largest biogeographic region, which transitions from tundra in the northern reaches of Russia and Scandinavia to the vast taiga, the boreal coniferous forests which run across the continent. South of the taiga are a belt of temperate broadleaf and mixed forests and temperate coniferous forests. This vast Euro-Siberian region is characterized by many shared plant and animal species, and has many affinities with the temperate and boreal regions of the Nearctic realm of North America. Eurasia and North America were often connected by the Bering land bridge, and have very similar mammal and bird fauna, with many Eurasian species having moved into North America, and fewer North American species having moved into Eurasia. Many zoologists consider the Palearctic and Nearctic to be a single Holarctic realm. The Palearctic and Nearctic also share many plant species, which botanists call the Arcto-Tertiary Geoflora.
Mediterranean Basin
The lands bordering the Mediterranean Sea in southern Europe, north Africa, and western Asia are home to the Mediterranean Basin ecoregions, which together constitute the largest and most diverse mediterranean-climate region in the world, with generally mild, rainy winters and hot, dry summers. The Mediterranean basin's mosaic of Mediterranean forests, woodlands, and scrub are home to 13,000 endemic species. The Mediterranean basin is also one of the world's most endangered biogeographic regions; only 4% of the region's original vegetation remains, and human activities, including overgrazing, deforestation, and conversion of lands for pasture, agriculture, and urbanization, have degraded much of the region. Formerly the region was mostly covered with forests and woodlands, but heavy human use has reduced much of the region to the sclerophyll shrublands known as chaparral, matorral, maquis, or garrigue. Conservation International has designated the Mediterranean basin as one of the world's biodiversity hotspots.
Sahara and Arabian deserts
A great belt of deserts, including the Atlantic coastal desert, Sahara desert, and Arabian desert, separates the Palearctic and Afrotropic ecoregions. This scheme includes these desert ecoregions in the Palearctic realm; other biogeographers identify the realm boundary as the transition zone between the desert ecoregions and the Mediterranean basin ecoregions to the north, which places the deserts in the Afrotropic, while others place the boundary through the middle of the desert.
Western and Central Asia
The Caucasus mountains, which run between the Black Sea and the Caspian Sea, are a particularly rich mix of coniferous, broadleaf, and mixed forests, and include the temperate rain forests of the Euxine-Colchic deciduous forests ecoregion.
Central Asia and the Iranian plateau are home to dry steppe grasslands and desert basins, with montane forests, woodlands, and grasslands in the region's high mountains and plateaux. In southern Asia the boundary of the Palearctic is largely altitudinal. The middle-altitude foothills of the Himalaya form the boundary between the Palearctic and Indomalaya ecoregions.
East Asia
China, Korea and Japan are more humid and temperate than adjacent Siberia and Central Asia, and are home to rich temperate coniferous, broadleaf, and mixed forests, which are now mostly limited to mountainous areas, as the densely populated lowlands and river basins have been converted to intensive agricultural and urban use. East Asia was not much affected by glaciation in the ice ages, and retained 96 percent of Pliocene tree genera, while Europe retained only 27 percent. In the subtropical region of southern China and the southern edge of the Himalayas, the Palearctic temperate forests transition to the subtropical and tropical forests of Indomalaya, creating a rich and diverse mix of plant and animal species. The mountains of southwest China are also designated as a biodiversity hotspot. In Southeastern Asia, high mountain ranges form tongues of Palearctic flora and fauna in northern Indochina and southern China. Isolated small outposts (sky islands) occur as far south as central Myanmar (on Nat Ma Taung), northernmost Vietnam (on Fan Si Pan) and the high mountains of Taiwan.
Freshwater
The realm contains several important freshwater ecoregions as well, including the heavily developed rivers of Europe, the rivers of Russia, which flow into the Arctic, Baltic, Black, and Caspian seas, Siberia's Lake Baikal, the oldest and deepest lake on the planet, and Japan's ancient Lake Biwa.
Flora and fauna
One bird family, the accentors (Prunellidae), is endemic to the Palearctic region. The Holarctic has four other endemic bird families: the divers or loons (Gaviidae), grouse (Tetraoninae), auks (Alcidae), and waxwings (Bombycillidae).
There are no endemic mammal orders in the region, but several families are endemic: Calomyscidae (mouse-like hamsters), Prolagidae, and Ailuridae (red pandas). Several mammal species originated in the Palearctic and spread to the Nearctic during the Ice Age, including the brown bear (Ursus arctos, known in North America as the grizzly), red deer (Cervus elaphus) in Europe and the closely related elk (Cervus canadensis) in far eastern Siberia, American bison (Bison bison), and reindeer (Rangifer tarandus, known in North America as the caribou).
Megafaunal extinctions
Several large Palearctic animals became extinct from the end of the Pleistocene into historic times, including the Irish elk (Megaloceros giganteus), aurochs (Bos primigenius), woolly rhinoceros (Coelodonta antiquitatis), woolly mammoth (Mammuthus primigenius), North African elephant (Loxodonta africana pharaoensis), Chinese elephant (Elephas maximus rubridens), cave bear (Ursus spelaeus), straight-tusked elephant (Palaeoloxodon antiquus) and European lion (Panthera leo europaea).
External links
Avionary 1500 Bird species of the Western and Central Palaearctic in 46 languages
Map of the ecozones
Biogeography
Biogeographic realms
Natural history of Asia
Natural history of Europe
Natural history of Africa
Phytogeography
Late Pleistocene

The Late Pleistocene is an unofficial age in the international geologic timescale in chronostratigraphy, also known as the Upper Pleistocene from a stratigraphic perspective. It is intended to be the fourth division of the Pleistocene Epoch within the ongoing Quaternary Period. It is currently defined as the time between c. 129,000 and c. 11,700 years ago. The Late Pleistocene equates to the proposed Tarantian Age of the geologic time scale, preceded by the officially ratified Chibanian (commonly known as the Middle Pleistocene). The beginning of the Late Pleistocene is the transition between the end of the Penultimate Glacial Period and the beginning of the Last Interglacial around 130,000 years ago (corresponding with the beginning of Marine Isotope Stage 5). The Late Pleistocene ends with the termination of the Younger Dryas, some 11,700 years ago when the Holocene Epoch began.
The term Upper Pleistocene is currently in use as a provisional or "quasi-formal" designation by the International Union of Geological Sciences (IUGS). Although the three oldest ages of the Pleistocene (the Gelasian, the Calabrian and the Chibanian) have been officially defined, the late Pleistocene has yet to be formally defined.
Following the brief Last Interglacial warm period (c. 130,000–115,000 years ago), when temperatures were comparable to or warmer than those of the Holocene, the Late Pleistocene was dominated by the cool Last Glacial Period, with temperatures gradually lowering throughout the period and reaching their lowest during the Last Glacial Maximum around 26,000–20,000 years ago.
Most of the world's large (megafaunal) animals became extinct during the Late Pleistocene as part of the Late Pleistocene extinctions, a trend that continued into the Holocene. In palaeoanthropology, the late Pleistocene contains the Upper Palaeolithic stage of human development, including the early human migrations of modern humans outside of Africa, and the extinction of all archaic human species.
Last Ice Age
The proposed beginning of the late Pleistocene is the end of the Penultimate Glacial Period (PGP) 126 ka when the Riß glaciation (Alpine) was being succeeded by the Eemian (Riß-Würm) interglacial period. The Riß-Würm ended 115 ka with the onset of the Last Glacial Period (LGP) which is known in Europe as the Würm (Alpine) or Devensian (Great Britain) or Weichselian glaciation (northern Europe); these are broadly equated with the Wisconsin glaciation (North America), though technically that began much later.
The Last Glacial Maximum was reached during the later millennia of the Würm/Weichselian, estimated between 26 ka and 19 ka when deglaciation began in the Northern Hemisphere. The Würm/Weichselian endured until 16 ka with Northern Europe, including most of Great Britain, covered by an ice sheet. The glaciers reached the Great Lakes in North America. Sea levels fell and two land bridges were temporarily in existence that had significance for human migration: Doggerland, which connected Great Britain to mainland Europe; and the Bering land bridge which joined Alaska to Siberia.
The last Ice Age was followed by the Late Glacial Interstadial, a period of global warming to 12.9 ka, and the Younger Dryas, a return to glacial conditions until 11.7 ka. Paleoclimatology holds that there was a sequence of stadials and interstadials from about 16 ka until the end of the Pleistocene. These were the Oldest Dryas (stadial), the Bølling oscillation (interstadial), the Older Dryas (stadial), the Allerød oscillation (interstadial) and finally the Younger Dryas.
The end of the Younger Dryas marks the boundary between the Pleistocene and Holocene Epochs. Hominids in all parts of the world were still culturally and technologically in the Palaeolithic (Old Stone) Age. Tools and weapons were basic stone or wooden implements. Nomadic tribes followed moving herds. Non-nomadics acquired their food by gathering and hunting.
Africa
Africa's present physical geography and climate have changed over time owing to the movement of tectonic plates and volcanic activity, but glacial cycles and sea level variation had a more significant effect on its vertebrate communities during the Late Pleistocene.
The Late Pleistocene was the time when most African animals came to resemble their modern-day counterparts, and these faunas persisted from the middle Pleistocene onward, since there were no megafaunal extinction events until the end of the Late Pleistocene.
Species that went extinct at the end of the Late Pleistocene in southern Africa include the giant warthog, the long-horned buffalo and the southern springbok. These species had been common, their distributions shifting in response to climatic influences on vegetation, while carnivores were more widespread due to their varied habitat requirements.
In Egypt, the Late (or Upper) Palaeolithic began sometime after 30,000 BC. People in North Africa had relocated to the Nile Valley as the Sahara was transformed from grassland to desert. The Nazlet Khater skeleton was found in 1980 and has been radiocarbon dated to between 30,360 and 35,100 years ago.
Most knowledge of the Late Pleistocene comes from regions such as Morocco, Algeria, Tunisia, coastal parts of the Maghreb, Libya and Egypt. The main difficulty in interpreting data from this region is the lack of chronological information. Late Pleistocene species in northern Africa resemble modern animals as closely as those in southern Africa do, but it is extremely difficult to date when these faunas became established because of the lack of reliable samples from the mid-Pleistocene. Most of the significant fossil records come from the Maghreb, whose geology favours the formation of deep caves that are conducive to preserving fossils.
Eurasia
Neanderthal hominins (Homo neanderthalensis) inhabited Eurasia until becoming extinct between 40 and 30 ka, towards the end of the Pleistocene and possibly into the early Holocene, and were replaced by modern humans (Homo sapiens), who had emerged from East Africa about 195,000 years ago. Neanderthals co-existed with Homo sapiens until they died out.
In Eurasia, extinctions occurred throughout the Pleistocene, but those of the Late Pleistocene were concentrated among megafauna, and there were no replacements for the extinct species. Some molluscan species also went extinct, but not on the same scale as the mammals living at the time. Examples of species that went extinct without replacement include the straight-tusked elephant (Palaeoloxodon antiquus), giant deer (Megaloceros giganteus), cave bear (Ursus spelaeus) and woolly rhinoceros (Coelodonta antiquitatis). Several other large mammalian species, including the mammoth and the mastodon, also became extinct.
Upper Paleolithic people also made paintings and engravings on walls. Cave paintings have been found at Lascaux in the Dordogne which may be more than 17,000 years old. These mainly depict buffalo, deer, and other animals hunted by humans. Later paintings occur in caves throughout the world, including Altamira, Spain, and in India, Australia, and the Sahara.
Magdalenian hunter-gatherers were widespread in western Europe from about 20,000 to 12,500 cal BP, until the end of the Pleistocene. An example is the antler-working of the human groups who lived in the Santimamiñe cave during the Magdalenian; these groups made the earliest known harpoons, worked from reindeer antler.
Climatic conditions in Eurasia during the Late Pleistocene were predominantly cold, with glaciations in northern Europe, northwest Siberia and the Alps alternating with temperate interglacial phases. Evidence of these changes in climatic conditions comes from fragmentary sequences in formerly glaciated areas of northern Europe.
The only domesticated animal in the Pleistocene was the dog, which evolved from the grey wolf into its many modern breeds. It is believed that the grey wolf became associated with hunter-gatherer tribes around 15 ka. The earliest remains of a true domestic dog have been dated to 14,200 years ago. Domestication first happened in Eurasia but could have been anywhere from Western Europe to East Asia. Domestication of other animals such as cattle, goats, pigs, and sheep did not begin until the Holocene, when settled farming communities became established in the Near East. The cat was probably not domesticated before the Holocene at the earliest, again in the Near East.
A butchered brown bear patella found in Alice and Gwendoline Cave in County Clare and dated to 10,860 to 10,641 BC indicates the first known human activity in Ireland.
Far East
The topography and geography of Asia were subject to frequent changes such as the creation of land bridges when sea levels dropped which helped with the expansion and migration of human populations. The first human habitation in the Japanese archipelago has been traced to prehistoric times between 40,000 BC and 30,000 BC. The earliest fossils are radiocarbon dated to c. 35,000 BC. An archeological record of Neanderthals has been found in Asia along with records of two other hominin populations, the Denisovans and Homo floresiensis.
Japan was once linked to the Asian mainland by land bridges via Hokkaido and Sakhalin Island to the north but was unconnected at this time when the main islands of Hokkaido, Honshu, Kyushu and Shikoku were all separate entities.
North America
Human migrations happened during this time, with people coming in from Eurasia. From about 28 ka, there were migrations across the Bering land bridge from Siberia to Alaska; these people became the ancestors of the Native Americans. It is believed that the original tribes subsequently moved down to Central and South America under pressure from later migrations.
In the North American land mammal age scale, the Rancholabrean spans the time from c. 240,000 years ago to c. 11,000 years ago. It is named after the Rancho La Brea fossil site in California, characterized by extinct forms of bison in association with other Pleistocene species such as the mammoth. During the Late Pleistocene about 35 genera of megafauna went extinct, including mastodons, saber-toothed cats and giant ground sloths. Some other species went extinct in North America but not globally. What caused the extinctions is still heavily debated.
Bison occidentalis and Bison antiquus, extinct relatives of the smaller present-day bison, survived into the latest Pleistocene, until about 12–11 ka ago. Clovis people depended on these bison as their major food source. Earlier kills of camels, horses, and muskoxen found at Wally's Beach have been dated to 13.1–13.3 ka BP.
South America
Over 50 genera (~83%) of megafauna in South and North America went extinct during the Pleistocene; most mega-mammals (over 1,000 kg) and large mammals (over 40 kg) were gone by the end of the Late Pleistocene. During this period a major cooling event, the Younger Dryas, occurred, and the Clovis culture of big-game hunting became more prominent. Diverse factors such as climate change may have triggered the extinctions, but which factors were decisive is still debated.
The Late Pleistocene saw a change in the use of coastal resources and advancements in marine technology. The reasons for these changes have not been confirmed; various triggering mechanisms have been theorized such as climate change, the arrival of new people, or the struggle for resources.
The South American land mammal age, the Lujanian, corresponds with the Late Pleistocene. It spans roughly 0.8–0.011 Ma and is defined specifically for prehistoric South American fauna.
Oceania
There is evidence of human habitation in mainland Australia, Indonesia, New Guinea and Tasmania from c. 45,000 BC. The finds include rock engravings, stone tools and evidence of cave habitation.
In Australia, sites preserving pollen records from the Late Pleistocene are mostly found in the more temperate regions of the continent. Some megafauna decreased in size over time, while others remained the same; however, the fossil record provides only limited chronologies for the exact timing of the extinctions.
Various causes have been proposed for the Late Pleistocene extinctions, but the topic is still debated.
Further reading
Ehlers, J., and P.L. Gibbard, 2004a, Quaternary Glaciations: Extent and Chronology 2: Part II North America. Elsevier, Amsterdam.
Ehlers, J., and P L. Gibbard, 2004b, Quaternary Glaciations: Extent and Chronology 3: Part III: South America, Asia, Africa, Australia, Antarctica.
Frison, George C., Prehistoric Human and Bison Relationships on the Plains of North America, August 2000, International Bison Conference, Edmonton, Alberta.
Gillespie, A. R., S. C. Porter, and B. F. Atwater, 2004, The Quaternary Period in the United States. Developments in Quaternary Science no. 1. Elsevier, Amsterdam.
Mangerud, J., J. Ehlers, and P. Gibbard, 2004, Quaternary Glaciations : Extent and Chronology 1: Part I Europe. Elsevier, Amsterdam.
Sibrava, V., Bowen, D. Q., and Richmond, G. M., 1986, Quaternary Glaciations in the Northern Hemisphere, Quaternary Science Reviews. vol. 5, pp. 1–514.
Pleistocene geochronology
Quaternary geochronology
Geological ages
Marxist historiography

Marxist historiography, or historical materialist historiography, is an influential school of historiography. The chief tenets of Marxist historiography include the centrality of social class, social relations of production in class-divided societies that struggle against each other, and economic constraints in determining historical outcomes (historical materialism). Marxist historians follow the tenets of the development of class-divided societies, especially modern capitalist ones.
Marxist historiography has developed in varied ways across different regional and political contexts. It has had unique trajectories of development in the West, the Soviet Union, and in India, as well as in the pan-Africanist and African-American traditions, adapting to these specific regional and political conditions in different ways.
Marxist historiography has made contributions to the history of the working class, and the methodology of a history from below.
Marxist historiography is sometimes criticized as deterministic, in that it posits a direction of history, towards an end state of history as classless human society. Marxist historiography within Marxist circles is generally seen as a tool; its aim is to bring those it perceives as oppressed by history to self-consciousness, and to arm them with tactics and strategies from history. For these Marxists, it is both a historical and a liberatory project.
However, not all Marxist historiography is socialist. Methods from Marxist historiography, such as class analysis, can be divorced from the original political intents of Marxism and its deterministic nature; historians who use Marxist methodology but disagree with the politics of Marxism often describe themselves as "Marxian" historians, and practitioners of this "Marxian historiography" often refer to their techniques as "Marxian".
Marx and Engels
Friedrich Engels' (1820–1895) most important historical contribution to the development of Marxist historiography was Der Deutsche Bauernkrieg (The German Peasants' War, 1850), which analysed social warfare in early Protestant Germany in terms of emerging capitalist classes. Although The German Peasants' War was overdetermined and lacked a rigorous engagement with archival sources, it exemplifies an early Marxist interest in history from below and in class analysis; it also attempts a dialectical analysis.
Karl Marx (1818–1883) contributed important works on social and political history, including The Eighteenth Brumaire of Louis Napoleon (1852), The Communist Manifesto (1848), The German Ideology (written in 1845, published in 1932), and those chapters of Das Kapital (1867–1894) dealing with the historical emergence of capitalists and proletarians from pre-industrial English society.
Labour and class struggle
The key to understanding Marxist historiography is Marx's view of labour. For Marx, "historical reality is none other than objectified labour, and all conditions of labour given by nature, including the organic bodies of people, are merely preconditions and 'disappearing moments' of the labour process." This emphasis on the physical as the determining factor in history represents a break from virtually all previous historians. Until Marx developed his theory of historical materialism, the overarching determining factor in the direction of history was some sort of divine agency. In Marx's view of history, "God became a mere projection of human imagination" and, more importantly, "a tool of oppression". There was no longer any sense of divine direction to be seen: history moved by the sheer force of human labour, and all theories of divine nature were a concoction of the ruling powers to keep the working people in check. For Marx, "The first historical act is... the production of material life itself." As one might expect, Marxist history not only begins with labour, it ends in production: "history does not end by being resolved into 'self-consciousness' as 'spirit of the spirit,' but that in it at each stage there is found a material result: a sum of productive forces, a historically created relation of individuals to nature and to one another, which is handed down to each generation from its predecessor..." For further and much more comprehensive information on this topic, see historical materialism.
Historical materialism
Introduction
Historical materialism is a methodology for understanding human societies and their development throughout history. Marx's theory of history locates historical change in the rise of class societies and the way humans labour together to make their livelihoods. Marx argues that the introduction of new technologies and new ways of doing things to improve production eventually leads to new social classes, which in turn result in political crises that can threaten the established order.
Marx's view of history is in contrast to the commonplace notion that the rise and fall of kingdoms, empires and states, can broadly be explained by the actions, ambitions and policies of the people at the top of society; kings, queens, emperors, generals, or religious leaders. This view of history is summed up by the 19th-century Scottish philosopher Thomas Carlyle who wrote "the history of the world is nothing but the biography of great men". An alternative to the "great man" theory is that history is shaped by the motivating force of "great ideas" – the struggle of reason over superstition or the fight for democracy and freedom.
The "great man" and "great women" theory of history and the view that history is primarily shaped by ideas has provoked no end of debate but many historians have believed there are more fundamental patterns at play beneath historical events.
Marx asserted that the material conditions of a society's mode of production, or in Marxist terms a society's productive forces and relations of production, fundamentally determine society's organization and development including the political commitments, cultural ideas and values that dominate in any society.
Marx argues that there is a fundamental conflict between the class of people who create the wealth of society and those who have ownership or control of the means of production, decide how society's wealth and resources are to be used and have a monopoly of political and military power. Historical materialism provides a profound challenge to the view that the historical process has come to a close and that capitalism is the end of history. Since Marx's time, the theory has been modified and expanded. It now has many Marxist and non-Marxist variants.
The main modes of production that Marx identified generally include primitive communism, slave society, feudalism, mercantilism, and capitalism. In each of these social stages, people interacted with nature and production in different ways. Any surplus from that production was distributed differently as well. To Marx, ancient societies (e.g. Rome and Greece) were based on a ruling class of citizens and a class of slaves; feudalism was based on nobles and serfs; and capitalism based on the capitalist class (bourgeoisie) and the working class (proletariat).
Description
Historical materialism builds upon the idea of historical progress that became popular in philosophy during the Enlightenment, which asserted that the development of human society has progressed through a series of stages, from hunting and gathering, through pastoralism and cultivation, to commercial society. Historical materialism rests on a foundation of dialectical materialism, in which matter is considered primary and ideas, thought, and consciousness are secondary, i.e. consciousness and human ideas about the universe result from material conditions rather than vice versa. Marxism uses this materialist methodology, referred to by Marx and Engels as the materialist conception of history and later better known as historical materialism, to analyse the underlying causes of societal development and change from the perspective of the collective ways in which humans make their living.
Historical materialism springs from a fundamental underlying reality of human existence: that in order for subsequent generations of human beings to survive, it is necessary for them to produce and reproduce the material requirements of everyday life. Marx then extended this premise by asserting the importance of the fact that, in order to carry out production and exchange, people have to enter into very definite social relations, or more specifically, "relations of production". However, production is not carried out in the abstract, nor by entering into arbitrary or random relations chosen at will, but is instead determined by the development of the existing forces of production. How production is accomplished depends on the character of society's productive forces, which refers to the means of production such as the tools, instruments, technology, land, raw materials, and human knowledge and abilities in terms of using these means of production. The relations of production are determined by the level and character of these productive forces present at any given time in history. In all societies, human beings collectively work on nature but, especially in class societies, do not do the same work. In such societies, there is a division of labour in which people not only carry out different kinds of labour but occupy different social positions on the basis of those differences. The most important such division is that between manual and intellectual labour, whereby one class produces a given society's wealth while another is able to monopolize control of the means of production and so both governs that society and lives off of the wealth generated by the labouring classes.
Marx's account of the theory is in The German Ideology (1845) and in the preface to A Contribution to the Critique of Political Economy (1859). All constituent features of a society (social classes, political pyramid and ideologies) are assumed to stem from economic activity, forming what is considered as the base and superstructure. The base and superstructure metaphor describes the totality of social relations by which humans produce and re-produce their social existence. According to Marx, the "sum total of the forces of production accessible to men determines the condition of society" and forms a society's economic base.
The base includes the material forces of production such as the labour, means of production and relations of production, i.e. the social and political arrangements that regulate production and distribution. From this base rises a superstructure of legal and political "forms of social consciousness" that derive from the economic base, which conditions both the superstructure and the dominant ideology of a society. Conflicts between the development of material productive forces and the relations of production provoke social revolutions, whereby changes to the economic base lead to the social transformation of the superstructure.
This relationship is reflexive, in that the base initially gives rise to the superstructure and remains the foundation of a form of social organization. Those newly formed social organizations can then act again upon both parts of the base and superstructure so that rather than being static, the relationship is dialectic, expressed and driven by conflicts and contradictions. Engels clarified: "The history of all hitherto existing society is the history of class struggles. Freeman and slave, patrician and plebeian, lord and serf, guild-master and journeyman, in a word, oppressor and oppressed, stood in constant opposition to one another, carried on uninterrupted, now hidden, now open fight, a fight that each time ended, either in a revolutionary reconstitution of society at large, or in the common ruin of the contending classes."
Marx considered recurring class conflicts as the driving force of human history as such conflicts have manifested themselves as distinct transitional stages of development in Western Europe. Accordingly, Marx designated human history as encompassing four stages of development in relations of production:
Primitive communism: co-operative tribal societies.
Slave society: development of tribal to city-state in which aristocracy is born.
Feudalism: aristocrats are the ruling class while merchants evolve into the bourgeoisie.
Capitalism: capitalists are the ruling class, who create and employ the proletariat.
While historical materialism has been referred to as a materialist theory of history, Marx did not claim to have produced a master-key to history; the materialist conception of history is not "an historico-philosophic theory of the marche générale, imposed by fate upon every people, whatever the historic circumstances in which it finds itself." In a letter to the editor of the Russian journal Otechestvennye Zapiski (1877), he explained that his ideas were based upon a concrete study of the actual conditions in Europe.
Summary
To summarize, history develops in accordance with the following observations:
Social progress is driven by progress in the material, productive forces a society has at its disposal (technology, labour, capital goods and so on)
Humans are inevitably involved in productive relations (roughly speaking, economic relationships or institutions), which constitute our most decisive social relations. These relations progress with the development of the productive forces. They are largely determined by the division of labour, which in turn tends to determine social class.
Relations of production are both determined by the means and forces of production and set the conditions of their development. For example, capitalism tends to increase the rate at which the forces develop and stresses the accumulation of capital.
The relations of production define the mode of production, e.g. the capitalist mode of production is characterized by the polarization of society into capitalists and workers.
The superstructure—the cultural and institutional features of a society, its ideological materials—is ultimately an expression of the mode of production on which the society is founded.
Every type of state is a powerful institution of the ruling class; the state is an instrument which one class uses to secure its rule and enforce its preferred relations of production and its exploitation onto society.
State power is usually only transferred (migrated) from one class to another by social and political agreements.
When a given relation of production no longer supports further progress in the productive forces, either further progress is strangled, or 'revolution' must occur.
The actual historical process is not predetermined but depends on class struggle, especially the elevation of class consciousness and organization of the working class.
Western historiography
Karl Marx and Friedrich Engels worked together in relative isolation, outside the larger academic mainstream. By the turn of the twentieth century, however, Marxist thought had become perhaps the most prominent opposition to the idealist traditions.
R. H. Tawney (1880–1962) was an early historian working in this tradition. The Agrarian Problem in the Sixteenth Century (1912) and Religion and the Rise of Capitalism (1926), reflected his ethical concerns and preoccupations in economic history. He was profoundly interested in the issue of the enclosure of land in the English countryside in the sixteenth and seventeenth centuries and Max Weber's thesis on the connection between the appearance of Protestantism and the rise of capitalism. His belief in the rise of the gentry in the century before the outbreak of the Civil War in England provoked the "Storm over the Gentry" in which his methods were subjected to severe criticisms by Hugh Trevor-Roper and John Cooper.
A circle of historians inside the Communist Party of Great Britain (CPGB) was formed in 1946. It became a highly influential cluster of British Marxist historians, who shared a common interest in and contributed to history from below and class structure in early capitalist society. While some members of the group (most notably Christopher Hill [1912–2003] and E. P. Thompson [1924–1993]) left the CPGB after the 1956 Hungarian Revolution, the common points of British Marxist historiography continued in their works. They placed a great emphasis on the subjective determination of history. E. P. Thompson famously engaged Althusser in The Poverty of Theory, arguing that Althusser's theory overdetermined history, and left no space for historical revolt by the oppressed.
Christopher Hill's studies on 17th-century English history were widely acknowledged and recognized as representative of Marxist historians and Marxist historiography in general. His books include Puritanism and Revolution (1958), Intellectual Origins of the English Revolution (1965 and revised in 1996), The Century of Revolution (1961), AntiChrist in 17th-century England (1971), The World Turned Upside Down (1972) and many others.
E. P. Thompson pioneered the study of history from below in his work, The Making of the English Working Class, published in 1963. It focused on the forgotten history of the first working-class political left in the world in the late-18th and early-19th centuries. In his preface to this book, Thompson set out his approach to writing history from below: "I am seeking to rescue the poor stockinger, the Luddite cropper, the 'obsolete' hand-loom weaver, the 'utopian' artisan, and even the deluded follower of Joanna Southcott, from the enormous condescension of posterity."
Thompson's work was also significant because of the way he defined "class". He argued that class was not a structure, but a relationship that changed over time. Thompson's work is commonly considered the most influential work of history in the twentieth century and a crucial catalyst for social history, and through it for gender history and other studies of marginalized peoples. His essay "Time, Work-Discipline, and Industrial Capitalism" is also hugely influential and argues that industrial capitalism fundamentally altered (and accelerated) humans' relationship to time. He opened the gates for a generation of labour historians, such as David Montgomery (1927–2011) and Herbert Gutman (1928–1985), who made similar studies of the American working classes.
Perhaps the best known of the Communist historians was E. J. Hobsbawm (1917–2012). He is credited with establishing many of the basic historical arguments of current historiography and with synthesizing huge amounts of modern historical data across time and space – most famously in his trilogy The Age of Revolution, The Age of Capital and The Age of Empire, later extended by The Age of Extremes. Hobsbawm's Bandits is another example of this group's work.
C. L. R. James (1901–1989) was also a great pioneer of the 'history from below' approach. Living in Britain when he wrote his most notable work, The Black Jacobins (1938), he was an anti-Stalinist Marxist and so outside the CPGB. The Black Jacobins was the first professional historical account of the Haitian Revolution, the largest and only successful slave revolt in the history of the Americas. Nearly a century after publication, it is still regarded as a remarkable work of historical investigation, story-telling, and creativity.
Other important British Marxist historians included Raphael Samuel (1934–1996), A. L. Morton (1903–1987), and Brian Pearce (1915–2008).
In the United States, Marxist historiography greatly influenced the history of slavery and labour history. Marxist historiography also greatly influenced French historians, including France's most famous and enduring historian Fernand Braudel (1902–1985), as well as Italian historians, most famously the Autonomous Marxist and micro-history fields.
In the Soviet Union
Soviet-era historiography was deeply influenced by Marxism. Marxism maintains that the moving forces of history are determined by material production and the rise of different socioeconomic formations. Applying this perspective to socioeconomic formations such as slavery and feudalism is a major methodological principle of orthodox Marxist historiography. On this principle, historiography predicts the abolition of capitalism by a socialist revolution made by the working class. Soviet historians believed that Marxist–Leninist theory permitted the application of the categories of dialectical and historical materialism to the study of historical events.
However, Soviet historiography was also significantly shaped by strict state control aimed at propaganda and the consolidation of Soviet power; as a result, Marxist historiography suffered in the Soviet Union, since the government required overdetermined historical writing. Soviet historians tended to avoid contemporary history (after 1903) where possible, and effort was predominantly directed at pre-modern history (before 1850). As history was considered a politicized academic discipline, historians limited their creative output to avoid prosecution. Since the late 1930s, Soviet historiography treated the party line and reality as one and the same. As such, if it was a science, it was a science in the service of a specific political and ideological agenda, commonly employing historical negationism. In the 1930s, historical archives were closed and original research was severely restricted. Historians were required to pepper their works with references—appropriate or not—to Stalin and other "Marxist–Leninist classics", and to pass judgment—as prescribed by the Party—on pre-revolution historic Russian figures. Nikita Khrushchev commented that "Historians are dangerous and capable of turning everything upside down. They have to be watched."
The Soviet interpretation of Marxism predetermined much of the research done by historians, and research by scholars in the USSR was limited to a large extent as a result. Soviet historians could not offer non-Marxist theoretical explanations that did not fit the party's official ideology in their interpretation of sources, even when alternative theories had greater explanatory power in relation to a historian's reading of the source material.
Marx and Engels' ideas of the importance of class struggle in history, the destiny of the working class, and the role of the dictatorship of the proletariat and the revolutionary party are of major importance in Marxist methodology.
Marxist–Leninist historiography has several aspects. It explains the social basis of historical knowledge, determines the social functions of historical knowledge and the means by which these functions are carried out, and emphasizes the need to study concepts in connection with the social and political life of the period in which these concepts were developed.
It studies the theoretical and methodological features of every school of historical thought. Marxist–Leninist historiography analyses the source-study basis of a historical work, the nature of the use of sources, and specific research methods. It analyses problems of historical research as the most important sign of the progress of historical knowledge and as the expression of the socioeconomic and political needs of a historical period.
The Marxist theory of historical materialism identified means of production as chief determinants of the historical process. They led to the creation of social classes, and class struggle was the motor of history. The sociocultural evolution of societies was considered to progress inevitably from slavery, through feudalism and capitalism to socialism and finally communism. In addition, Leninism argued that a vanguard party was required to lead the working class in the revolution that would overthrow capitalism and replace it with socialism.
Soviet historiography interpreted this theory to mean that the creation of the Soviet Union was the most important turning point in human history, since the USSR was considered to be the first socialist society. Furthermore, the Communist Party – considered to be the vanguard of the working class – was given the role of permanent leading force in society, rather than that of a temporary revolutionary organization. As such, it became the protagonist of history, which could not be wrong. Hence the unlimited powers of the Communist Party leaders were claimed to be as infallible and inevitable as history itself. It also followed that a worldwide victory of communist countries was inevitable. All research had to be based on those assumptions and could not diverge in its findings. In 1956, Soviet academician Anna Pankratova said that "the problems of Soviet historiography are the problems of our Communist ideology."
Soviet historians have also been criticized for a Marxist bias in the interpretation of other historical events unrelated to the Soviet Union. For example, they assigned to the rebellions in the Roman Empire the characteristics of a social revolution.
Often, the Marxist bias and propaganda demands came into conflict: the peasant rebellions against early Soviet rule, such as the Tambov Rebellion of 1920–21, were simply ignored as politically inconvenient and as contradicting the official interpretation of Marxist theory.
Notable histories include the Short Course History of the Communist Party of the Soviet Union (Bolshevik), published in 1938, which was written to justify the nature of Bolshevik party life under Joseph Stalin. This work crystallised the piatichlenka or five acceptable moments of history in terms of vulgar dialectical materialism: primitive-communism, slavery, feudalism, capitalism and socialism.
In China
Most Chinese history that is published in the People's Republic of China (PRC) is based on a Marxist interpretation of history. These theories were first applied in the 1920s by Chinese scholars such as Guo Moruo and became orthodoxy in academic study after 1949. The Marxist view of history is that history is governed by universal laws and that according to these laws, a society moves through a series of stages, with the transition between stages being driven by class struggle. These stages are:
Slave society
Feudal society
Capitalist society
Socialist society
The world communist society
The official historical view within the People's Republic of China associates each of these stages with a particular era in Chinese history.
Slave society – Xia to Shang
Feudal society (decentralized) – Zhou to Sui
Feudal society (bureaucratic) – Tang to the First Opium War
Feudal society (semi-colonial) – First Opium War to end of Qing dynasty
Capitalist society – Republican era
Socialist society – PRC 1949 to present
Because of the strength of the Chinese Communist Party and the importance of the Marxist interpretation of history in legitimizing its rule, it was for many years difficult for historians within the PRC to actively argue in favour of non-Marxist and anti-Marxist interpretations of history. However, this political restriction is less confining than it may first appear in that the Marxist historical framework is surprisingly flexible, and it is a rather simple matter to modify an alternative historical theory to use language that at least does not challenge the Marxist interpretation of history.
Partly because of the interest of Mao Zedong, historians in the 1950s took a special interest in the role of peasant rebellions in Chinese history and compiled documentary histories to examine them.
There are several problems associated with imposing Marx's European-based framework on Chinese history. First, slavery existed throughout China's history but never as the primary form of labour. Second, while the Zhou and earlier dynasties may be labeled as feudal, later dynasties were much more centralized than the European counterparts Marx analysed. To account for the discrepancy, Chinese Marxists invented the term "bureaucratic feudalism". The placement of the Tang as the beginning of the bureaucratic phase rests largely on the replacement of patronage networks with the imperial examination. Some world-systems analysts, such as Janet Abu-Lughod, claim that analysis of Kondratiev waves shows that capitalism first arose in Song dynasty China, although widespread trade was subsequently disrupted and then curtailed.
The Japanese scholar Tanigawa Michio, writing in the 1970s and 1980s, set out to revise the generally Marxist views of China prevalent in post-war Japan. Tanigawa writes that historians in Japan fell into two schools. One held that China followed the set European pattern which Marxists thought to be universal; that is, from ancient slavery to medieval feudalism to modern capitalism; while another group argued that "Chinese society was extraordinarily saturated with stagnancy, as compared to the West" and assumed that China existed in a "qualitatively different historical world from Western society". That is, there is an argument between those who see "unilinear, monistic world history" and those who conceive of a "two-tracked or multi-tracked world history". Tanigawa reviewed the applications of these theories in Japanese writings about Chinese history and then tested them by analysing the Six Dynasties period (220–589 CE), which Marxist historians saw as feudal. He concluded that China did not have feudalism in the sense that Marxists use, that Chinese military governments did not lead to a European-style military aristocracy. The period established social and political patterns which shaped China's history from that point on.
There was a gradual relaxation of Marxist interpretation after the death of Mao in 1976, which accelerated after the Tiananmen Square protests and the revolutions of 1989, which damaged Marxism's ideological legitimacy in the eyes of Chinese academics.
In India
In India, Marxist historiography often takes the form of Marxian historiography, in which Marxian techniques of analysis are used but the political intentions and prescriptions of Marxism are discarded.
B. N. Datta and D. D. Kosambi are considered the founding fathers of Marxist historiography in India. D. D. Kosambi, a polymath, viewed Indian History from a Marxist viewpoint. The other Indian scholars of Marxian historiography are R. S. Sharma, Irfan Habib, D. N. Jha, and K. N. Panikkar. Other historians such as Satish Chandra, Romila Thapar, Bipan Chandra, Arjun Dev, and Dineshchandra Sircar, are sometimes referred to as "influenced by the marxian approach to history."
The Marxian historiography of India has focused on studies of economic development, land ownership, and class conflict in precolonial India and deindustrialization during the colonial period.
One debate in Indian history that relates to a historical materialist schema concerns the nature of feudalism in India. D. D. Kosambi in the 1960s outlined the ideas of "feudalism from below" and "feudalism from above". Elements of his feudalism thesis were rejected by R. S. Sharma in his monograph Indian Feudalism (2005) and various other books, although Sharma also largely agrees with Kosambi elsewhere. Most Indian Marxian historians argue that the economic origins of communalism lie in feudal remnants and in the economic insecurities caused by slow development in India.
The Marxian school of Indian historiography is accused of being too ideologically influenced. B. R. Ambedkar criticized Marxists, as he deemed them to be unaware or ignorant of the specifics of caste issues.
Many have alleged that Marxian historians used negationism to whitewash some of the atrocities committed by Muslim rulers in the Indian Subcontinent. Since the late 1990s, Hindu nationalist scholars especially have polemicized against the Marxian tradition in India for neglecting what they believe to be the country's 'illustrious past' based on Vedic-puranic chronology. An example of such works is Arun Shourie's Eminent Historians (1998).
The effects of Marxist historiography
Marxist historiography has had an enormous influence on historiography, and compares with empiricist historiography as one of the basic and foundational historiographic methodologies. Most non-Marxist historians make use of tools developed within Marxist historiography, like dialectical analysis of social formations, class analysis, or the project of broadening the scope of history into social history. Marxist historiography provided the first sustained efforts at social history, and is still highly influential within this area. The contribution of class analysis has also led to the development of gender and race as other analytical tools.
Marxism was one of the key influences on the Annales school tradition of French historiography.
See also
Alltagsgeschichte
Cliometrics
Mode of production
Historical materialism
Philosophy of history
Historical determinism
Historicism
Historiography
Marxism
Old Europe and New Europe

Old Europe and New Europe are terms used to contrast parts of Europe with each other in a rhetorical way. In the 21st century, the terms have been used by conservative political analysts in the United States to describe post-Communist era countries in Central and Eastern Europe as 'newer' and parts of Western Europe as 'older', suggesting that the latter were less important. The term Old Europe attracted attention when it was used by then-U.S. Secretary of Defense Donald Rumsfeld in January 2003 to refer to the European countries that were democracies before the fall of Communism in Europe, after which a significant number of new members eventually joined NATO, the European Union and other European bodies.
Old Europe can mean – in a wider sense – Europe of an older historical period, as opposed to a newer historical period. Before Rumsfeld’s use, the term had been used in various historical contexts to refer to Europe as the "Old World" as opposed to America as the "New World"; or, in Marxist usage, to Europe in the expectation of Communist revolutions.
Rumsfeld's term
On January 22, 2003, Secretary of Defense Donald Rumsfeld answered a question from Dutch journalist Charles Groenhuijsen about a potential U.S. invasion of Iraq:
Q: Sir, a question about the mood among European allies. You were talking about the Islamic world a second ago. But now the European allies. If you look at, for example, France, Germany, also a lot of people in my own country -- I'm from Dutch public TV, by the way -- it seems that a lot of Europeans rather give the benefit of the doubt to Saddam Hussein than President George Bush. These are U.S. allies. What do you make of that?

Rumsfeld: Well, it's -- what do I make of it?
Q: They have no clerics. They have no Muslim clerics there.
Rumsfeld: Are you helping me? (Laughter.) Do you think I need help? (Laughter.)
What do I think about it? Well, there isn't anyone alive who wouldn't prefer unanimity. I mean, you just always would like everyone to stand up and say, Way to go! That's the right thing to do, United States.
Now, we rarely find unanimity in the world. I was ambassador to NATO, and I -- when we would go in and make a proposal, there wouldn't be unanimity. There wouldn't even be understanding. And we'd have to be persuasive. We'd have to show reasons. We'd have to -- have to give rationales. We'd have to show facts. And, by golly, I found that Europe on any major issue is given -- if there's leadership and if you're right, and if your facts are persuasive, Europe responds. And they always have.
Now, you're thinking of Europe as Germany and France. I don't. I think that's old Europe. If you look at the entire NATO Europe today, the center of gravity is shifting to the east. And there are a lot of new members. And if you just take the list of all the members of NATO and all of those who have been invited in recently -- what is it? Twenty-six, something like that? -- you're right. Germany has been a problem, and France has been a problem.
Q: But opinion polls --
Rumsfeld: But -- just a minute. Just a minute. But you look at vast numbers of other countries in Europe. They're not with France and Germany on this, they're with the United States.
The expression was interpreted as a dig against a "sclerotic" and old-fashioned Western Europe. Those countries, Rumsfeld added on the same occasion, were "of no importance." It became a potent symbol, especially after division emerged over Iraq between France and Germany and some of the new Central and Southeastern European entrants and applicants to NATO and the European Union.
Rumsfeld would later claim his comment was "unintentional," and that he had meant to say "old NATO" instead of "old Europe;" during his time as ambassador to NATO, there were only fifteen alliance members, and France and Germany had played a much larger role than after the admission of many new (particularly Eastern European) countries. Nonetheless, he claims he "was amused by the ruckus" when the term became debated.
Further diplomatic tension built up when Rumsfeld pointed out in February 2003 that Germany, Cuba and Libya were the only nations completely opposing a possible war in Iraq (a statement that was formally correct at the time). Many interpreted this as placing Germany on a common level with dictatorships that violate human rights.
Later developments
The German translation altes Europa was the word of the year for 2003 in Germany, because German politicians and commentators often responded by using it sarcastically. It was frequently used with pride and with a reference to a perceived position of greater moral integrity. The terms altes Europa and Old Europe have subsequently surfaced in European economic and political discourse. For example, at a January 2005 unveiling of the new Airbus A380 aircraft, German chancellor Gerhard Schröder said, "There is the tradition of good old Europe that has made this possible." A BBC News article about the unveiling said Schröder "deliberately redefined the phrase previously used by... Rumsfeld."
Outside of Rumsfeld's usage of "Old Europe", the term New Europe (and neues Europa) also appeared, indicating either the European states that supported the war, the Central European states that had been newly accepted to the EU, or a new economically and technologically dynamic and liberal Europe, often including the United Kingdom.
Rumsfeld made fun of his statement shortly before a 2005 diplomatic trip to Europe. "When I first mentioned I might be travelling in France and Germany it raised some eyebrows. One wag said it ought to be an interesting trip after all that has been said. I thought for a moment and then I replied: 'Oh, that was the old Rumsfeld.'"
The phrase continued to be used after Rumsfeld's tenure. In a March 2009 speech to the United States Congress, British Prime Minister Gordon Brown said "There is no old Europe, no new Europe. There is only your friend Europe," which The Boston Globe called "an oblique shot at" Rumsfeld. The next month, speaking in Prague, U.S. President Barack Obama, echoing Brown's words, said, "in my view, there is no old Europe or new Europe. There is only a united Europe."
Earlier uses
The Communist Manifesto by Karl Marx and Friedrich Engels starts with the words: "A spectre is haunting Europe – the spectre of communism. All the powers of old Europe have entered into a holy alliance to exorcise this spectre: Pope and Tsar, Metternich and Guizot, French Radicals and German police-spies."
When Marx used the term in 1848, the year of failed liberal revolutions across Europe, he was referring to the restoration of Ancien régime dynasties, following the defeat of Napoleon. Of his three sets of pairs, each pair links figures who might on the surface be considered adversaries, in alliances that he clearly sees as unholy. An "Old Europe" must find a mental contrast with a posited "New Europe".
In his ultra-nationalistic, anti-European book of 1904, America Rules the World, E. David also used the term 'Old Europe'.
In his book La Hora de los Pueblos (1968), Argentine politician Juan Perón used the phrase when he enunciated the main principles of his purported new tricontinental political vision.
See also
Common Foreign and Security Policy of the European Union
Euroscepticism
Old World
Letter of the eight
Pan-European identity
Transatlantic relations
External links
Article from Slate
Politics of Europe
Donald Rumsfeld
Iraq War terminology
Social construction of gender

The social construction of gender is a theory in the humanities and social sciences about the manifestation of cultural origins, mechanisms, and corollaries of gender perception and expression in the context of interpersonal and group social interaction. Specifically, the social construction of gender theory stipulates that gender roles are an achieved "status" in a social environment, which implicitly and explicitly categorize people and therefore motivate social behaviors.
Social constructionism is a theory of knowledge that explores the interplay between reality and human perception, asserting that reality is shaped by social interactions and perceptions. This theory contrasts with objectivism, particularly in rejecting the notion that empirical facts alone define reality. Social constructionism emphasizes the role of social perceptions in creating reality, often relating to power structures and hierarchies.
Gender, a key concept in social constructionism, distinguishes between biological sex and socialized gender roles. Feminist theory views gender as an achieved status, shaped by social interactions and normative beliefs. The World Health Organization highlights that gender intersects with social and economic inequalities, a concept known as intersectionality. Gender roles are socially constructed and vary across cultures and contexts, with empirical studies indicating more similarities than differences between genders. Judith Butler's distinction between gender performativity and gender roles underscores the performative aspect of gender, influenced by societal norms and individual expression.
Gender identity refers to an individual's internal sense of their own gender, influenced by social contexts and personal experiences. This identity intersects with other social identities, such as race and class, affecting how individuals navigate societal expectations. The accountability for gender performance is omnirelevant, meaning it is constantly judged in social interactions. Some studies show that gender roles and expectations are learned from early childhood and reinforced throughout life, impacting areas like the workplace, where gender dynamics and discrimination are evident.
In education and media, gender construction plays a significant role in shaping individuals' identities and societal expectations. Teachers and media representations influence how gender roles are perceived and enacted, often perpetuating stereotypes. The concept of gender performativity suggests that gender is an ongoing performance shaped by societal norms, rather than a fixed trait. This performative view of gender challenges traditional binary understandings and opens up discussions on the fluidity of gender and the impact of socialization on gender identity.
Basic concepts
Social constructionism
Social constructionism is a theory of knowledge which describes the relationship between the objectivity of reality and the capacity of human senses and cognition. Specifically, it asserts that reality exists as the summation of social perceptions and expressions, and that the reality which is perceived is the only reality worth consideration. This is accompanied by the corollaries that any perceived reality is valid and that reality is subject to manipulation via control over social perceptions and expressions.
The social constructionist movement emerged from the criticism and rejection of objectivist and positivist accounts of knowledge; that is, social constructionism rejects the notion that empirical facts can be known directly about reality, whereas objectivism is defined by that notion. Though not explicitly reliant on it, much literature on social constructionism focuses on its relationship to hierarchy and power, reflecting the influence of Marxist doctrine as taken up in the works of Foucault and his writings on discourse.
In The Blank Slate, Harvard psychologist Steven Pinker argues for the existence of socially constructed categories such as "money, tenure, citizenship, decorations for bravery, and the presidency of the United States," which "exist only because people tacitly agree to act as if they exist." Pinker does not, however, support social constructionism as the sole means of understanding reality; rather, he treats it as a specific context for specific phenomena and supports the consideration of empirical scientific data in our understanding of the nature of human existence. In this manner, Pinker explicitly contradicts social constructionist scholars Marecek, Crawford & Popp, who in "On the Construction of Gender, Sex, and Sexualities" argue against the idea that socially organized patterns can emerge from isolated origins, favoring instead the view that knowledge and meaning are generated exclusively as a collective effort, something the individual is incapable of doing independently. In essence, on this view the creation of meaning is a shared effort even when achieved by an individual in solitary conditions, because individuality is an illusion found at the intersection of myriad external influences filtered through the id, ego, and super-ego.
Fitzsimmons & Lennon also note that the constructionist accounts of gender creation can be divided into two main streams:
Materialist theories, which underline the structural aspects of the social environment that are responsible for perpetuating certain gender roles;
Discursive theories, which stress the creation, through language and culture, of meanings that are associated with gender.
They also argue that both the materialist and discursive theories of the social construction of gender can be either essentialist or non-essentialist. This means that some of these theories assume a clear biological division between women and men when considering the social creation of masculinity and femininity, while others contest the assumption that the biological division between the sexes is independent of social construction.
Theories that imply that gendered behavior is totally or mostly due to social conventions and culture represent an extreme nurture position in the nature versus nurture debate. Other theories have offered a mediating perspective claiming that both nature and nurture influence gender behavior.
Gender
Gender is used as a means of describing the distinction between biological sex and the socialized aspects of femininity and masculinity. According to West and Zimmerman, gender is not a personal trait; it is "an emergent feature of social situations: both as an outcome of and a rationale for various social arrangements, and as a means of legitimating one of the most fundamental divisions of society."
According to Kessler and McKenna, a world of two "sexes" is a result of the socially shared, taken-for-granted methods that members use to construct reality.
As a social construct, gender is considered an achieved status by feminist theory, typically (though not exclusively) one which is achieved very early in childhood. This view of gender as an achievement is supported by the contemporary constructionist perspective proposed by Fenstermaker and West, which regards gender as an activity ("doing") of utilizing normative prescriptions and beliefs about sex categories based on situational variables. These "gender activities" constitute sets of behavior, such as masculine and feminine, which are associated with their sexual counterpart and thus define concepts such as "man" and "woman" respectively. It is noted, however, that a perception of behavior as masculine or feminine is neither limited to nor guaranteed to match the behavior's typical or intended expression. Hence, gender can be understood as external to the individual, consisting of a series of ongoing judgements and evaluations by others, as well as of others.
Status and hierarchy
The World Health Organization stated in 2023 that
In the context of feminist theory, the word status deviates from its colloquial usage meaning rank or prestige but instead refers to a series of strata or categories by which societies are divided, in some ways synonymous with "labels" or "roles". The semantic distinctions of "labels" and "roles" are homogenized into the term "status" and then re-differentiated by the division into "ascribed status" and "achieved status" respectively.
Gender roles
Gender roles are a continuation of the gender status, consisting of other achieved statuses that are associated with a particular gender status. In less theoretical terms, gender roles are functional positions in a social dynamic whose fulfillment is part of "doing gender".
Empirical investigations suggest that gender roles are "social constructs that vary significantly across time, context, and culture". Ronald F. Levant and Kathleen Alto write:
American philosopher Judith Butler makes a distinction between gender performativity and gender roles: the former refers to behavior through which individuals express their own perception of their gender, the latter to behavior which creates the appearance of compliance with societal gender expectations in aggregate. This is not to imply that participation in gender performativity cannot correspond to pressure to fulfill a gender role, nor that fulfillment of a gender role cannot satisfy the desire for gender performativity. The distinction refers primarily to context and motivation rather than to particular behaviors and consequences, which are often closely linked. Research by Liva and Arqueros describes gendered behaviors being taught: in Argentina, missionaries intending to educate the Qom people imposed European gender norms and notions of modernity on the indigenous community.
In some subdomains of feminism, such as intersectional feminism, gender is a major though not solitary axis along which factors of oppression are considered, as expressed by Berkowitz, who wrote "The gender order is hierarchical in that, overall, men dominate women in terms of power and privilege; yet multiple and conflicting sources of power and oppression are intertwined, and not all men dominate all women. Intersectionality theorizes how gender intersects with race, ethnicity, social class, sexuality, and nation in variegated and situationally contingent ways".
Berkowitz also asserts that gender at large, and gender roles especially, constitute a prolific and potent avenue by which manipulations of social perceptions and expressions manifest reality; specifically, a reality in which women are typically oppressed by men within a social structure that establishes roles for women with explicitly lesser capacity for accruing and exercising arbitrary power. The system which manifests and exercises this power is typically referred to as "patriarchy". To clarify, the term arbitrary here denotes power derived from status as feminist theory describes it; the particular model of patriarchy prescribed does not make any distinction for stratification or power originating from competence or prestige.
Anthropologist Catherine L. Besteman observes the differences in gender roles in the context of parenting by Somali Bantu refugees in Lewiston, Maine; the separate roles communicate the agency of individuals based on their gender, with males tending to be favored in terms of social power. Girls seemed to be "under increasing scrutiny to behave respectably as parents attempted to protect them from America's public sexual culture in the only way they know: early arranged marriage and lots of responsibilities for domestic tasks". Boys, however, were given fewer responsibilities and more freedom. The distinction between the responsibilities of boys and girls defines the refugee children's understanding of what it means to belong to a particular gender in America with association to "parental authority". Besteman observed that the contrast resulted from a lack of traditional male chores in America compared to Somalia, such as farm work, while traditional female chores could be maintained.
Gender identity
Gender identity is a related concept: rather than referring to the external social understanding developed between persons, it refers to the internal sense of one's own gender on an individual scale. According to Alsop, Fitzsimmons & Lennon, "Gender is part of an identity woven from a complex and specific social whole, and requiring very specific and local readings". Thus, gender identity can be defined as part of a socially situated understanding of gender. LaFrance, Paluck and Brescoll note that as a term, "gender identity" allows individuals to express their attitude towards and stance in relation to their current status as either women or men. Shifting the scope of gender from social consensus to one's self-identification with a certain gender expression leaves much more space for describing variation among individuals.
Intersections of gender identity with other identities
While men and women are held accountable for normative conceptions of gender, this accountability can differ in content based on ethnicity, race, age, class, etc. Hurtado argues that white women and women of color experience gender differently because of their relationship to males of different races and that both groups of women have traditionally been used to substantiate male power in different ways. Fenstermaker says that some women of color are subordinated through rejection, or denial of the "patriarchal invitation to privilege". For instance, some white men may see women of color as workers and objects of sexual aggression; this would allow the men to display power and sexual aggression without the emotional attachment that they have to white women. White women are accountable for their gendered display as traditionally subservient to white men while women of color may be held accountable for their gendered performance as sexual objects and as recalcitrant and bawdy women in relations with white men. West and Fenstermaker conclude that doing gender involves different versions of accountability, depending on women's "relational position" to white men.
Gender in the workplace
Moroccan women in Belgium with high-skill jobs report struggling to find a work–life balance; they leave ethnicity out of the discussed influences on professional identity, but do discuss gender. Portrayals of gender can be advantageous or disadvantageous for Moroccan women in the Belgian workplace. Disadvantages include the view of women in their twenties as busy with homemaking and child-rearing, and the Islamic tradition of wearing a headscarf leading to discrimination. Advantages include second generation immigrant women receiving less discrimination than men, and being highly educated further reduces chances of discrimination.
In the U.S., changes in gender ideology relate to changes in an individual's life, such as becoming a parent, getting a job, and other milestones. Racial differences and gender are determiners of treatment in the workplace; African American mothers suffer a wage penalty if they are married with big families, while white women are penalized upon becoming a mother. African American husbands are not seen as serious economic providers, and do not receive a wage premium for parenthood, while white fathers do. Current, full-time working women have a more egalitarian gender ideology than non-working or part-time women. Men relate work to providing roles and only shift to a more egalitarian gender ideology when opportunities are blocked and they learn to redefine success; blocked opportunities are more prevalent for black men.
Sexuality/sexual orientation
In recent years, elementary schools in the U.S. have started carrying chapter books that include non-traditional families with same-sex parents, homosexual role models, or (in fewer cases) an adolescent who is discovering and accepting their own sexuality/sexual orientation. Hermann-Wilmarth and Ryan acknowledge this rise in representation, while critiquing the way the limited selection of books presents these characters in line with popularized characterizations of homosexuality. The authors characterize this style of representation as "homonormative"; in the only example of a book whose protagonist questions their gender identity, it is left ambiguous whether the protagonist is a trans man or was simply pretending.
Diamond and Butterworth argue that gender identity and sexual identity are fluid and do not always fall into two essentialist categories (man or woman, and gay or straight); they came to this conclusion by interviewing women belonging to a sexual minority group over the course of ten years. One woman had a relatively typical early childhood, questioned her sexuality around adolescence, and remained stable in her gender and sexual identity until she started working with men, assumed a masculine "stance", and began to question her gender identity. When "she" became a "he", he began to find men attractive and gradually identified as a homosexual man.
The perception of sexuality by others is an extension of others' perceptions of one's gender. Heterosexuality is assumed for those individuals who appear to act appropriately masculine or appropriately feminine. If one wants to be perceived as a lesbian, one must first be perceived as a woman; if one wants to be seen as a gay man, one has to be seen as a man.
Interactionism
In Gender: An Ethnomethodological Approach (1978), Suzanne Kessler and Wendy McKenna famously proposed gender as an accomplishment. Their analysis, which was heavily based on the observation of transsexuality, is one of the earliest affirmations of the everyday production of gender in social interactions, and was further developed by West and Zimmerman. Accomplishment is "the activity of managing situated conduct in light of normative conceptions of attitudes and activities appropriate for one's sex category". People do not have to be in mixed-gender groups, or in groups at all, for the performance of gender to occur; the production of gender occurs with others and is even performed alone, in the imagined presence of others. "Doing gender" is not just about conforming to stereotypical gender roles – it is the active engagement in any behavior that is gendered, or behavior that may be evaluated as gendered.
The performance of gender varies given the context: time, space, social interaction, etc. The enactment of gender roles is context dependent – roles are "situated identities" instead of "master identities". The sociology of knowledge must first of all concern itself with what people "know" as "reality" in their everyday, non- or pre-theoretical lives. In other words, individual perceptions of "knowledge" or reality "must be the central focus." These performances normalize the essentialism of sex categories: by doing gender, we reinforce the essential categories of gender – that there are only two categories that are mutually exclusive. The idea that men and women are essentially different is what makes men and women behave in ways that appear essentially different. Though sex categorization is based on biological sex, it is maintained as a category through socially constructed displays of gender (for example, one could identify a transgender person as female even though she was assigned male at birth).
Institutions also create normative conceptions of gender. In other words, gender is simultaneously created and maintained – "both a process and a product, medium and outcome of such power relations". In his examination of blue- and white-collar workers, Mumby argued that hegemonic or dominant masculinity provides a standard of acceptable behavior for men and, at the same time, is the product of men's behavior. This can be said for constructions of any identity in certain contexts (e.g. femininity, race, Black femininity, etc.).
Because gender is "done" or constructed, it can also be "undone" or deconstructed. The study of the interactional level could expand beyond simply documenting the persistence of inequality to examine: (1) when and how social interactions become less gendered, not just differently gendered, (2) the conditions under which gender is irrelevant in social interactions, (3) whether all gendered interactions reinforce inequality, (4) how the structural (institutional) and interactional levels might work together to produce change, and (5) interaction as the site of change.
Accountability
People hold themselves and each other accountable for their presentations of gender (how they 'measure up'). They are aware that others may evaluate and characterize their behavior. This is an interactional process (not just an individual one). Social constructionism asserts that gender is a category that people evaluate as omnirelevant to social life. Gender as omnirelevant means that people can always be judged by what they do as a man or as a woman. This is the basis for the reasoning that people are always performing gender and that gender is always relevant in social situations.
Accountability can apply to behaviors that do conform to cultural conceptions as well as those behaviors that deviate – it is the possibility of being held accountable that is important in social constructionism. For example, Stobbe examined the rationale that people gave for why there were small numbers of women in the auto industry. Men cited the idea that such dirty work was unsuitable for women and women were unable to train because of family duties. Stobbe argues that the male workers created a machismo masculinity to distinguish themselves from women who might have been qualified to work in the auto shop. Women who do work in male-dominated professions have to carefully maintain and simultaneously balance their femininity and professional credibility.
Even though gender seems more salient in some situations – for instance, when a woman enters a male-dominated profession – gender categories also become salient in contexts in which gender is less obvious. For instance, gender is maintained before the woman enters the male-dominated group through conceptions of masculinity.
Race, class, and other oppressions can also be omnirelevant categories, though they are not all identically salient in every set of social relationships in which inequality is done. People have preconceived notions about what particular racial groups look like (although there is no biological component to this categorization). Accountability is interactional because it does not occur solely within the individual. It is also institutional because individuals may be held accountable for their behaviors by institutions or by others in social situations, as a member of any social group (gender, race, class, etc.). This notion of accountability makes gender dynamic because what is considered appropriate behavior for men and women changes and is reproduced over time and is reproduced differently depending on context. Gender is created in different ways among uneducated and educated African Americans.
Social cognitive theory
Gender features strongly in most societies and is a significant aspect of self-definition for most people. One way to analyze the social influences that affect the development of gender is through the perspective of social cognitive theory. According to Kay Bussey, social cognitive theory describes "how gender conceptions are developed and transformed across the life span". Social cognitive theory views gender roles as socially constructed ideas that are obtained over one's entire lifetime. These gender roles are "repeatedly reinforced through socialization". Hackman affirms that these gender roles are instilled in us from "the moment we are born". For the individual, gender construction starts with assignment to a sex category on the basis of biological genitalia at birth. Following this sexual assignment, parents begin to influence gender identity by dressing children in ways that clearly display this biological category. Biological sex thus becomes associated with a gender through naming, dress, and the use of other gender markers. Gender development continues to be affected by the outlooks of others, educational institutions, parenting, media, etc. These variations of social interactions force individuals to "learn what is expected, see what is expected, act and react in expected ways, and thus simultaneously construct and maintain the gender order".
Gender-based harassment
It is very common for gender-based harassment to occur throughout the academic years of a person's life, serving as a form of gender boundary policing. Women are expected to conform to stereotypical gendered appearances, as are men. Students regularly take part in policing gender boundaries through bullying. Male students frequently harass male and female students, while female students generally only harass other female students. The practice of male students bullying other male students is explicitly linked to machismo, the notion that boys are expected to subscribe to in order to be constructed and related to as 'normal' boys. Many girls report that boys tease and ridicule them on the basis of their appearance, which is linked to boys asserting masculine power through sexist practices of denigrating girls. This also serves to perpetuate the idea that appearance is a female's most important asset. In their study "Correlates and Consequences of Peer Victimization: Gender Differences in Direct and Indirect Forms of Bullying", Lopez, Esbensen & Brick state that "boys were more likely to experience direct or physical forms of bullying and girls were more likely to report being teased or joked about." This can be interpreted as females typically harassing other females through mental, emotional, and psychological torment, while males take a more physical and aggressive approach. Unique appearances and attempts to stand out among girls are regarded very negatively. This type of female-on-female bullying sets the standard for norms on appearance and the importance of conforming to societal expectations of that appearance for females. Overall, gender-based harassment serves to define and enforce gender boundaries of students by students.
Adolescent view of adulthood
Gender is a cultural construction which creates an environment where an adolescent's performance in high school is related to their life goals and expectations. Because some young women believe that they want to be mothers and wives, their choice of professions and future goals can be inherently constrained by gender. Because a girl may want to be a mother later, her academic work in high school can reflect clear gender differences, as "higher occupational expectations, educational expectations, and academic grades were more strongly associated with the expected age of parenthood for girls than for boys". With "young women recognizing potential conflicts between the demands of work and family", they may not try as hard in high school, allowing males to attain higher academic achievement than girls. Crockett and Beal, in their article "The Life Course in the Making: Gender and the Development of Adolescents", examine "gender differences in the anticipated timing of future role transitions, the impact of expectations and values on these expected timings, and the extent to which expectations foreshadow actual behavior". The actions of a youth in high school greatly impact the choices the individual will have over a lifetime. Women especially are constrained in the way they view their adulthood even at a young age because of motherhood.
Males can also be subject to gender construction due to social expectations of masculinity. According to Jack Halberstam (writing under the name Judith), people correlate masculinity with "maleness and to power to domination", something that he believes is a result of patriarchy. In a 2015 study published in the American Journal of Public Health, researchers stated that gender constructs can differ depending on a man's race or ethnicity: for white men there was an emphasis on "education, employment, and socioeconomic status", whereas the expectations for black men focused on "sexual prowess, physical dominance, and gamesmanship". These expectations can make it harder for males to display emotions without receiving criticism and being seen as less of a man.
Adolescents' views of adulthood are also determined by their employment in high school. Many boys work during high school, and "unlike young women, young men who had not worked during high school did not quite match their peers". Because many other boys are working, those who do not work may not be as successful after graduation. In the book Working and Growing Up in America, Jeylan T. Mortimer explains that "youth who work during high school, and those who devote more hours to work, are more vocationally successful after leaving high school". This creates a distinct gender difference, in which men who have worked during high school are more likely than women to be employed afterwards. This means women may be at an academic advantage if they do not work in high school and instead focus on school work.
Body image
There are many different factors that affect body image, "including sex, media, parental relationship, and puberty as well as weight and popularity". The intersectionality of these factors causes individualistic experiences for adolescents during this period of their lives. As their bodies change, so does the environment in which they live. Body image is closely linked to psychological well-being during adolescence and can cause harmful effects when a child experiences body dissatisfaction. In the article "Body Image and Psychological Well-Being in Adolescents: The Relationship between Gender and School Type", Helen Winfield explains that an adolescent's high school experience is closely linked to their perceived body image. She analyzed over 336 teenagers and found that "ratings of physical attractiveness and body image remain relatively stable across the early teenage years, but become increasingly negative around age 15–18 years because of pubertal changes". This shift during the high school years may cause serious psychological problems for adolescents, which can manifest as eating disorders with serious lifelong consequences. These findings show that body image issues are especially prevalent in girls, but as boys enter puberty, expectations of height and muscle mass change as well. Geoffrey H. Cohane and Harrison G. Pope Jr., in their article "Body image in boys: A review of the literature", claim that "girls typically wanted to be thinner, boys frequently wanted to be bigger". This shows that gender differences in body image produce different beauty ideals. Gender can thus affect an adolescent's body image and, potentially, their high school experience.
Education
Due to the amount of time that children spend in school, "teachers are influential role models for many aspects of children's educational experiences, including gender socialization". Teachers who endorse the culturally dominant gender-role stereotype regarding the distribution of talent between males and females distort their perception of their students' mathematical abilities and effort resources in mathematics, in a manner that is consistent with their gender-role stereotype and to a greater extent than teachers who do not endorse the stereotype.
According to the 1994 report Intelligence: Knowns and Unknowns by the American Psychological Association, "[m]ost standard tests of intelligence have been constructed so that there are no overall score differences between females and males." Differences have been found, however, in specific areas such as mathematics and verbal measures. Even within mathematics, significant differences in performance by gender do not occur until late in high school, as a result of biological differences, the exhibition of stereotypes by teachers, and differences in chosen coursework between individual students. While, on average, boys and girls perform similarly in math, boys are overrepresented among the very best performers as well as the very worst. Teachers have found that when certain types of teaching (such as experiments that reflect daily life) work for girls, they generally work for boys as well.
Although little difference in mathematics performance was found among younger students, a study of students in grades 1–3 by Fennema et al. noted that significant differences in problem-solving strategies were found, with girls tending to use more standard algorithms than the boys. They suggest that this may be due both to teachers' stereotypical beliefs about mathematics and gender, and to the study's design permitting "the children's stereotypical beliefs to influence strategy use and thus the development of understanding in these classrooms". A study conducted at Illinois State University examined the effects of gender stereotypes on the teaching practices of three third-grade teachers, noting that "[the teachers] claimed gender neutrality, yet they expressed numerous beliefs about gender difference during the study", such as allowing boys (but not girls) to respond to questions without raising their hands, or providing reading selections that promoted women in non-traditional roles without doing the same for men.
Overall, differences in student performance that arise from gender tend to be smaller than that of other demographic differences, such as race or socioeconomic class. The results of the 1992 NAEP 12th grade science tests, on a 500-point scale, show that the differences of scores between white and African American students were around 48 points, while differences between male and female students were around 11 points.
Media
Social gender construction (specifically for younger audiences) is also influenced by media. In the 21st century, modern technology is abundant in developed countries. In 2018, roughly 42% of tweens and teens reported feelings of anxiety when not near their phones, and a growing number of teens spend an average of 6.5 hours on media daily. This data reflects how much of a teenager's personality is shaped by media. Media influence on gender construction can be seen in advertising, social networking, magazines, television, music, and music videos.
These platforms can affect how a developing human views themselves and those around them. There are both positive and negative portrayals, and each can be perceived differently. Media will often portray men and women in a stereotypical manner, reflecting the "ideal image" for society. These images often act as extreme expectations for many developing teenagers.
Men are typically portrayed as assertive, powerful, and strong. Particularly in television, men are usually shown as nonemotional and detached. Women are often portrayed as the opposite. Gender roles are generally more strictly enforced for women in media than they are for men. Women are typically represented as the backbone of the household, the caretaker, and the stay-at-home mother. Women in media are often given weak, dependent, and passive personalities. Media portrayals often perpetuate the ideas that men are not allowed to be caring and that women are not allowed to be strong and demanding. These gender influences from the media can mislead a growing child or teenager because, while they are still trying to construct their identities and genders in a social environment, they are surrounded by biased influences.
The Internet reflects the values of offline society, and the jokes made online reveal the values and opinions reflected in those jokes, despite them being couched in humor. Memes are used to make sexist ideas into 'jokes', reinforcing sexist gender stereotypes, making threats against women, and mocking transgender people. Many of these views, when questioned or concerns are raised about them, are hidden, saying it was just a joke or a meme. However, memes and internet communities are also very common in feminist and transgender spaces, where jokes about gender are kinder and come from within the community rather than outside of it.
Gender performativity
The term gender performativity was coined by American philosopher and gender theorist Judith Butler in their 1990 book Gender Trouble: Feminism and the Subversion of Identity. In the book, Butler sets out to criticize what they consider to be an outdated perception of gender. This outdated perception, according to Butler, is limiting in that it adheres to the dominant societal constraints that label gender as binary. In scrutinizing gender, Butler introduces a nuanced perception in which they unite the concepts of performativity and gender. In chapter one, Butler introduces the unification of the terms gender and performativity in stating that "gender proves to be performance—that is, constituting the identity it is purported to be. In this sense, gender is always a doing, though not a doing by a subject who might be said to pre-exist the deed".
In demystifying this concept, Butler sets out to clarify that there is indeed a difference in the terms gender performance and gender performativity. In a 2011 interview, Butler stated it this way:
Thus, Butler perceives gender as being constructed through a set of acts that are said to be in compliance with dominant societal norms. Butler is, however, not stating that gender is a sort of performance that an individual can terminate; instead, this performance is ongoing and out of an individual's control. In fact, rather than the individual producing the performance, the opposite is true: the performance produces the individual. Specifically, Butler approvingly quotes Nietzsche's claim that "there is no 'being' behind doing... 'the doer' is merely a fiction added to the deed – the deed is everything". Thus, the emphasis is placed not on the individual producing the deed but on the deed itself, and its cessation becomes as problematic as the eagle rendering itself a mere lamb in Nietzsche's respective analogy. Butler, in fact, goes on to stress in their own words: "there is no gender identity behind the expressions of gender; that identity is performatively constituted by the very 'expressions' that are said to be its results". Overall, they may be said to have been thoroughly influenced by Nietzsche's philosophy of subjectivity.
Amelia Jones proposes that this mode of viewing gender offered a way to move beyond the theories of the gaze and sexual fetishism, which had attained much prominence in academic feminism, but which by the 1980s Jones viewed as outdated methods of understanding women's societal status. Jones believes the performative power to act out gender is extremely useful as a framework, offering new ways to consider images as enactments with embodied subjects rather than inanimate objects for men's viewing pleasure.
Applications
Infancy and young childhood
The idea of gender performativity, when applied to infancy and young childhood, deals with the idea that from the moment one is conceived, arguably even before that, who they are and who they will become is predetermined. Children learn at a very young age what it means to be a boy or girl in our society. Individuals are given masculine or feminine names based on their sex, are assigned colors that are deemed appropriate only when utilized by a particular sex, and are even given toys that will aid them in recognizing their proper places in society. According to Barbara Kerr and Karen Multon, many parents would be puzzled to know "the tendency of little children to think that it is their clothing or toys that make them boy or girl". Parents go as far as coordinating their daughter with the color pink because it is feminine, or their son with blue because it is masculine. In discussing these points, Penelope Eckert, in her text Language and Gender, states: "the first thing people want to know about a baby is its sex, and social convention provides a myriad of props to reduce the necessity of asking". This reinforces the importance and emphasis that society places not only on sex but also on ways of pointing towards one's sex without doing so explicitly. Eckert furthers this in stating that determining sex at birth is also vital to how one presents oneself in society at an older age, because "sex determination sets the stage for a lifelong process of gendering". Eckert's statement points to Judith Butler's view of gender as performative. Similar to Butler, Eckert is hinting at the fact that gender is not an internal reality that cannot be changed; rather, this is a common misconception that a majority of the population unknowingly reinforces, and it sees its emergence during infancy.
Butler suggests in both "Critically Queer" and "Melancholy Gender" that the child/subject's ability to grieve the loss of the same-sex parent as a viable love object is barred. Following from Sigmund Freud's notion of melancholia, such a repudiation results in a heightened identification with the Other that cannot be loved, resulting in gender performances which create allegories of, and internalize the lost love that the subject is subsequently unable to acknowledge or grieve. Butler explains that "a masculine gender is formed from the refusal to grieve the masculine as a possibility of love; a feminine gender is formed (taken on, assumed) through the fantasy which the feminine is excluded as a possible object of love, an exclusion never grieved, but 'preserved' through the heightening of feminine identification itself".
Teen years
One's teen years are the prime time in which socialization occurs, as well as the time in which how one presents oneself in society is of high concern. Often, this is the time in which one's ability to master their gender performance labels them as successful, and thus normal, or unsuccessful, and thus strange and unfitting. One of the sources that demonstrate how successful performance is acted out is magazines, specifically magazines targeting young girls. According to Eckert, "When we are teenagers, the teen magazines told girls how to make conversation with boys...". This not only emphasizes that gender is something taught to us and continuously shaped by society's expectations, but also points to one of the ways in which individuals are subconsciously trained to be ideal participants in the gender binary, calling back to Butler's perception that gender is not a fact about us but something taught to us and constantly reinforced. This idea that gender is constantly shaped by expectations is relevant in the online community. Teenagers are easily able to form relationships and friendships online, increasing the probability of a teenager's delicate identity being manipulated and distorted. Teenagers often come across situations in real life and online that cause them to question themselves when facing society, including in their gender performance.
Queer identity
The Butlerian model presents a queer perspective on gender performance and explores the possible intersection between socially constructed gender roles and compulsory heterosexuality. This model diverges from the hegemonic analytical framework of gender that many claim is heteronormative, contending with the ways in which queer actors problematize the traditional construction of gender. Butler adapts the psychoanalytical term melancholia to conceptualize homoerotic subtext as it exists in western literature, especially the relationship between women writers, their gender, and their sexuality. Melancholia deals with mourning, but for homosexual couples it is not just mourning the death of the relationship; it is also the societal disavowal of the relationship itself and of the ability to mourn, leading to repression of these feelings. This idea is reflected in the activism organized by political groups such as ACT UP during the AIDS crisis. Many of the survivors who participated in this activism were homosexuals who had lost their partners to the disease. The survivors commemorated the dead by quilting together their rags, repurposing their possessions, and displaying their own bodies for premature mourning. All of these protests amounted to a message that some part of the dead would be left in the world after they had expired.
Criticism
Feminist theory
Elizabeth Grosz states that the sex–gender distinction maintained by some constructionist feminists is still based on essentialism:
Martha Nussbaum criticizes gender performativity as a misguided retreat from engaging with real-world concerns:
Transgender studies
In Assuming a Body: Transgender and Rhetorics of Materiality (2010), Gayle Salamon examined trans studies' affinity with feminist and queer theorizing of gender, including objections within trans studies to social constructionist accounts.
Such objections can be found in the works of Jay Prosser, Viviane Namaste, and Henry Rubin, often in relation to Butler's theory of gender performativity. Salamon interprets their arguments as misreadings of social constructionism, while Jack Halberstam identifies a "recommitment to essentialism within transsexual theory".
Conversely, Susan Stryker affirmed that gender performativity "became central to the self-understandings of many transgender people" and is in line with Sandy Stone's posttranssexual call.
See also
Notes
References
Further reading
Gender roles
Gender
Feminist theory
Emic and etic
In anthropology, folkloristics, linguistics, and the social and behavioral sciences, emic and etic refer to two kinds of field research and the viewpoints obtained from them.
The "emic" approach is an insider's perspective, which looks at the beliefs, values, and practices of a particular culture from the perspective of the people who live within that culture. This approach aims to understand the cultural meaning and significance of a particular behavior or practice, as it is understood by the people who engage in it.
The "etic" approach, on the other hand, is an outsider's perspective, which looks at a culture from the perspective of an outside observer or researcher. This approach tends to focus on the observable behaviors and practices of a culture, and aims to understand them in terms of their functional or evolutionary significance. The etic approach often involves the use of standardized measures and frameworks to compare different cultures and may involve the use of concepts and theories from other disciplines, such as psychology or sociology.
The emic and etic approaches each have their own strengths and limitations, and each can be useful in understanding different aspects of culture and behavior. Some anthropologists argue that a combination of both approaches is necessary for a complete understanding of a culture, while others argue that one approach may be more appropriate depending on the specific research question being addressed.
Definitions
"The emic approach investigates how local people think...". How they perceive and categorize the world, their rules for behavior, what has meaning for them, and how they imagine and explain things. "The etic (scientist-oriented) approach shifts the focus from local observations, categories, explanations, and interpretations to those of the anthropologist. The etic approach realizes that members of a culture often are too involved in what they are doing... to interpret their cultures impartially. When using the etic approach, the ethnographer emphasizes what he or she considers important."
Although emics and etics are sometimes regarded as inherently in conflict and one can be preferred to the exclusion of the other, the complementarity of emic and etic approaches to anthropological research has been widely recognized, especially in the areas of interest concerning the characteristics of human nature as well as the form and function of human social systems.
Emic and etic approaches to understanding behavior and personality fall under the study of cultural anthropology. Cultural anthropology states that people are shaped by their cultures and subcultures, and we must account for this in the study of personality. One way is to look at things through an emic approach. This approach "is culture specific because it focuses on a single culture and it is understood on its own terms." As explained below, the term "emic" originated from the specific linguistic term "phonemic", from phoneme, which is a language-specific way of abstracting speech sounds.
An 'emic' account is a description of behavior or a belief in terms meaningful (consciously or unconsciously) to the actor; that is, an emic account comes from a person within the culture. Almost anything from within a culture can provide an emic account.
An 'etic' account is a description of a behavior or belief by a social analyst or scientific observer (a student or scholar of anthropology or sociology, for example), in terms that can be applied across cultures; that is, an etic account attempts to be 'culturally neutral', limiting any ethnocentric, political or cultural bias or alienation by the observer.
When these two approaches are combined, the "richest" view of a culture or society can be understood. On its own, an emic approach would struggle with applying overarching values to a single culture. The etic approach is helpful in enabling researchers to see more than one aspect of one culture, and in applying observations to cultures around the world.
History
The terms were coined in 1954 by linguist Kenneth Pike, who argued that the tools developed for describing linguistic behaviors could be adapted to the description of any human social behavior. As Pike noted, social scientists have long debated whether their knowledge is objective or subjective. Pike's innovation was to turn away from an epistemological debate toward a methodological solution. Emic and etic are derived from the linguistic terms phonemic and phonetic, respectively: a phone is a distinct speech sound or gesture, regardless of whether the exact sound is critical to the meanings of words, whereas a phoneme is a speech sound in a given language that, if swapped with another phoneme, could change one word to another. For example, the aspirated [pʰ] of English "pin" and the unaspirated [p] of "spin" are different phones, but English treats both as the single phoneme /p/, since swapping them never changes one word into another. The possibility of a truly objective description was discounted by Pike himself in his original work; he proposed the emic-etic dichotomy in anthropology as a way around philosophical issues about the very nature of objectivity.
The terms were also championed by anthropologists Ward Goodenough and Marvin Harris with slightly different connotations from those used by Pike. Goodenough was primarily interested in understanding the culturally specific meaning of specific beliefs and practices; Harris was primarily interested in explaining human behavior.
Pike, Harris, and others have argued that cultural "insiders" and "outsiders" are equally capable of producing emic and etic accounts of their culture. Some researchers use "etic" to refer to objective or outsider accounts, and "emic" to refer to subjective or insider accounts.
Margaret Mead was an anthropologist who studied the patterns of adolescence in Samoa. She discovered that the difficulties and transitions that adolescents faced are culturally influenced. The hormones that are released during puberty can be defined using an etic framework, because adolescents globally have the same hormones secreted. However, Mead concluded that how adolescents respond to these hormones is greatly influenced by their cultural norms. Through her studies, Mead found that simple classifications of behaviors and personality could not be used because people's cultures influenced their behaviors in such a radical way. Her studies helped create an emic approach to understanding behaviors and personality. Her research deduced that culture has a significant impact in shaping an individual's personality.
Carl Jung, a Swiss psychoanalyst, is a researcher who took an emic approach in his studies. Jung studied mythology, religion, ancient rituals, and dreams, leading him to believe that there are archetypes that can be identified and used to categorize people's behaviors. Archetypes are universal structures of the collective unconscious that refer to the inherent way people are predisposed to perceive and process information. The main archetypes that Jung studied were the persona (how people choose to present themselves to the world), the anima and animus (the part of a person that experiences the world through the lens of the opposite sex and guides the selection of romantic partners), and the shadow (the dark side of the personality, which arises because people have a concept of evil; well-adjusted people must integrate both good and bad parts of themselves). Jung looked at the role of the mother and deduced that all people have mothers and see their mothers in a similar way: they offer nurture and comfort. His studies also suggest that "infants have evolved to suck milk from the breast, it is also the case that all children have inborn tendencies to react in certain ways." This way of looking at the mother is an emic way of applying a concept cross-culturally and universally.
Importance as regards personality
Emic and etic approaches are important to understanding personality because problems can arise "when concepts, measures, and methods are carelessly transferred to other cultures in attempts to make cross-cultural generalizations about personality." It is hard to apply certain generalizations of behavior to people who are so diverse and culturally different. One example of this is the F-scale (Macleod). The F-scale, created by Theodor Adorno, is used to measure authoritarian personality, which can, in turn, be used to predict prejudiced behaviors. When applied to Americans, the test accurately depicts prejudices towards black individuals. However, when a study using the F-scale was conducted in South Africa (Pettigrew and Friedman), the results did not predict any prejudices towards black individuals. This study used emic approaches by conducting interviews with the locals and etic approaches by giving participants generalized personality tests.
See also
Exonym and endonym
Other explorations of the differences between reality and humans' models of it:
Blind men and an elephant
Emic and etic units
Internalism and externalism
Map–territory relation
References
Further reading
External links
Emic and Etic Standpoints for the Description of Behavior, chapter 2 in Language in Relation to a Unified Theory of the Structure of Human Behavior, vol 2, by Kenneth Pike (published in 1954 by Summer Institute of Linguistics)
Anthropology
Dichotomies
Ethnography
Folklore
Metatheory
Fad
A fad, trend, or craze is any form of collective behavior that develops within a culture, a generation, or a social group, in which a group of people enthusiastically follow an impulse for a short time period.
Fads are objects or behaviors that achieve short-lived popularity but fade away. Fads are often seen as sudden, quick-spreading, and short-lived events. Fads include diets, clothing, hairstyles, toys, and more. Some popular fads throughout history are toys such as yo-yos, hula hoops, and fad dances such as the Macarena, floss and the twist.
Similar to habits or customs but less durable, fads often result from an activity or behavior being perceived as popular or exciting within a peer group, or being deemed "cool" as often promoted by social networks. A fad is said to "catch on" when the number of people adopting it begins to increase to the point of being noteworthy or going viral. Fads often fade quickly when the perception of novelty is gone.
Overview
The specific nature of the behavior associated with a fad can be of any type, including unusual language usage, distinctive clothing, fad diets, or frauds such as pyramid schemes. Apart from general novelty, fads may be driven by mass marketing, emotional blackmail, peer pressure, or the desire for conformity. Popular celebrities can also drive fads, for example the highly popularizing effect of Oprah's Book Club.
Though some consider the term trend equivalent to fad, a fad is generally considered a quick and short behavior whereas a trend is one that evolves into a long term or even permanent change.
Economics
In economics, the term is used in a similar way. Fads are mean-reverting deviations from intrinsic value caused by social or psychological forces similar to those that cause fashions in political philosophies or consumerisation.
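One minimal way to make the idea of mean reversion concrete is as a standard autoregressive process. The sketch below is illustrative only; the notation (observed valuation p_t, intrinsic value v, fad component f_t, persistence ρ, shock ε) is ours rather than drawn from the economics literature referenced here:

% Illustrative sketch (notation ours, not from the cited literature):
% p_t = observed valuation, v = intrinsic value, f_t = fad component,
% rho = persistence of the fad, epsilon = social/psychological shock.
\[
  p_t = v + f_t, \qquad f_{t+1} = \rho f_t + \varepsilon_{t+1}, \qquad 0 < \rho < 1,
\]
\[
  \mathbb{E}[\, f_{t+k} \mid f_t \,] = \rho^{k} f_t \to 0 \quad \text{as } k \to \infty.
\]

On this reading, a burst of social enthusiasm enters as a positive shock ε, temporarily pushing the observed valuation p above the intrinsic value v, and mean reversion (ρ < 1) captures the eventual fading of the fad.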
Formation
Many contemporary fads share similar patterns of social organization. Several different models serve to examine fads and how they spread.
One way of looking at the spread of fads is through the top-down model, which argues that fashion is created for the elite, and from the elite, fashion spreads to lower classes. Early adopters might not necessarily be those of a high status, but they have sufficient resources that allow them to experiment with new innovations. When looking at the top-down model, sociologists like to highlight the role of selection. The elite might be the ones that introduce certain fads, but other people must choose to adopt those fads.
Others may argue that not all fads begin with their adopters. Social life already provides people with ideas that can help create a basis for new and innovative fads. Companies can look at what people are already interested in and create something from that information. The ideas behind fads are not always original; they might stem from what is already popular at the time. Recreation and style faddists may try out variations of a basic pattern or idea already in existence.
Another way of looking at the spread of fads is through a symbolic interaction view. People learn their behaviors from the people around them. When it comes to collective behavior, the emergence of shared rules, meanings, and emotions is more dependent on the cues of the situation than on physiological arousal. This connection to symbolic interactionism, a theory that explains people's actions as being directed by shared meanings and assumptions, explains that fads are spread because people attach meaning and emotion to objects, not because the object has practical use. People might adopt a fad because of the meanings and assumptions they share with the other people who have adopted that fad. People may join other adopters of the fad because they enjoy being a part of a group and what that symbolizes. Some people may join because they want to feel like an insider. When multiple people adopt the same fad, they may feel like they have made the right choice because other people have made that same choice.
Termination
Primarily, fads end because all innovative possibilities have been exhausted. Fads begin to fade when people no longer see them as new and unique. As more people follow the fad, some might start to see it as "overcrowded", and it no longer holds the same appeal. Often, those who adopt a fad first also abandon it first, once they recognize that their preoccupation with it leads them to neglect routine activities and has negative aspects of its own. Once the faddists stop producing new variations of the fad, remaining adopters tend to reach the same realization. Not everyone completely abandons the fad, however, and parts of it may remain.
A study examined why certain fads die out more quickly than others. Jonah Berger, a marketing professor at the University of Pennsylvania's Wharton School of Business, and his colleague Gael Le Mens studied baby names in the United States and France to help explore the termination of fads. According to their results, the faster names became popular, the faster they lost their popularity. They also found that the least successful names overall were those that caught on most quickly. Fads, like baby names, often lose their appeal just as quickly as they gained it.
Collective behavior
Fads can fit under the broad umbrella of collective behavior, which encompasses behaviors engaged in by a large but loosely connected group of people. Other than fads, collective behavior includes the activities of people in crowds, panics, fashions, crazes, and more.
Robert E. Park, who coined the term collective behavior, defined it as "the behavior of individuals under the influence of an impulse that is common and collective, an impulse, in other words, that is the result of social interaction". Fads are seen as impulsive and driven by emotions; however, they can bring together groups of people who may not have much in common other than their investment in the fad.
Collective obsession
Fads can also fit under the umbrella of "collective obsessions". Collective obsessions have three main features in common. The first, and most obvious, is an increase in the frequency and intensity of a specific belief or behavior; a fad's popularity increases quickly in frequency and intensity, whereas a trend grows more slowly. The second is that the behavior is seen as ridiculous, irrational, or evil by people who are not part of the obsession; to them, the fad and people's obsession with it are equally absurd. The third is that, after reaching a peak, the behavior drops off abruptly and is followed by a counter obsession, meaning that once the fad is over, those who still engage in it are ridiculed. A fad's popularity often decreases rapidly once its novelty wears off, and some people may begin to criticize the fad by pointing out that it is no longer popular, so it must not have been "worth the hype".
See also
Bandwagon effect
:Category:Fads (notable fads through history)
Coolhunting
Crowd psychology
Google Trends
Hype
List of Internet phenomena
Market trend
Memetics
Peer pressure
Retro style
Social contagion
Social mania
Viral phenomenon
15 minutes of fame
Bellwether (1996 novel)
References
Best, Joel (2006). Flavor of the Month: Why Smart People Fall for Fads. University of California Press.
Burke, Sarah. "5 Marketing Strategies, 1 Question: Fad or Trend?". Spokal.
Conley, Dalton (2015). You May Ask Yourself: An Introduction to Thinking Like a Sociologist. New York: W.W. Norton & Co.
Griffith, Benjamin (2013). "College Fads". St. James Encyclopedia of Popular Culture – via Gale Virtual Reference Library.
Heussner, Ki Mae. "7 Fads You Won't Forget". ABC News.
Killian, Lewis M.; Smelser, Neil J.; Turner, Ralph H. "Collective behavior". Encyclopædia Britannica.
History of Eurasia
The history of Eurasia is the collective history of a continental area with several distinct peripheral coastal regions: Southwest Asia, South Asia, East Asia, Southeast Asia, and Western Europe, linked by the interior mass of the Eurasian steppe of Central Asia and Eastern Europe. Perhaps beginning with the Steppe Route trade (the early Silk Road), the Eurasian view of history seeks to establish genetic, cultural, and linguistic links between the Eurasian cultures of antiquity. Much interest in this area lies with the presumed origin of the speakers of the Proto-Indo-European language and chariot warfare in Central Eurasia.
Prehistory
Lower Paleolithic
Fossilized remains of Homo ergaster and Homo erectus between 1.8 and 1.0 million years old have been found in Europe (Georgia (Dmanisi), Spain), Indonesia (e.g., Sangiran and Trinil), Vietnam, and China (e.g., Shaanxi). (See also: Multiregional hypothesis.) The earliest remains belong to the Oldowan culture, later ones to the Acheulean and Clactonian cultures. Finds of later fossils, such as Homo cepranensis, are local in nature, so the extent of human residence in Eurasia during 1,000,000–300,000 ybp remains a mystery.
Middle Paleolithic
Geologic temperature records indicate two intense ice ages, dated around 650,000 ybp and 450,000 ybp. These would have presented any humans outside the tropics with unprecedented difficulties. Indeed, fossils from this period are very few, and little can be said of human habitats in Eurasia during it. The few finds are of Homo antecessor and Homo heidelbergensis, and of Lantian Man in China.
After this, Homo neanderthalensis, with its Mousterian technology, emerged in areas from Europe to western Asia and continued to be the dominant group of humans in Europe and the Middle East until 40,000–28,000 ybp. Peking Man has also been dated to this period. During the Eemian Stage, humans probably spread wherever their technology and skills allowed (e.g., Wolf Cave). The Sahara dried up, forming a region that was difficult for peoples to cross.
The emergence of the first modern humans (Homo sapiens idaltu) has been dated to between 200,000 and 130,000 ybp (see: Mitochondrial Eve, single-origin hypothesis), that is, to the coldest phase of the Riss glaciation. Remains of the Aterian culture appear in the archaeological evidence.
Population bottleneck
At the beginning of the last ice age, a supervolcano (Toba) erupted in Indonesia. According to the Toba catastrophe theory, the effects of the eruption caused global climatic changes for many years, effectively obliterating most of the earlier cultures. Y-chromosomal Adam was initially dated to this period (90,000–60,000 BP). Neanderthals survived this abrupt change in the environment, so other human groups may have as well. According to the theory, humans survived in Africa and began to resettle areas to the north as the effects of the eruption slowly faded. The Upper Paleolithic revolution began after this extreme event; the earliest finds are dated to c. 50,000 BC.
A divergence in the genetic evidence occurs during the early phase of the glaciation. Descendants of the female (mitochondrial) haplogroups M and N and the male (Y-chromosomal) haplogroup CT are the ones found among Eurasian peoples today.
Upper Paleolithic
The Southern Dispersal scenario postulates the arrival of anatomically modern humans to Eurasia beginning about 70,000 BC. Moving along the southern coast of Asia, they reached Maritime Southeast Asia by about 65,000 years ago.
The establishment of population centers in Western Asia, the Indian subcontinent and in East Asia is attested by about 50,000 years ago.
The Eurasian Upper Paleolithic proper is taken to begin c. 45,000 years ago, with the Cro-Magnon expansion into Europe (the Aurignacian) and the expansion into the Mammoth steppe of Northern Asia.
Migrations
Tracing minute differences in the genomes of modern humans by the methods of genetic genealogy can be, and has been, used to produce models of historical migration. Though these give indications of the routes taken by ancestral humans, genetic marker dating is still becoming more accurate. The earliest migrations from the Red Sea shores (dated c. 75,000 BP) most likely followed the southern coast of Asia. After this, tracking and timing genetic markers becomes increasingly difficult. What is known is that in the areas of what is now Iraq, Iran, Pakistan, and Afghanistan, genetic markers diversify (from about 60,000 BC), and subsequent migrations emerge in all directions, including back to the Levant and into North Africa.

From the foothills of the Zagros, big-game-hunting cultures developed which spread across the Eurasian steppe. Crossing the Caucasus and the Ural Mountains were the ancestors of the Samoyeds and of the Uralic peoples, developing sleds, skis, and canoes. Through Kazakhstan moved the ancestors of the Indigenous Americans (dated 50,000–40,000 BC). Eastbound (perhaps through Dzungaria and the Tarim Basin) went the ancestors of the northern Chinese and Koreans.

It is possible that the Indo-European ancestors took routes across the Bosphorus. Genetic evidence suggests a number of separate migrations (1. Anatolians, 2. Tocharians, 3. Celto-Illyrians, 4. Germanic and Slavic peoples, possibly in this order). Archaeological evidence has not been identified for a number of these groups; on the historical linguistic evidence, see for example the classification of Thracian. The traditional view associates the early Celts with the Hallstatt culture and the Germanic peoples with the Nordic Bronze Age. Iron first came into widespread use outside Central Europe from the Villanovan culture area, after which the Roman Empire spread. Most likely there was trade in these periods as well, with amber and salt being major products.
Influences from northern Africa via Gibraltar and Sicily cannot be readily discounted. Many other questions remain open, too; for example, Neanderthals were still present at this time. More genetic data is being gathered by various research programs.
Early Holocene
As the ice age ended, major environmental changes occurred, such as sea level rise (est. 120 m), vegetation changes, and the disappearance of animals in the Holocene extinction event. At the same time the Neolithic Revolution began: humans started to make pottery, began to cultivate crops, and domesticated some animal species.
Neolithic cultures in Eurasia are many, and are best discussed in separate articles; some of the articles on this subject include Natufian culture, Jōmon culture, the List of Neolithic cultures of China, and Mehrgarh. European sites are numerous and are discussed in Prehistoric Europe. The finding of Ötzi the Iceman (dated 3300 BC) provides an important insight into the Chalcolithic period in Europe. The proto-languages of various peoples were forming in this period, though no written evidence can (by definition) be found. Later migrations further complicate the study of migrations in this period.
Emergence of civilizations
Due to the similarities between Indo-European languages spoken throughout Europe, Iran, and India, it is widely believed that a group originating in the Pontic steppe in the 5th millennium BC spread both east and west, gradually making their way towards the Indian subcontinent and China in the east and western Europe in the west. These Proto-Indo-Europeans spread their languages into the Middle East, India, Europe, and to the borders of China.
Early forms of civilization in Southwest Asia began as early as the 8th millennium BC, in proto-urban centers such as Çatalhöyük. Urban civilizations began to emerge in the Chalcolithic. The earliest urban civilizations in Mesopotamia, India, and China all developed along river valleys. The Uruk period of Mesopotamia dates from about 4000 to 3100 BC and provides the earliest signs of the existence of states in the Near East. Civilizations grew along the Indus River around 3300 BC in Bronze Age India and around 1700 BC along the Yellow River in China. The valleys provided plentiful water and the enrichment of the soil due to annual floods, which made it possible to grow excess crops beyond what was needed to sustain an agricultural village. This allowed for some members of the community to engage in non-agricultural activities, such as the construction of buildings, trade, and social organization. Boats on the river provided an easy and efficient way to transport people and goods, allowing for the development of trade and facilitating central control of outlying areas. Writing likely developed independently in multiple Eurasian civilizations, including Mesopotamia (between 3400 and 3100 BC) and China (1200 BC).
In southern Europe, the Minoan civilization of the Aegean Islands began around 3500 BC, with the complex urban civilization beginning around 2000 BC. It left behind a number of massive building complexes, sophisticated art, and writing systems. Its economy benefited from a network of trade around much of the Mediterranean. By the 2nd millennium BC, the eastern coastlines of the Mediterranean were dominated by the Hittite and Egyptian empires, competing for control over the city states in the Levant.
The Black Sea area was another cradle of European civilization. The prehistoric fortified stone settlement of Solnitsata (5500–4200 BC) is one of the oldest known towns in Europe. The Bronze Age arose in this region during the final centuries of the 4th millennium.
The Bronze Age collapse ended the Late Bronze Age in much of Europe and the Mediterranean region, leading to the Early Iron Age. The Bronze Age collapse may be seen in the context of a technological history that saw the slow, comparatively continuous spread of iron-working technology in the region, beginning in the 13th and 12th centuries BC. The cultural collapse of the Mycenaean kingdoms, of the Hittite Empire in Anatolia and Syria, and of the Egyptian Empire in Syria and Israel, the severing of long-distance trade contacts, and a sudden eclipse of literacy occurred between 1206 and 1150 BC. The gradual end of the Dark Age that ensued saw the rise of the settled Neo-Hittite and Aramaean kingdoms of the mid-10th century BC, and the rise of the Neo-Assyrian Empire.
Phoenician expansion from the Levant beginning in the 12th century BC resulted in a "world-economy". The high point of Phoenician culture and sea power is usually placed ca. 1200–800 BC. The Phoenicians and the Assyrians transported elements of the Late Bronze Age culture of the Near East to Iron Age Greece and Italy, but also further afield to Northwestern Africa and to Iberia, initiating the beginning of Mediterranean history now known as Classical Antiquity. They notably spread alphabetic writing, which would become the hallmark of the Mediterranean civilizations of the Iron Age, in contrast to the cuneiform writing of Assyria and the logographic system in the Far East (and later the abugida systems of India).
The Iron Age made large stands of timber essential to a nation's success, because smelting iron required so much fuel, and the centers of human civilization gradually moved as forests were destroyed. In Europe the Mediterranean region was supplanted by the German and Frankish lands. In the Middle East the main power center became Anatolia, with once-dominant Mesopotamia its vassal. In China, the economic, agricultural, and industrial center moved from the northern Yellow River to the southern Yangtze, though the political center remained in the north. In part this is linked to technological developments, such as the mouldboard plough, that made life in previously undeveloped areas more bearable.
In the Axial Age, China, India, and the Mediterranean formed a continuous belt of civilizations stretching from the Pacific to the Atlantic and connected by the Silk Road. The Indo-Mediterranean was the center of Afro-Eurasian connectivity in general until around 1000 AD, with Warwick Ball and William Dalrymple arguing that the Silk Road's prominence only rose with the Pax Mongolica from the 13th century onwards, and Dalrymple further arguing that until then, the main connecting route in Eurasia was a "Golden Road" going through the Indian Ocean and South Asia.
See also
Bibliography of the history of Central Asia
History of Asia
History of Europe
History of the Middle East
History of South Asia
History of East Asia
History of Southeast Asia
History of Central Asia
History of the World
Indo-European studies
Indo-Aryan migration hypothesis
Steppe Route
Turkic migration
References
Beckwith, Christopher I. (2009). Empires of the Silk Road: A History of Central Eurasia from the Bronze Age to the Present. Princeton: Princeton University Press.
Schafer, Edward H. (1985) [1963]. The Golden Peaches of Samarkand. Berkeley: University of California Press.
Information Age
The Information Age (also known as the Third Industrial Revolution, Computer Age, Digital Age, Silicon Age, New Media Age, Internet Age, or the Digital Revolution) is a historical period that began in the mid-20th century. It is characterized by a rapid shift from traditional industries, as established during the Industrial Revolution, to an economy centered on information technology. The onset of the Information Age has been linked to the development of the transistor in 1947 and the optical amplifier in 1957. These technological advances have had a significant impact on the way information is processed and transmitted.
According to the United Nations Public Administration Network, the Information Age was formed by capitalizing on computer microminiaturization advances, which led to modernized information systems and internet communications as the driving force of social evolution.
There is ongoing debate concerning whether the Third Industrial Revolution has already ended and if the Fourth Industrial Revolution has already begun due to the recent breakthroughs in areas such as artificial intelligence and biotechnologies. This next transition has been theorized to harken the advent of the Imagination Age.
History
The digital revolution converted technology from analog format to digital format. By doing this, it became possible to make copies that were identical to the original. In digital communications, for example, repeating hardware was able to amplify the digital signal and pass it on with no loss of information in the signal. Of equal importance to the revolution was the ability to easily move the digital information between media, and to access or distribute it remotely. One turning point of the revolution was the change from analog to digitally recorded music. During the 1980s the digital format of optical compact discs gradually replaced analog formats, such as vinyl records and cassette tapes, as the popular medium of choice.
Previous inventions
Humans have manufactured tools for counting and calculating since ancient times, such as the abacus, astrolabe, equatorium, and mechanical timekeeping devices. More complicated devices started appearing in the 1600s, including the slide rule and mechanical calculators. By the early 1800s, the Industrial Revolution had produced mass-market calculators like the arithmometer and the enabling technology of the punch card. Charles Babbage proposed a mechanical general-purpose computer called the Analytical Engine, but it was never successfully built, and was largely forgotten by the 20th century and unknown to most of the inventors of modern computers.
The Second Industrial Revolution in the last quarter of the 19th century developed useful electrical circuits and the telegraph. In the 1880s, Herman Hollerith developed electromechanical tabulating and calculating devices using punch cards and unit record equipment, which became widespread in business and government.
Meanwhile, various analog computer systems used electrical, mechanical, or hydraulic systems to model problems and calculate answers. These included an 1872 tide-predicting machine, differential analysers, perpetual calendar machines, the Deltar for water management in the Netherlands, network analyzers for electrical systems, and various machines for aiming military guns and bombs. The construction of problem-specific analog computers continued in the late 1940s and beyond, with FERMIAC for neutron transport, Project Cyclone for various military applications, and the Phillips Machine for economic modeling.
Building on the complexity of the Z1 and Z2, German inventor Konrad Zuse used electromechanical systems to complete in 1941 the Z3, the world's first working programmable, fully automatic digital computer. Also during World War II, Allied engineers constructed electromechanical bombes to break German Enigma machine encoding. The base-10 electromechanical Harvard Mark I was completed in 1944, and was to some degree improved with inspiration from Charles Babbage's designs.
1947–1969: Origins
In 1947, the first working transistor, the germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain while working under William Shockley at Bell Labs. This led the way to more advanced digital computers. From the late 1940s, universities, military, and businesses developed computer systems to digitally replicate and automate previously manually performed mathematical calculations, with the LEO being the first commercially available general-purpose computer.
Digital communication became economical for widespread adoption after the invention of the personal computer in the 1970s. Claude Shannon, a Bell Labs mathematician, is credited for having laid out the foundations of digitalization in his pioneering 1948 article, A Mathematical Theory of Communication.
In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer. Their concept forms the basis of CMOS and DRAM technology today. In 1957 at Bell Labs, Frosch and Derick were able to manufacture planar silicon dioxide transistors; later, a team at Bell Labs demonstrated a working MOSFET. The first integrated circuit milestone was achieved by Jack Kilby in 1958.
Other important technological developments included the invention of the monolithic integrated circuit chip by Robert Noyce at Fairchild Semiconductor in 1959, made possible by the planar process developed by Jean Hoerni. In 1963, complementary MOS (CMOS) was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor. The self-aligned gate transistor, which further facilitated mass production, was invented in 1966 by Robert Bower at Hughes Aircraft and independently by Robert Kerwin, Donald Klein and John Sarace at Bell Labs.
In 1962 AT&T deployed the T-carrier for long-haul pulse-code modulation (PCM) digital voice transmission. The T1 format carried 24 pulse-code modulated, time-division multiplexed speech signals each encoded in 64 kbit/s streams, leaving 8 kbit/s of framing information which facilitated the synchronization and demultiplexing at the receiver. Over the subsequent decades the digitisation of voice became the norm for all but the last mile (where analogue continued to be the norm right into the late 1990s).
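As a check on those figures, here is a minimal arithmetic sketch (the 8,000 frames-per-second PCM sampling rate is an assumption of this example, standard in digital telephony but not stated above):

```python
# Reconstruct the T1 line rate from the figures in the text.
# Assumption (not stated above): PCM telephony samples each voice
# channel 8,000 times per second with 8 bits per sample, and each
# T1 frame carries one extra framing bit.

CHANNELS = 24           # voice channels per T1, as stated
BITS_PER_SAMPLE = 8     # one PCM sample per channel per frame
FRAMES_PER_SECOND = 8_000

payload_bps = CHANNELS * BITS_PER_SAMPLE * FRAMES_PER_SECOND
framing_bps = 1 * FRAMES_PER_SECOND        # one framing bit per frame
line_rate_bps = payload_bps + framing_bps

print(f"per-channel rate : {BITS_PER_SAMPLE * FRAMES_PER_SECOND:,} bit/s")  # 64,000
print(f"payload rate     : {payload_bps:,} bit/s")                          # 1,536,000
print(f"framing overhead : {framing_bps:,} bit/s")                          # 8,000
print(f"T1 line rate     : {line_rate_bps:,} bit/s")                        # 1,544,000
```

The result reproduces the 64 kbit/s per-channel streams and 8 kbit/s of framing quoted above, summing to the standard 1.544 Mbit/s T1 rate.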
Following the development of MOS integrated circuit chips in the early 1960s, MOS chips reached higher transistor density and lower manufacturing costs than bipolar integrated circuits by 1964. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. In 1968, Fairchild engineer Federico Faggin improved MOS technology with his development of the silicon-gate MOS chip, which he later used to develop the Intel 4004, the first single-chip microprocessor. It was released by Intel in 1971, and laid the foundations for the microcomputer revolution that began in the 1970s.
MOS technology also led to the development of semiconductor image sensors suitable for digital cameras. The first such image sensor was the charge-coupled device, developed by Willard S. Boyle and George E. Smith at Bell Labs in 1969, based on MOS capacitor technology.
1969–1989: Invention of the internet, rise of home computers
The public was first introduced to the concepts that led to the Internet when a message was sent over the ARPANET in 1969. Packet switched networks such as ARPANET, Mark I, CYCLADES, Merit Network, Tymnet, and Telenet, were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks.
The Whole Earth movement of the 1960s advocated the use of new technology.
The 1970s saw the introduction of the home computer and time-sharing computers, the video game console, and the first coin-op video games, and the golden age of arcade video games began with Space Invaders. As digital technology proliferated, and the switch from analog to digital record keeping became the new standard in business, a relatively new job description was popularized: the data entry clerk. Culled from the ranks of secretaries and typists from earlier decades, the data entry clerk's job was to convert analog data (customer records, invoices, etc.) into digital data.
In developed nations, computers achieved semi-ubiquity during the 1980s as they made their way into schools, homes, business, and industry. Automated teller machines, industrial robots, CGI in film and television, electronic music, bulletin board systems, and video games all fueled what became the zeitgeist of the 1980s. Millions of people purchased home computers, making household names of early personal computer manufacturers such as Apple, Commodore, and Tandy. To this day the Commodore 64 is often cited as the best selling computer of all time, having sold 17 million units (by some accounts) between 1982 and 1994.
In 1984, the U.S. Census Bureau began collecting data on computer and Internet use in the United States; their first survey showed that 8.2% of all U.S. households owned a personal computer in 1984, and that households with children under the age of 18 were nearly twice as likely to own one at 15.3% (middle and upper middle class households were the most likely to own one, at 22.9%). By 1989, 15% of all U.S. households owned a computer, and nearly 30% of households with children under the age of 18 owned one. By the late 1980s, many businesses were dependent on computers and digital technology.
Motorola created the first mobile phone, the Motorola DynaTAC, in 1983. However, this device used analog communication; digital cell phones were not sold commercially until 1991, when the 2G network began to open in Finland to accommodate the unexpected demand for cell phones that had become apparent in the late 1980s.
Compute! magazine predicted that CD-ROM would be the centerpiece of the revolution, with multiple household devices reading the discs.
The first true digital camera was created in 1988, and the first digital cameras were marketed in December 1989 in Japan and in 1990 in the United States. By the early 2000s, digital cameras had eclipsed traditional film in popularity.
Digital ink and paint was also invented in the late 1980s. Disney's CAPS system (created 1988) was used for a scene in 1989's The Little Mermaid and for all their animation films between 1990's The Rescuers Down Under and 2004's Home on the Range.
1989–2005: Invention of the World Wide Web, mainstreaming of the Internet, Web 1.0
Tim Berners-Lee invented the World Wide Web in 1989.
The first public digital HDTV broadcast was of the 1990 World Cup that June; it was played in 10 theaters in Spain and Italy. However, HDTV did not become a standard until the mid-2000s outside Japan.
The World Wide Web, which had been available only to governments and universities, became publicly accessible in 1991. In 1993 Marc Andreessen and Eric Bina introduced Mosaic, the first web browser capable of displaying inline images and the basis for later browsers such as Netscape Navigator and Internet Explorer. Stanford Federal Credit Union was the first financial institution to offer online internet banking services to all of its members, in October 1994. In 1996 OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. The Internet expanded quickly, and by 1996 it was part of mass culture, and many businesses listed websites in their ads. By 1999, almost every country had a connection, and nearly half of Americans and people in several other countries used the Internet on a regular basis. Throughout the 1990s, however, "getting online" entailed complicated configuration, and dial-up was the only connection type affordable by individual users; the present-day mass Internet culture was not yet possible.
In 1989, about 15% of all households in the United States owned a personal computer; among households with children, nearly 30% owned a computer in 1989, and 65% owned one in 2000.
Cell phones became as ubiquitous as computers by the early 2000s, with movie theaters beginning to show ads telling people to silence their phones. They also became much more advanced than phones of the 1990s, most of which only took calls or at most allowed for the playing of simple games.
Text messaging became widely used worldwide in the late 1990s, except in the United States, where it did not become commonplace until the early 2000s.
The digital revolution became truly global in this time as well: after revolutionizing society in the developed world in the 1990s, it spread to the masses in the developing world in the 2000s.
By 2000, a majority of U.S. households had at least one personal computer and internet access the following year. In 2002, a majority of U.S. survey respondents reported having a mobile phone.
2005–2020: Web 2.0, social media, smartphones, digital TV
In late 2005 the population of the Internet reached 1 billion, and 3 billion people worldwide used cell phones by the end of the decade. HDTV became the standard television broadcasting format in many countries by the end of the decade. In September and December 2006 respectively, Luxembourg and the Netherlands became the first countries to completely transition from analog to digital television. In September 2007, a majority of U.S. survey respondents reported having broadband internet at home. According to estimates from the Nielsen Media Research, approximately 45.7 million U.S. households in 2006 (or approximately 40 percent of approximately 114.4 million) owned a dedicated home video game console, and by 2015, 51 percent of U.S. households owned a dedicated home video game console according to an Entertainment Software Association annual industry report. By 2012, over 2 billion people used the Internet, twice the number using it in 2007. Cloud computing had entered the mainstream by the early 2010s. In January 2013, a majority of U.S. survey respondents reported owning a smartphone. By 2016, half of the world's population was connected and as of 2020, that number has risen to 67%.
Rise in digital technology use of computers
In the late 1980s, less than 1% of the world's technologically stored information was in digital format, while it was 94% in 2007, with more than 99% by 2014.
It is estimated that the world's capacity to store information has increased from 2.6 (optimally compressed) exabytes in 1986, to some 5,000 exabytes in 2014 (5 zettabytes).
Overview of early developments
Library expansion and Moore's law
Library expansion was calculated in 1945 by Fremont Rider to double in capacity every 16 years, were sufficient space made available. He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on demand for library patrons and other institutions.
Rider did not foresee, however, the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media, whereby vast increases in the rapidity of information growth would be made possible through automated, potentially lossless digital technologies. Accordingly, Moore's law, formulated around 1965, predicted that the number of transistors in a dense integrated circuit doubles approximately every two years.
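As a rough illustration of that doubling rule, the following sketch projects transistor counts forward in time. The starting point, the Intel 4004 of 1971 with a conventionally cited count of about 2,300 transistors, is an assumption of this example rather than a figure given in the text:

```python
# A rough Moore's-law projection: transistor count doubling every
# two years. Starting point (assumed, not from the text): the Intel
# 4004 of 1971, conventionally cited as having about 2,300 transistors.

START_YEAR = 1971
START_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def moores_law_estimate(year: int) -> float:
    """Estimated transistor count for a leading-edge chip of a given year."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{moores_law_estimate(year):,.0f} transistors")
```

The projection reaches tens of billions of transistors by the 2020s, the right order of magnitude for modern processors, which is all such a back-of-the-envelope model can claim.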
By the early 1980s, along with improvements in computing power, the proliferation of the smaller and less expensive personal computers allowed for immediate access to information and the ability to share and store it. Connectivity between computers within organizations enabled access to greater amounts of information.
Information storage and Kryder's law
The world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes (EB) in 1986 to 15.8 EB in 1993; over 54.5 EB in 2000; and to 295 (optimally compressed) EB in 2007. This is the informational equivalent of less than one 730-megabyte (MB) CD-ROM per person in 1986 (539 MB per person); roughly four CD-ROMs per person in 1993; twelve CD-ROMs per person in the year 2000; and almost sixty-one CD-ROMs per person in 2007. It is estimated that the world's capacity to store information reached 5 zettabytes in 2014, the informational equivalent of 4,500 stacks of printed books from the Earth to the Sun.
The amount of digital data stored appears to be growing approximately exponentially, reminiscent of Moore's law. As such, Kryder's law observes that the amount of available storage space also grows approximately exponentially.
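The per-person CD-ROM equivalents quoted above can be rechecked with simple arithmetic. The sketch below does so, assuming approximate world populations of 4.8 billion in 1986 and 6.6 billion in 2007 (population figures are assumptions of this example, not given in the text):

```python
# Check the per-person CD-ROM equivalents quoted above.
# Assumptions (not from the text): world population of roughly
# 4.8e9 in 1986 and 6.6e9 in 2007.

CD_ROM_BYTES = 730e6  # 730 MB per CD-ROM, as stated

datapoints = {
    1986: (2.6e18, 4.8e9),   # 2.6 EB stored, ~4.8 billion people
    2007: (295e18, 6.6e9),   # 295 EB stored, ~6.6 billion people
}

for year, (stored_bytes, population) in datapoints.items():
    per_person = stored_bytes / population
    cds = per_person / CD_ROM_BYTES
    print(f"{year}: {per_person / 1e6:,.0f} MB per person "
          f"(~{cds:.1f} CD-ROMs each)")
```

The output comes to roughly 540 MB (under one CD-ROM) per person in 1986 and about 61 CD-ROMs per person in 2007, matching the figures in the text.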
Information transmission
The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986; 715 (optimally compressed) exabytes in 1993; 1.2 (optimally compressed) zettabytes in 2000; and 1.9 zettabytes in 2007, the information equivalent of 174 newspapers per person per day.
The world's effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of (optimally compressed) information in 1986; 471 petabytes in 1993; 2.2 (optimally compressed) exabytes in 2000; and 65 (optimally compressed) exabytes in 2007, the information equivalent of six newspapers per person per day. In the 1990s, the spread of the Internet caused a sudden leap in access to and ability to share information in businesses and homes globally. A computer that cost $3000 in 1997 would cost $2000 two years later and $1000 the following year, due to the rapid advancement of technology.
Computation
The world's technological capacity to compute information with human-guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986, to 4.4 × 10^9 MIPS in 1993, to 2.9 × 10^11 MIPS in 2000, and to 6.4 × 10^12 MIPS in 2007. An article featured in the journal Trends in Ecology and Evolution in 2016 reported on this growth in the world's computing capacity.
Genetic information
Genetic code may also be considered part of the information revolution. Now that sequencing has been computerized, the genome can be rendered and manipulated as data. This started with DNA sequencing, invented by Walter Gilbert and Allan Maxam in 1976–1977 and by Frederick Sanger in 1977; it grew steadily with the Human Genome Project, initially conceived by Gilbert, and finally with the practical applications of sequencing, such as gene testing, after the discovery by Myriad Genetics of the BRCA1 breast cancer gene mutation. Sequence data in GenBank has grown from the 606 sequences registered in December 1982 to 231 million genomes in August 2021. An additional 13 trillion incomplete sequences are registered in the Whole Genome Shotgun submission database as of August 2021. The information contained in these registered sequences has doubled every 18 months.
Different stage conceptualizations
During rare times in human history, there have been periods of innovation that have transformed human life. The Neolithic Age, the Scientific Age and the Industrial Age all, ultimately, induced discontinuous and irreversible changes in the economic, social and cultural elements of the daily life of most people. Traditionally, these epochs have taken place over hundreds, or in the case of the Neolithic Revolution, thousands of years, whereas the Information Age swept to all parts of the globe in just a few years, as a result of the rapidly advancing speed of information exchange.
Between 7,000 and 10,000 years ago, during the Neolithic period, humans began to domesticate animals, to farm grains, and to replace stone tools with ones made of metal. These innovations allowed nomadic hunter-gatherers to settle down. Villages formed along the Yangtze River in China in 6,500 B.C., in the Nile River region of Africa, and in Mesopotamia (Iraq) in 6,000 B.C. Cities emerged between 6,000 B.C. and 3,500 B.C. The development of written communication (cuneiform in Sumeria and hieroglyphs in Egypt in 3,500 B.C., writing in Egypt in 2,560 B.C., and in Minoa and China around 1,450 B.C.) enabled ideas to be preserved for extended periods and to spread extensively. In all, Neolithic developments, augmented by writing as an information tool, laid the groundwork for the advent of civilization.
The Scientific Age began in the period between Copernicus's 1543 publication of the heliocentric model, which placed the planets in orbit around the Sun, and Newton's publication of the laws of motion and gravity in the Principia in 1687. This age of discovery continued through the 18th century, accelerated by widespread use of the movable type printing press invented by Johannes Gutenberg.
The Industrial Age began in Great Britain in 1760 and continued into the mid-19th century. The invention of machines such as the mechanical loom of Edmund Cartwright, the rotating-shaft steam engine of James Watt, and the cotton gin of Eli Whitney, along with processes for mass manufacturing, came to serve the needs of a growing global population. The Industrial Age harnessed steam and water power to reduce the dependence on animal and human physical labor as the primary means of production. Thus, the core of the Industrial Revolution was the generation and distribution of energy from coal and water to produce steam and, later in the 20th century, electricity.
The Information Age also requires electricity to power the global networks of computers that process and store data. However, what dramatically accelerated the pace of the Information Age's adoption, compared to previous ages, was the speed at which knowledge could be transferred and pervade the entire human family within a few short decades. This acceleration came about with the adoption of a new form of power. Beginning in 1972, engineers devised ways to harness light to convey data through fiber optic cable. Today, light-based optical networking systems at the heart of telecom networks and the Internet span the globe and carry most of the information traffic to and from users and data storage systems.
There are different conceptualizations of the Information Age. Some focus on the evolution of information over the ages, distinguishing between the Primary Information Age and the Secondary Information Age. Information in the Primary Information Age was handled by newspapers, radio, and television. The Secondary Information Age was developed by the Internet, satellite television, and mobile phones. The Tertiary Information Age emerged as the media of the Primary Information Age became interconnected with the media of the Secondary Information Age, as presently experienced.
Others classify it in terms of the well-established Schumpeterian long waves or Kondratiev waves. Here authors distinguish three different long-term metaparadigms, each with different long waves. The first focused on the transformation of material, including stone, bronze, and iron. The second, often referred to as Industrial Revolution, was dedicated to the transformation of energy, including water, steam, electric, and combustion power. Finally, the most recent metaparadigm aims at transforming information. It started out with the proliferation of communication and stored data and has now entered the age of algorithms, which aims at creating automated processes to convert the existing information into actionable knowledge.
Information in social and economic activities
The main feature of the information revolution is the growing economic, social and technological role of information. Information-related activities did not emerge with the Information Revolution. They existed, in one form or another, in all human societies, and eventually developed into institutions, such as the Platonic Academy, Aristotle's Peripatetic school in the Lyceum, the Musaeum and the Library of Alexandria, or the schools of Babylonian astronomy. The Agricultural Revolution and the Industrial Revolution came about when new informational inputs were produced by individual innovators, or by scientific and technical institutions. During the Information Revolution all these activities are experiencing continuous growth, while other information-oriented activities are emerging.
Information is the central theme of several new sciences which emerged in the 1940s, including Shannon's (1949) information theory and Wiener's (1948) cybernetics. Wiener stated: "information is information, not matter or energy". This aphorism suggests that information should be considered along with matter and energy as the third constituent part of the Universe; information is carried by matter or by energy. By the 1990s some writers believed that changes implied by the Information Revolution would lead not only to a fiscal crisis for governments but also to the disintegration of all "large structures".
The theory of information revolution
The term information revolution may relate to, or contrast with, such widely used terms as Industrial Revolution and Agricultural Revolution. Note, however, that some authors prefer a mentalist to a materialist paradigm. The following fundamental aspects of the theory of the information revolution can be given:
The object of economic activities can be conceptualized according to the fundamental distinction between matter, energy, and information. These apply both to the object of each economic activity, as well as within each economic activity or enterprise. For instance, an industry may process matter (e.g. iron) using energy and information (production and process technologies, management, etc.).
Information is a factor of production (along with capital, labor, and land), as well as a product sold in the market, that is, a commodity. As such, it acquires use value and exchange value, and therefore a price.
All products have use value, exchange value, and informational value. The latter can be measured by the information content of the product, in terms of innovation, design, etc.
Industries develop information-generating activities, the so-called Research and Development (R&D) functions.
Enterprises, and society at large, develop the information control and processing functions, in the form of management structures; these are also called "white-collar workers", "bureaucracy", "managerial functions", etc.
Labor can be classified according to the object of labor, into information labor and non-information labor.
Information activities constitute a large, new economic sector, the information sector along with the traditional primary sector, secondary sector, and tertiary sector, according to the three-sector hypothesis. These should be restated because they are based on the ambiguous definitions made by Colin Clark (1940), who included in the tertiary sector all activities that have not been included in the primary (agriculture, forestry, etc.) and secondary (manufacturing) sectors. The quaternary sector and the quinary sector of the economy attempt to classify these new activities, but their definitions are not based on a clear conceptual scheme, although the latter is considered by some as equivalent with the information sector.
From a strategic point of view, sectors can be defined as the information sector, means of production, and means of consumption, thus extending the classical Ricardo-Marx model of the capitalist mode of production (see Influences on Karl Marx). Marx stressed on many occasions the role of the "intellectual element" in production, but failed to find a place for it in his model.
Innovations are the result of the production of new information, as new products, new methods of production, patents, etc. Diffusion of innovations manifests saturation effects (related term: market saturation), following certain cyclical patterns and creating "economic waves", also referred to as "business cycles". There are various types of waves, such as Kondratiev wave (54 years), Kuznets swing (18 years), Juglar cycle (9 years) and Kitchin (about 4 years, see also Joseph Schumpeter) distinguished by their nature, duration, and, thus, economic impact.
Diffusion of innovations causes structural-sectoral shifts in the economy, which can be smooth or can create crisis and renewal, a process which Joseph Schumpeter called vividly "creative destruction".
From a different perspective, Irving E. Fang (1997) identified six 'Information Revolutions': writing, printing, mass media, entertainment, the 'tool shed' (which we call 'home' now), and the information highway. In this work the term 'information revolution' is used in a narrow sense, to describe trends in communication media.
Measuring and modeling the information revolution
Porat (1976) measured the information sector in the US using the input-output analysis; OECD has included statistics on the information sector in the economic reports of its member countries. Veneris (1984, 1990) explored the theoretical, economic and regional aspects of the informational revolution and developed a systems dynamics simulation computer model.
These works can be seen as following the path originated with the work of Fritz Machlup, who in his 1962 book The Production and Distribution of Knowledge in the United States claimed that the "knowledge industry represented 29% of the US gross national product", which he saw as evidence that the Information Age had begun. He defined knowledge as a commodity and attempted to measure the magnitude of the production and distribution of this commodity within a modern economy. Machlup divided information use into three classes: instrumental, intellectual, and pastime knowledge. He also identified five types of knowledge: practical knowledge; intellectual knowledge, that is, general culture and the satisfying of intellectual curiosity; pastime knowledge, that is, knowledge satisfying non-intellectual curiosity or the desire for light entertainment and emotional stimulation; spiritual or religious knowledge; and unwanted knowledge, accidentally acquired and aimlessly retained.
More recent estimates have reached the following results (a short sketch after this list verifies these rates against the capacity figures cited earlier in this article):
the world's technological capacity to receive information through one-way broadcast networks grew at a sustained compound annual growth rate of 7% between 1986 and 2007;
the world's technological capacity to store information grew at a sustained compound annual growth rate of 25% between 1986 and 2007;
the world's effective capacity to exchange information through two-way telecommunication networks grew at a sustained compound annual growth rate of 30% during the same two decades;
the world's technological capacity to compute information with the help of humanly guided general-purpose computers grew at a sustained compound annual growth rate of 61% during the same period.
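The following minimal sketch checks these compound annual growth rates against the 1986 and 2007 capacity figures quoted earlier in this article; the unit conversions (1.9 ZB = 1,900 EB; 65 EB = 65,000 PB) are plain arithmetic:

```python
# Verify the compound annual growth rates (CAGR) listed above, using
# the 1986 and 2007 capacity figures quoted earlier in the article.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

YEARS = 2007 - 1986  # 21 years

series = {
    "broadcast (EB)":         (432, 1_900),    # 432 EB -> 1.9 ZB
    "storage (EB)":           (2.6, 295),      # 2.6 EB -> 295 EB
    "telecommunication (PB)": (281, 65_000),   # 281 PB -> 65 EB
    "computation (MIPS)":     (3.0e8, 6.4e12),
}

for name, (start, end) in series.items():
    print(f"{name:24s}: {cagr(start, end, YEARS):6.1%} per year")
# Expected output: roughly 7%, 25%, 30%, and 61% respectively.
```

The computed rates come out at approximately 7%, 25%, 30%, and 61% per year, consistent with the four estimates above.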
Economics
Eventually, Information and communication technology (ICT)—i.e. computers, computerized machinery, fiber optics, communication satellites, the Internet, and other ICT tools—became a significant part of the world economy, as the development of optical networking and microcomputers greatly changed many businesses and industries. Nicholas Negroponte captured the essence of these changes in his 1995 book, Being Digital, in which he discusses the similarities and differences between products made of atoms and products made of bits.
Jobs and income distribution
The Information Age has affected the workforce in several ways, such as compelling workers to compete in a global job market. One of the most evident concerns is the replacement of human labor by computers that can do the job faster and more effectively, creating a situation in which individuals who perform tasks that can easily be automated are forced to find employment where their labor is not as disposable. This especially creates issues for those in industrial cities, where solutions typically involve lowering working time, which is often highly resisted. Thus, individuals who lose their jobs may be pressed to move up into more indispensable professions (e.g. engineers, doctors, lawyers, teachers, professors, scientists, executives, journalists, consultants) whose members are able to compete successfully in the world market and receive (relatively) high wages.
Along with automation, jobs traditionally associated with the middle class (e.g. assembly line, data processing, management, and supervision) have also begun to disappear as a result of outsourcing. Unable to compete with those in developing countries, production and service workers in post-industrial (i.e. developed) societies either lose their jobs through outsourcing, accept wage cuts, or settle for low-skill, low-wage service jobs. In the past, the economic fate of individuals was tied to that of their nation. For example, workers in the United States were once well paid in comparison to those in other countries. With the advent of the Information Age and improvements in communication, this is no longer the case, as workers must now compete in a global job market, in which wages are less dependent on the success or failure of individual economies.
In effectuating a globalized workforce, the internet has also allowed for increased opportunity in developing countries, making it possible for workers in such places to provide in-person services and therefore compete directly with their counterparts in other nations. This competitive advantage translates into increased opportunities and higher wages.
Automation, productivity, and job gain
The Information Age has affected the workforce in that automation and computerization have resulted in higher productivity coupled with net job loss in manufacturing. In the United States, for example, from January 1972 to August 2010, the number of people employed in manufacturing jobs fell from 17,500,000 to 11,500,000 while manufacturing value rose 270%. Although it initially appeared that job loss in the industrial sector might be partially offset by the rapid growth of jobs in information technology, the recession of March 2001 foreshadowed a sharp drop in the number of jobs in the sector. This pattern of decrease in jobs would continue until 2003, and data has shown that, overall, technology creates more jobs than it destroys even in the short run.
Information-intensive industry
Industry has become more information-intensive and less labor- and capital-intensive. This has had important implications for the workforce, as workers have become increasingly productive even as the value of their labor decreases. For the system of capitalism itself, as the value of labor decreases, the value of capital increases.
In the classical model, investments in human and financial capital are important predictors of the performance of a new venture. However, as demonstrated by Mark Zuckerberg and Facebook, it now seems possible for a group of relatively inexperienced people with limited capital to succeed on a large scale.
Innovations
The Information Age was enabled by technology developed in the Digital Revolution, which was itself enabled by building on the developments of the Technological Revolution.
Transistors
The onset of the Information Age can be associated with the development of transistor technology. The concept of a field-effect transistor was first theorized by Julius Edgar Lilienfeld in 1925. The first practical transistor was the point-contact transistor, invented by the engineers Walter Houser Brattain and John Bardeen while working for William Shockley at Bell Labs in 1947. This was a breakthrough that laid the foundations for modern technology. Shockley's research team also invented the bipolar junction transistor in 1952. The most widely used type of transistor is the metal–oxide–semiconductor field-effect transistor (MOSFET), invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1960. The complementary MOS (CMOS) fabrication process was developed by Frank Wanlass and Chih-Tang Sah in 1963.
Computers
Before the advent of electronics, mechanical computers, like the Analytical Engine in 1837, were designed to provide routine mathematical calculation and simple decision-making capabilities. Military needs during World War II drove development of the first electronic computers, based on vacuum tubes, including the Z3, the Atanasoff–Berry Computer, Colossus computer, and ENIAC.
The invention of the transistor enabled the era of mainframe computers (1950s–1970s), typified by the IBM 360. These large, room-sized computers provided data calculation and manipulation that was much faster than humanly possible, but were expensive to buy and maintain, so were initially limited to a few scientific institutions, large corporations, and government agencies.
The germanium integrated circuit (IC) was invented by Jack Kilby at Texas Instruments in 1958. The silicon integrated circuit was then invented in 1959 by Robert Noyce at Fairchild Semiconductor, using the planar process developed by Jean Hoerni, who was in turn building on Mohamed Atalla's silicon surface passivation method developed at Bell Labs in 1957. Following the invention of the MOS transistor by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959, the MOS integrated circuit was developed by Fred Heiman and Steven Hofstein at RCA in 1962. The silicon-gate MOS IC was later developed by Federico Faggin at Fairchild Semiconductor in 1968. With the advent of the MOS transistor and the MOS IC, transistor technology rapidly improved, and the ratio of computing power to size increased dramatically, giving direct access to computers to ever smaller groups of people.
The first commercial single-chip microprocessor, the Intel 4004, was launched in 1971; it was developed by Federico Faggin, using his silicon-gate MOS IC technology, together with Marcian Hoff, Masatoshi Shima, and Stan Mazor.
Along with electronic arcade machines and home video game consoles pioneered by Nolan Bushnell in the 1970s, the development of personal computers like the Commodore PET and Apple II (both in 1977) gave individuals access to the computer. However, data sharing between individual computers was either non-existent or largely manual, at first using punched cards and magnetic tape, and later floppy disks.
Data
The first developments for storing data were initially based on photographs, starting with microphotography in 1851 and then microform in the 1920s, with the ability to store documents on film, making them much more compact. Early information theory and Hamming codes were developed about 1950, but awaited technical innovations in data transmission and storage to be put to full use.
Magnetic-core memory was developed from the research of Frederick W. Viehe in 1947 and An Wang at Harvard University in 1949. With the advent of the MOS transistor, MOS semiconductor memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In 1967, Dawon Kahng and Simon Sze at Bell Labs described how the floating gate of an MOS semiconductor device could be used for the cell of a reprogrammable ROM. Following the invention of flash memory by Fujio Masuoka at Toshiba in 1980, Toshiba commercialized NAND flash memory in 1987.
Copper wire cables transmitting digital data connected computer terminals and peripherals to mainframes, and special message-sharing systems leading to email, were first developed in the 1960s. Independent computer-to-computer networking began with ARPANET in 1969. This expanded to become the Internet (coined in 1974). Access to the Internet improved with the invention of the World Wide Web in 1991. The capacity expansion from dense wave division multiplexing, optical amplification and optical networking in the mid-1990s led to record data transfer rates. By 2018, optical networks routinely delivered 30.4 terabits/s over a fiber optic pair, the data equivalent of 1.2 million simultaneous 4K HD video streams.
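The closing figure can be sanity-checked by division; the implied per-stream bitrate below is a consequence of the text's own numbers, not an independently sourced 4K streaming rate:

```python
# Sanity-check the 2018 optical-capacity claim quoted above:
# 30.4 terabits/s per fiber pair vs 1.2 million simultaneous
# 4K HD video streams.

FIBER_PAIR_BPS = 30.4e12      # 30.4 Tbit/s, as stated
SIMULTANEOUS_STREAMS = 1.2e6  # 1.2 million streams, as stated

per_stream_bps = FIBER_PAIR_BPS / SIMULTANEOUS_STREAMS
print(f"implied bitrate per 4K stream: {per_stream_bps / 1e6:.1f} Mbit/s")
# ~25.3 Mbit/s, a plausible per-stream rate for 4K video.
```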
MOSFET scaling, the rapid miniaturization of MOSFETs at a rate predicted by Moore's law, led to computers becoming smaller and more powerful, to the point where they could be carried. During the 1980s–1990s, laptops were developed as a form of portable computer, and personal digital assistants (PDAs) could be used while standing or walking. Pagers, widely used by the 1980s, were largely replaced by mobile phones beginning in the late 1990s, providing mobile networking features to some computers. Now commonplace, this technology has been extended to digital cameras and other wearable devices. Starting in the late 1990s, tablets and then smartphones combined and extended these abilities of computing, mobility, and information sharing. Metal–oxide–semiconductor (MOS) image sensors, which first began appearing in the late 1960s, led to the transition from analog to digital imaging, and from analog to digital cameras, during the 1980s–1990s. The most common image sensors are the charge-coupled device (CCD) sensor and the CMOS (complementary MOS) active-pixel sensor.
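Moore's law is usually stated as a doubling of transistor count roughly every two years; a minimal sketch of the compounding this implies (the two-year doubling period is the conventional formulation, and the 1971 baseline is the Intel 4004's roughly 2,300 transistors):

```python
# Project a transistor count forward under Moore's law
# (doubling about every two years).
def moores_law(count_1971: float, year: int, doubling_years: float = 2.0) -> float:
    return count_1971 * 2 ** ((year - 1971) / doubling_years)

# Starting from the Intel 4004's ~2,300 transistors in 1971:
for year in (1971, 1991, 2011):
    print(year, f"{moores_law(2300, year):,.0f}")
# 1991 -> ~2.4 million, 2011 -> ~2.4 billion: the right orders of
# magnitude for microprocessors of those eras.
```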
Electronic paper, which has origins in the 1970s, allows digital information to appear as paper documents.
Personal computers
By 1976, several firms were racing to introduce the first truly successful commercial personal computer. Three machines, the Apple II, the Commodore PET 2001 and the TRS-80, were all released in 1977, becoming the most popular models by late 1978. Byte magazine later referred to Commodore, Apple, and Tandy as the "1977 Trinity". Also in 1977, Sord Computer Corporation released the Sord M200 Smart Home Computer in Japan.
Apple II
Steve Wozniak (known as "Woz"), a regular visitor to Homebrew Computer Club meetings, designed the single-board Apple I computer and first demonstrated it there. With specifications in hand and an order for 100 machines at US$500 each from the Byte Shop, Woz and his friend Steve Jobs founded Apple Computer.
About 200 of the machines sold before the company announced the Apple II as a complete computer. It had color graphics, a full QWERTY keyboard, and internal slots for expansion, all mounted in a high-quality, streamlined plastic case. The monitor and I/O devices were sold separately. The original Apple II operating system was only the built-in BASIC interpreter contained in ROM. Apple DOS was added to support the diskette drive; the last version was "Apple DOS 3.3".
Its higher price and lack of floating-point BASIC, along with a lack of retail distribution sites, caused it to lag in sales behind the other Trinity machines until 1979, when it surpassed the PET. It was pushed down into fourth place when Atari, Inc. introduced its Atari 8-bit computers.
Despite slow initial sales, the Apple II remained in production about eight years longer than the other machines, and so accumulated the highest total sales. By 1985, 2.1 million units had been sold, and more than 4 million Apple IIs had been shipped by the end of its production in 1993.
Optical networking
Optical communication plays a crucial role in communication networks, providing the transmission backbone for the telecommunications and computer networks that underlie the Internet, the foundation for the Digital Revolution and Information Age.
The two core technologies are the optical fiber and light amplification (the optical amplifier). In 1953, Bram van Heel demonstrated image transmission through bundles of optical fibers with a transparent cladding. The same year, Harold Hopkins and Narinder Singh Kapany at Imperial College succeeded in making image-transmitting bundles with over 10,000 optical fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers.
Gordon Gould invented the optical amplifier and the laser, and also established the first optical telecommunications company, Optelecom, to design communication systems. The firm was a co-founder of Ciena Corp., the venture that popularized the optical amplifier with the introduction of the first dense wavelength-division multiplexing system. This massive-scale communication technology has emerged as the common basis of all telecommunication networks and, thus, a foundation of the Information Age.
Economy, society, and culture
Manuel Castells captures the significance of the Information Age in The Information Age: Economy, Society and Culture when he writes of our global interdependence and the new relationships between economy, state and society, what he calls "a new society-in-the-making." He cautions that the fact that humans have dominated the material world does not mean that the Information Age is the end of history: "It is in fact, quite the opposite: history is just beginning, if by history we understand the moment when, after millennia of a prehistoric battle with Nature, first to survive, then to conquer it, our species has reached the level of knowledge and social organization that will allow us to live in a predominantly social world. It is the beginning of a new existence, and indeed the beginning of a new age, The Information Age, marked by the autonomy of culture vis-à-vis the material basis of our existence."
Thomas Chatterton Williams wrote about the dangers of anti-intellectualism in the Information Age in a piece for The Atlantic. Although access to information has never been greater, most information is irrelevant or insubstantial. The Information Age's emphasis on speed over expertise contributes to "superficial culture in which even the elite will openly disparage as pointless our main repositories for the very best that has been thought."
See also
Technological revolutions
First Industrial Revolution
Second Industrial Revolution
Fourth Industrial Revolution
Attention economy
Attention inequality
Big data
Cognitive-cultural economy
Cybercrime
Cyberterrorism
Cyberwarfare
Democratization of knowledge
Digital dark age
Digital detox
Digital divide
Digital transformation
Imagination age
Indigo Era
Information explosion
Information society
Internet governance
Netizen
Netocracy
Network society
Social Age
Space Age
Technological determinism
Telecommunications
Zettabyte Era
The Hacker Ethic and the Spirit of the Information Age
Information and communication technologies for environmental sustainability
References
Further reading
Stengel, Oliver, et al. (2017). Digitalzeitalter – Digitalgesellschaft. Springer.
Mendelson, Edward (June 2016). "In the Depths of the Digital Age". The New York Review of Books.
Bollacker, Kurt D. (2010). "Avoiding a Digital Dark Age". American Scientist 98(2), March–April 2010, p. 106ff.
Castells, Manuel (1996–98). The Information Age: Economy, Society and Culture, 3 vols. Oxford: Blackwell.
Gelbstein, E. (2006). Crossing the Executive Digital Divide.
External links
Articles on the impact of the Information Age on business – at Information Age magazine
Beyond the Information Age by Dave Ulmer
Information Age Anthology Vol I by Alberts and Papp (CCRP, 1997) (PDF)
Information Age Anthology Vol II by Alberts and Papp (CCRP, 2000) (PDF)
Information Age Anthology Vol III by Alberts and Papp (CCRP, 2001) (PDF)
Understanding Information Age Warfare by Alberts et al. (CCRP, 2001) (PDF)
Information Age Transformation by Alberts (CCRP, 2002) (PDF)
The Unintended Consequences of Information Age Technologies by Alberts (CCRP, 1996) (PDF)
History & Discussion of the Information Age
Science Museum – Information Age
Digital media
Hyperreality
Digital divide
Contemporary history
Historical eras
Postmodernism
Cultural trends
Western culture
Nomothetic and idiographic
Nomothetic and idiographic are terms used by the Neo-Kantian philosopher Wilhelm Windelband to describe two distinct approaches to knowledge, each corresponding to a different intellectual tendency and to a different branch of academia. To say that Windelband endorsed that last dichotomy, however, misreads his thought: for him, any branch of science and any discipline can be handled by both methods, as they offer two complementary points of view.
Nomothetic is based on what Kant described as a tendency to generalize, and is typical for the natural sciences. It describes the effort to derive laws that explain types or categories of objective phenomena, in general.
Idiographic is based on what Kant described as a tendency to specify, and is typical for the humanities. It describes the effort to understand the meaning of contingent, unique, and often cultural or subjective phenomena.
Use in the social sciences
The problem of whether to use nomothetic or idiographic approaches is felt most sharply in the social sciences, whose subjects are unique individuals (the idiographic perspective) who nevertheless have certain general properties or behave according to general rules (the nomothetic perspective).
Often, nomothetic approaches are quantitative and idiographic approaches are qualitative, although the "Personal Questionnaire" developed by Monte B. Shapiro, and its further developments (e.g. the Discan scale and PSYCHLOPS), are both quantitative and idiographic. Another very influential quantitative but idiographic tool is the repertory grid when used with elicited constructs and perhaps elicited elements. Personal cognition (D. A. Booth) is idiographic, qualitative and quantitative, using the individual's own narrative of action within a situation to scale the ongoing biosocial cognitive processes in units of discrimination from norm (with M. T. Conner 1986, R. P. J. Freeman 1993 and O. Sharpe 2005). Methods of "rigorous idiography" allow probabilistic evaluation of information transfer even with fully idiographic data.
In psychology, idiographic describes the study of the individual, who is seen as a unique agent with a unique life history, with properties setting them apart from other individuals (see idiographic image). A common method to study these unique characteristics is an (auto)biography, i.e. a narrative that recounts the unique sequence of events that made the person who they are. Nomothetic describes the study of classes or cohorts of individuals. Here the subject is seen as an exemplar of a population and their corresponding personality traits and behaviours. It is widely held that the terms idiographic and nomothetic were introduced to American psychology by Gordon Allport in 1937, but Hugo Münsterberg used them in his 1898 presidential address at the American Psychological Association meeting. This address was published in Psychological Review in 1899.
Theodore Millon stated that when spotting and diagnosing personality disorders, first clinicians start with the nomothetic perspective and look for various general scientific laws; then when they believe they have identified a disorder, they switch their view to the idiographic perspective to focus on the specific individual and his or her unique traits.
In sociology, the nomothetic model tries to find independent variables that account for the variations in a given phenomenon (e.g. What is the relationship between timing/frequency of childbirth and education?). Nomothetic explanations are probabilistic and usually incomplete. The idiographic model focuses on a complete, in-depth understanding of a single case (e.g. Why do I not have any pets?).
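In code, the nomothetic approach to the childbirth/education example amounts to fitting one general model across many cases; a minimal sketch on synthetic data (the variables and the effect size are invented purely for illustration):

```python
import numpy as np

# Synthetic data: 500 individuals' years of education and age at first birth.
rng = np.random.default_rng(0)
education_years = rng.uniform(8, 20, size=500)
# Invented "general law": each extra year of schooling delays first birth ~0.4 yr.
age_first_birth = 18 + 0.4 * education_years + rng.normal(0, 2, size=500)

# Ordinary least squares recovers the general (nomothetic) tendency; the
# residual scatter is the individual variation a probabilistic model leaves
# unexplained (the idiographic remainder).
slope, intercept = np.polyfit(education_years, age_first_birth, deg=1)
print(f"slope ~ {slope:.2f} years of delay per year of education")
```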
In anthropology, idiographic describes the study of a group, seen as an entity, with specific properties that set it apart from other groups. Nomothetic refers to the use of generalization rather than specific properties in the same context.
See also
Nomological
References
Further reading
Cone, J. D. (1986). "Idiographic, nomothetic, and related perspectives in behavioral assessment." In: R. O. Nelson & S. C. Hayes (eds.): Conceptual foundations of behavioral assessment (pp. 111–128). New York: Guilford.
Thomae, H. (1999). "The nomothetic-idiographic issue: Some roots and recent trends." International Journal of Group Tensions, 28(1), 187–215.
Concepts in epistemology
Silurian hypothesis
The Silurian hypothesis is a thought experiment that assesses modern science's ability to detect evidence of a prior advanced civilization, perhaps from several million years ago. The most likely surviving clues of such a civilization would be geochemical: anomalies in carbon, radioactive elements, or temperature. The name "Silurian" derives from the sapient species of the BBC science fiction series Doctor Who, who in the series established an advanced civilization prior to humanity; it does not refer to the Silurian geological period.
Astrophysicists Adam Frank and Gavin Schmidt proposed the Silurian hypothesis in a 2018 paper exploring the possibility of detecting an advanced pre-human civilization in the geological record. They argued that there has been sufficient fossil carbon to fuel an industrial civilization since the Carboniferous Period (~350 million years ago); however, finding direct evidence, such as technological artifacts, is unlikely, given the rarity of fossilization and how little of Earth's exposed surface predates the Quaternary. Instead, researchers might find indirect evidence, such as climate changes, anomalies in sediment, or traces of nuclear waste. The hypothesis also speculates that artifacts from past civilizations could be found on the Moon and Mars, where erosion and tectonic activity are less likely to have erased evidence. The concept of pre-human civilizations has been explored in popular culture, including novels, television shows, short stories, and video games.
Explanation
The idea was presented in a 2018 paper by Adam Frank, an astrophysicist at the University of Rochester, and Gavin Schmidt, director of the Goddard Institute for Space Studies. Frank and Schmidt imagined an advanced civilization before humans and pondered whether it would "be possible to detect an industrial civilization in the geological record". They argue that as early as the Carboniferous period (~350 million years ago) "there has been sufficient fossil carbon to fuel an industrial civilization comparable with our own". However, they also wrote: "While we strongly doubt that any previous industrial civilization existed before our own, asking the question in a formal way that articulates explicitly what evidence for such a civilization might look like raises its own useful questions related both to astrobiology and to Anthropocene studies." The term "Silurian hypothesis" was inspired by the fictional species called the Silurians from the British television series Doctor Who.
According to Frank and Schmidt, since fossilization is relatively rare and little of Earth's exposed surface is from before the Quaternary time period (~2.5 million years ago), there is a low probability of finding direct evidence of such a civilization, such as technological artifacts. After a great time span, the researchers concluded, contemporary humans would be more likely to find indirect evidence, such as rapid changes in temperature or climate (as occurred during the Paleocene–Eocene Thermal Maximum ~55 million years ago); evidence of tapping geothermal power sources; or anomalies in sediment, such as chemical composition (e.g., evidence of artificial fertilizers) or isotope ratios (e.g., there is no naturally occurring plutonium-244 outside a supernova, so evidence of this isotope could indicate a technologically advanced civilization). Objects that could constitute evidence of past civilizations include plastics and nuclear waste residues buried deep underground or on the ocean floor. The paper also mentions the natural fission reactors at Oklo, Gabon, which were active some two billion years ago; while none of the transuranic elements they produced are still present (they have since decayed to longer-lived or stable daughter nuclides), the depletion of 235U and the characteristic isotope ratios of fission products were used to confirm that fission had indeed occurred.
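The half-life arithmetic behind that decay argument is straightforward: the surviving fraction after time t is 0.5^(t / t_half). A minimal sketch (the half-lives are standard published values; the two-billion-year interval matches the Oklo reactors):

```python
# Fraction of a radioactive isotope surviving after t years.
def surviving_fraction(t_years: float, half_life_years: float) -> float:
    return 0.5 ** (t_years / half_life_years)

elapsed = 2.0e9  # ~2 billion years since the Oklo reactors were active

# Pu-239 (half-life ~24,100 yr): ~83,000 half-lives -> effectively zero.
print(surviving_fraction(elapsed, 2.41e4))  # 0.0 (underflows)

# Pu-244 (half-life ~80 million yr): 25 half-lives -> ~3e-8 of the
# original remains, still far below any detectable natural background.
print(surviving_fraction(elapsed, 8.0e7))   # ~2.98e-08
```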
Frank and Schmidt speculate that such a civilization could have gone to space and left artifacts on other celestial bodies, such as the Moon and Mars. Evidence of artifacts on these two worlds would be easier to find than on Earth, where erosion and tectonic activity would erase much of it. Frank first approached Schmidt to discuss how to detect alien civilizations via their potential impact upon climate through the study of ice cores and tree rings. They both realized that the hypothesis could be expanded and applied to Earth and humanity, given that humans have existed in their current form for only about 300,000 years and have had sophisticated technology for only the last few centuries.
In popular culture
The idea of pre-human civilizations is explored in H. P. Lovecraft's work, for example in his 1931 short novel At the Mountains of Madness and later novelette The Shadow out of Time. In Isaac Asimov's Big Game (written in 1941 and published in 1974) and Day of the Hunters (1950), the extinction of the dinosaurs was caused by an intelligent race of humanoid dinosaurs. In Asimov's No Connection (1948), a non-human scientist of a future civilization tries to prove the existence of an ancient primate civilization on Earth. Andre Norton's The Time Traders (1958) and later books in the series discussed the idea that most physical evidence of ancient advanced civilizations on Earth could be removed in mere millennia by glaciers, volcanic eruptions, and natural decay.
The eponymous Silurians of Doctor Who are a race of reptilian humanoids from Earth's past, making their first appearance in the show in 1970. Frank and Schmidt cite Inherit the Stars, a 1977 novel by J. P. Hogan, as containing a similar hypothesis, but also say they were surprised by how rarely the concept has been explored in science fiction. In Larry Niven's 1980 short story "The Green Marauder", an alien over 700 million years old (due to relativistic travel) tells a human about the last time it visited Earth, and the hopeless plea from Earth's anaerobic civilization for help against the growing environmental threat of chlorophyll. The 1997 Star Trek: Voyager episode "Distant Origin" has the crew encounter the Voth, a spacefaring race that appears to have evolved on Earth from dinosaurs. When discussing this theory with a Voth scientist, Chakotay speculates that their ancestors evolved on an isolated continent that was destroyed by cataclysm, with all traces buried under oceans or kilometers of rock.
The video game Halo: Combat Evolved depicts an ancient race of humans who roamed the planet hundreds of thousands of years before modern humans and became an interstellar species, before being annihilated and de-evolved by an alien race into what would eventually become modern humans, losing all their technology in the process. In the video game Honkai Impact 3rd, there was a Previous Era with a technologically advanced civilization that was wiped out by the Honkai. Evidence of this Previous Era has been mostly destroyed; the game takes place in the Current Era, in which the player is a member of the new technologically advanced civilization.
See also
Ancient astronauts
Dinosauroid
Out-of-place artifact
Permian–Triassic extinction event
The World Without Us
Xenoarchaeology
References
Further reading
Hypotheses
Thought experiments
2018 introductions
Astrobiology
Extinction
Paleontological concepts and hypotheses
Stratigraphy
Stratigraphy is a branch of geology concerned with the study of rock layers (strata) and layering (stratification). It is primarily used in the study of sedimentary and layered volcanic rocks.
Stratigraphy has three related subfields: lithostratigraphy (lithologic stratigraphy), biostratigraphy (biologic stratigraphy), and chronostratigraphy (stratigraphy by age).
Historical development
Catholic priest Nicholas Steno established the theoretical basis for stratigraphy when he introduced the law of superposition, the principle of original horizontality and the principle of lateral continuity in a 1669 work on the fossilization of organic remains in layers of sediment.
The first practical large-scale application of stratigraphy was by William Smith in the 1790s and early 19th century. Known as the "Father of English geology", Smith recognized the significance of strata or rock layering and the importance of fossil markers for correlating strata; he created the first geologic map of England. Other influential applications of stratigraphy in the early 19th century were by Georges Cuvier and Alexandre Brongniart, who studied the geology of the region around Paris.
Lithostratigraphy
Variation in rock units, most obviously displayed as visible layering, is due to physical contrasts in rock type (lithology). This variation can occur vertically as layering (bedding), or laterally, and reflects changes in environments of deposition (known as facies change). These variations provide a lithostratigraphy or lithologic stratigraphy of the rock unit. Key concepts in stratigraphy involve understanding how certain geometric relationships between rock layers arise and what these geometries imply about their original depositional environment. The basic concept in stratigraphy, called the law of superposition, states: in an undeformed stratigraphic sequence, the oldest strata occur at the base of the sequence.
Chemostratigraphy studies the changes in the relative proportions of trace elements and isotopes within and between lithologic units. Carbon and oxygen isotope ratios vary with time, and researchers can use those to map subtle changes that occurred in the paleoenvironment. This has led to the specialized field of isotopic stratigraphy.
Cyclostratigraphy documents the often cyclic changes in the relative proportions of minerals (particularly carbonates), grain size, thickness of sediment layers (varves) and fossil diversity with time, related to seasonal or longer term changes in palaeoclimates.
Biostratigraphy
Biostratigraphy or paleontologic stratigraphy is based on fossil evidence in the rock layers. Strata from widespread locations containing the same fossil fauna and flora are said to be correlatable in time. Biologic stratigraphy was based on William Smith's principle of faunal succession, which predated, and was one of the first and most powerful lines of evidence for, biological evolution. It provides strong evidence for the formation (speciation) and extinction of species. The geologic time scale was developed during the 19th century, based on the evidence of biologic stratigraphy and faunal succession. This timescale remained a relative scale until the development of radiometric dating, which provided an absolute time framework, leading to the development of chronostratigraphy.
One important development is the Vail curve, which attempts to define a global historical sea-level curve according to inferences from worldwide stratigraphic patterns. Stratigraphy is also commonly used to delineate the nature and extent of hydrocarbon-bearing reservoir rocks, seals, and traps of petroleum geology.
Chronostratigraphy
Chronostratigraphy is the branch of stratigraphy that places an absolute age, rather than a relative age, on rock strata. The branch is concerned with deriving geochronological data for rock units, both directly and inferentially, so that the sequence of time-ordered events that created the rock formations can be derived. The ultimate aim of chronostratigraphy is to place dates on the sequence of deposition of all rocks within a geological region, then in every region, and by extension to provide an entire geologic record of the Earth.
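The absolute ages rest on radiometric decay: with decay constant lambda = ln 2 / t_half and a measured daughter-to-parent ratio D/P, the age is t = ln(1 + D/P) / lambda. A minimal sketch (the Rb-87 half-life is the published value; the measured ratio is hypothetical):

```python
import math

# Radiometric age from a measured daughter/parent ratio:
#   t = ln(1 + D/P) / lambda,  with lambda = ln(2) / half_life
def radiometric_age(daughter_parent_ratio: float, half_life_years: float) -> float:
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_constant

# Hypothetical sample in the Rb-87 -> Sr-87 system
# (half-life ~48.8 billion years), with a measured D/P of 0.05:
age = radiometric_age(0.05, 48.8e9)
print(f"{age:.2e} years")  # ~3.4e9 years: a plausible Precambrian date
```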
A gap or missing strata in the geological record of an area is called a stratigraphic hiatus. This may be the result of a halt in the deposition of sediment. Alternatively, the gap may be due to removal by erosion, in which case it may be called a stratigraphic vacuity. It is called a hiatus because deposition was suspended for a period of time. A physical gap may represent both a period of non-deposition and a period of erosion. A geologic fault may also cause the appearance of a hiatus.
Magnetostratigraphy
Magnetostratigraphy is a chronostratigraphic technique used to date sedimentary and volcanic sequences. The method works by collecting oriented samples at measured intervals throughout a section. The samples are analyzed to determine their detrital remanent magnetism (DRM), that is, the polarity of Earth's magnetic field at the time a stratum was deposited. For sedimentary rocks this is possible because, as they fall through the water column, very fine-grained magnetic minerals (< 17 μm) behave like tiny compasses, orienting themselves with Earth's magnetic field. Upon burial, that orientation is preserved. For volcanic rocks, magnetic minerals, which form in the melt, orient themselves with the ambient magnetic field, and are fixed in place upon crystallization of the lava.
Oriented paleomagnetic core samples are collected in the field; mudstones, siltstones, and very fine-grained sandstones are the preferred lithologies because their magnetic grains are finer and more likely to orient with the ambient field during deposition. If the ancient magnetic field was oriented similarly to today's field (North Magnetic Pole near the North Rotational Pole), the strata retain a normal polarity. If the data indicate that the North Magnetic Pole was near the South Rotational Pole, the strata exhibit reversed polarity.
Results of the individual samples are analyzed by removing the natural remanent magnetization (NRM) to reveal the DRM. Following statistical analysis, the results are used to generate a local magnetostratigraphic column that can then be compared against the Global Magnetic Polarity Time Scale.
This technique is used to date sequences that generally lack fossils or interbedded igneous rocks. The continuous nature of the sampling means that it is also a powerful technique for the estimation of sediment-accumulation rates.
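Once reversal boundaries in a section are matched to dated reversals on the polarity time scale, the accumulation rate is a simple ratio of thickness to elapsed time; a minimal sketch (the depths are hypothetical, while the two ages are the commonly cited Brunhes–Matuyama and Matuyama–Gauss reversal dates):

```python
# Sediment-accumulation rate between two correlated polarity reversals.
# Hypothetical section: boundaries found at 12 m and 48 m depth, matched to
# reversals dated at ~0.78 Ma and ~2.58 Ma on the polarity time scale.
depth_young_m, depth_old_m = 12.0, 48.0
age_young_myr, age_old_myr = 0.78, 2.58

rate = (depth_old_m - depth_young_m) / (age_old_myr - age_young_myr)
print(f"{rate:.1f} m/Myr")  # 20.0 m/Myr, i.e. about 2 cm per thousand years
```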
See also
Assise
Bed (geology)
Conodont biostratigraphy
Erygmascope (old instrument for studying strata)
Harris matrix
International Commission on Stratigraphy
Key bed
Sedimentary basin analysis
Sequence stratigraphy
Sadler effect
Tectonostratigraphy
References
Further reading
Christopherson, R. W., 2008. Geosystems: An Introduction to Physical Geography, 7th ed., New York: Pearson Prentice-Hall. .
Montenari, M., 2016. Stratigraphy and Timescales, 1st ed., Amsterdam: Academic Press (Elsevier). .
External links
ICS Subcommission for Stratigraphic Information
University of South Carolina Sequence Stratigraphy Web
International Commission on Stratigraphy
University of Georgia (USA) Stratigraphy Lab
Stratigraphy.net A stratigraphic data provider.
Agenames.org A global index of stratigraphic terms
Petrology
Methods in archaeology
Political system
In political science, a political system is the form of political organization that can be observed, recognised or otherwise declared by a society or state.
It defines the process for making official government decisions. It usually comprises the governmental legal and economic systems, social and cultural systems, and other state- and government-specific systems. However, this is a very simplified view of a much more complex system of categories involving the questions of who should have authority and what influence the government should have over its people and economy.
Along with a basic sociological and socio-anthropological classification, political systems can be classified on a social-cultural axis relative to the liberal values prevalent in the Western world, where the spectrum is represented as a continuum between political systems recognized as democracies, totalitarian regimes and, sitting between these two, authoritarian regimes, with a variety of hybrid regimes; monarchies may also be included as a standalone entity or as a hybrid system of the main three.
Definition
According to David Easton, "A political system can be designated as the interactions through which values are authoritatively allocated for a society". Political system refers broadly to the process by which laws are made and public resources allocated in a society, and to the relationships among those involved in making these decisions.
Basic classification
Social anthropologists generally recognize several kinds of political systems, often differentiating between ones that they consider uncentralized and ones they consider centralized.
Uncentralized systems
Band society
Small family group, no larger than an extended family or clan; it has been defined as consisting of no more than 30 to 50 individuals.
A band can cease to exist if only a small group walks out.
Tribe
Generally larger, consisting of many families. Tribes have more social institutions, such as a chief or elders.
More permanent than bands. Many tribes are sub-divided into bands.
Centralized governments
Chiefdom
More complex than a tribe or a band society, and less complex than a state or a civilization.
Characterized by pervasive inequality and centralization of authority.
A single lineage/family of the elite class becomes the ruling elite of the chiefdom.
Complex chiefdoms have two or even three tiers of political hierarchy.
"An autonomous political unit comprising a number of villages or communities under the permanent control of a paramount chief".
Sovereign state
A sovereign state is a state with a permanent population, a defined territory, a government and the capacity to enter into relations with other sovereign states.
Supranational political systems
Supranational political systems are created by independent nations to reach a common goal or gain strength from forming an alliance.
Empires
Empires are widespread states consisting of people of different ethnicities under a single rule. Empires, such as the Roman or the British, often made considerable progress in political structures, creating and building city infrastructure, and maintaining civility within their diverse communities. Because of their intricate organization, empires were often able to concentrate and exercise power over vast, diverse territories.
Leagues
Leagues are international organizations composed of states coming together for a single common purpose. In this way, leagues differ from empires, as they seek to fulfill only a single goal. Leagues are often formed on the brink of military or economic crisis. Meetings and hearings are conducted in a neutral location, with representatives of all involved nations present.
Western socio-cultural paradigmatic-centric analysis
The sociological interest in political systems lies in figuring out who holds power within the relationship between the government and its people, and how the government's power is used. According to Yale professor Juan José Linz, there are three main types of political systems today: democracies, totalitarian regimes and, sitting between these two, authoritarian regimes (with hybrid regimes). Another modern classification system includes monarchies as a standalone entity or as a hybrid system of the main three. Scholars generally refer to a dictatorship as a form of either authoritarianism or totalitarianism.
Democracy
Authoritarianism
Totalitarian
Monarchy
Hybrid
Marxist/Dialectical materialistic analysis
The 19th-century German-born philosopher Karl Marx held that the political system of every state-society is the dictatorship of one social class, pursuing its interests against those of another; which class oppresses which is determined, in essence, by the society's level of development as it progresses through time. In capitalist societies, this takes the form of the dictatorship of the bourgeoisie, or capitalist class, in which the economic and political system is designed to work in their collective class interests, over those of the proletariat, or working class.
Marx devised this theory by adapting his predecessor Georg Wilhelm Friedrich Hegel's notion of dialectics into the framework of materialism.
See also
Political structure
Polity
Systems theory in political science
Tractatus Politicus
Voting system
Notes
References
Further reading
Almond, Gabriel A., et al. Comparative Politics Today: A World View (Seventh Edition). 2000. .
Ferris, Kerry, and Jill Stein. The Real World An Introduction to Sociology. 3rd ed. New York City: W W Norton & Co, 2012. Print.
"political system". Encyclopædia Britannica. Encyclopædia Britannica Online. Encyclopædia Britannica Inc., 2012. Web. 02 Dec. 2012.
External links
Topic guide on political systems at Governance and Social Development Resource Centre
Political terminology
Didactic method
A didactic method (from the Greek didáskein, "to teach") is a teaching method that follows a consistent scientific approach or educational style to present information to students. The didactic method of instruction is often contrasted with dialectics and the Socratic method; the term can also be used to refer to a specific didactic method, as for instance constructivist didactics.
Overview
Didactics is a theory of teaching, and in a wider sense, a theory and practical application of teaching and learning. In demarcation from "mathetics" (the science of learning), didactics refers only to the science of teaching.
This theory might be contrasted with open learning, also known as experiential learning, in which people can learn by themselves, in an unstructured manner (or in an unusually structured manner) as in experiential education, on topics of interest. It can also be contrasted with autodidactic learning, in which one instructs oneself, often from existing books or curricula.
The theory of didactic learning methods focuses on the baseline knowledge students possess and seeks to improve upon and convey this information. It also refers to the foundation or starting point in a lesson plan, where the overall goal is knowledge. A teacher or educator functions in this role as an authoritative figure, but also as both a guide and a resource for students.
Didactics or the didactic method have different connotations in continental Europe and English-speaking countries. Didacticism was indeed the cultural origin of the didactic method, but within its narrow context it usually refers, pejoratively, to the use of language to a doctrinal end. These opposing interpretations are theorised to result from a differential cultural development in the 19th century, when Great Britain and its former colonies went through a renewal and increased cultural distancing from continental Europe. The later appearance of Romanticism and Aestheticism in the Anglo-Saxon world in particular fostered these negative and limiting views of the didactic method. In continental Europe, by contrast, such moralising aspects of didactics were removed earlier by cultural representatives of the Age of Enlightenment, such as Voltaire and Rousseau, and later, specifically in relation to teaching, by Johann Heinrich Pestalozzi.
The consequences of these cultural differences created two main didactic traditions: the Anglo-Saxon tradition of curriculum studies on one side, and the continental and North European tradition of didactics on the other. Even today, the science of didactics carries much less weight in much of the English-speaking world.
With the advent of globalisation at the beginning of the 20th century, however, the arguments for such relative philosophical positions on teaching methods started to diminish somewhat. It is therefore possible to categorise didactics and pedagogy as a general analytic theory on three levels:
a theoretical or research level (denoting a field of study)
a practical level (summaries of curricular activities)
a discursive level (implying a frame of reference for professional dialogs)
Nature of didactics and difference with pedagogy
The discipline of didactics is interested in both theoretical knowledge and practical activities related to teaching, learning and their conditions. It is concerned with the content of teaching (the "what"), the method of teaching (the "how") and the historical, cultural and social justifications of curricular choices (the "why"). It focuses on the individual learner, their cognitive characteristics and functioning when they learn a given content and become a knowing subject. The perspective of educational reality in didactics is drawn extensively from cognitive psychology and the theory of teaching, and sometimes from social psychology. Didactics is descriptive and diachronic ("what is" and "what was"), as opposed to pedagogy, the other discipline related to educational theorizing, which is normative or prescriptive and synchronic ("what should or ought to be") in nature. Didactics can be said to provide the descriptive foundation for pedagogy, which is more concerned with educational goal-setting and with the learner's becoming a social subject and their future role in society.
In continental Europe, as opposed to English-speaking research cultures, pedagogy and didactics are distinct areas of study. Didactics is a knowledge-based discipline concerned with the descriptive and rational study of all teaching-related activities before, during and after the teaching of content in the classroom, which includes the "planning, control and regulation of the teaching context" and its objective is to analyze how teaching leads to learning. On the other hand, pedagogy is a practice-oriented discipline concerned with the normative study of the applied aspects of teaching in real teaching contexts, i.e., inside the classroom. Pedagogy draws from didactic research and can be seen as an applied component of didactics.
Didactic transposition
In France, didactics refers to the science that takes the teaching of disciplined knowledge as its object of study. In other words, didactics is concerned with the teaching of specific disciplines to students. One of the central concepts studied in the didactics of a specific discipline in France is "didactic transposition" (la transposition didactique). The French philosopher and sociologist Michel Verret introduced this concept in 1975; it was borrowed and elaborated further in the 1980s by Yves Chevallard, a French didactician of mathematics. Although Chevallard initially presented this concept regarding the didactics of mathematics, it has since been generalized to other disciplines as well.
Didactic transposition is composed of multiple steps. The first step, called the "external transposition" (transposition externe), is about how the "scholarly knowledge" (savoir savant) produced by the scholars, scientists or specialists of a certain discipline in a research context, i.e., at universities and other academic institutions is transformed into "knowledge to teach" (savoir à enseigner) by precisely selecting, rearranging and defining the knowledge which will be taught (the official curriculum for each discipline) and how it will be taught, so that it becomes an object of learning accessible to the learner. This external didactic transposition is a socio-political construction made possible by different actors working within various educational institutions: education specialists, political authorities, teachers and their associations define the issues of teaching and choose what should be taught under which form. Chevallard called this socio-political context of institutional organization the “noosphere”, which defines the limits, redefines and reorganizes the knowledge in socially, historically or culturally determined contexts.
The second step, called the "internal transposition" (transposition interne) is about how the knowledge to teach is transformed into "taught knowledge" (savoir enseigné), which is the knowledge actually taught through the day-to-day concrete practices of a teacher in a teaching context, e.g. in a classroom, and which depends on their students and the constraints imposed on them (time, exams, conformity to prevailing school rules, etc.).
In the third and final step, the taught knowledge is transformed into "acquired knowledge" (savoir acquis), which is the knowledge as it is actually acquired by students in a learning context. The acquired knowledge can be used as a feedback to the didactic system. Didactic research has to account for all the aforementioned steps of didactic transposition.
Didactic triangle
The teacher is given the knowledge or content to be taught to students in what is called a teaching situation. The teaching or didactic situation is represented by a triangle with three vertices: the knowledge or content to be taught, the teacher, and the student. This is called the "didactic triangle". In this triangle, the teacher-content side is concerned with didactic elaboration, the student-content side is about didactic appropriation, and the teacher-student side is about didactic interaction.
Didactic teaching
The didactic method provides students with the required theoretical knowledge. It is effective for teaching students who are unable to organize their work and depend on teachers for instructions. It is also used to teach the basic skills of reading and writing. The teacher, or the literate, is the source of knowledge, and the knowledge is transmitted to the students through the didactic method.
Didactic teaching materials
The Montessori school had preplanned didactic teaching materials designed to develop practical, sensory, and formal skills: lacing and buttoning frames, weights, and packets to be identified by their sound or smell. Because they direct learning in the prepared environment, Montessori educators are called directresses rather than teachers.
In Brazil, a government program called the PNLD (National Textbook Program) has existed for more than 80 years. This program seeks to supply basic-education schools with didactic and pedagogical works, expanding access to books and democratizing access to sources of information and culture. Textbooks are, in many cases, the only sources of information that poor children and young people can access in a poor country like Brazil. These books are also valuable support for teachers, offering modern learning methodologies and updated concepts and content across the most diverse disciplines.
Functions of didactic method
cognitive function: to understand and learn basic concepts
formative-educative function: to develop skills, behavior, abilities, etc.
instrumental function: to achieve educational objectives
normative function: to help achieve productive learning and attain the required results
Method of teaching
In the didactic method of teaching, the teacher gives instructions to the students, and the students are mostly passive listeners. It is a teacher-centered, content-oriented method of teaching. Neither the content nor the knowledge of the teacher is questioned.
The process of teaching involves the teacher giving instructions and commands, delivering content, and providing necessary information. Pupil activity involves listening to and memorizing the content. In the modern education system, the lecture method, one of the most commonly used methods, is a form of didactic teaching.
Limitations
Though the didactic method has been given importance in several schools, it does not satisfy the needs and interests of all students. It can be tedious for students to listen to lengthy lectures, and there is minimal interaction between students and teachers. Learning, which also involves motivating students to develop an interest in the subject, may not be well served by this method.
It can be a monologue process, and the students' own experience may play no significant role in their learning.
See also
Guy Brousseau
References
External links
Didactics
Pedagogy
World Values Survey
The World Values Survey (WVS) is a global research project that explores people's values and beliefs, how they change over time, and what social and political impact they have. Since 1981 a worldwide network of social scientists have conducted representative national surveys as part of WVS in almost 100 countries.
The WVS measures, monitors and analyzes: support for democracy, tolerance of foreigners and ethnic minorities, support for gender equality, the role of religion and changing levels of religiosity, the impact of globalization, attitudes toward the environment, work, family, politics, national identity, culture, diversity, insecurity, and subjective well-being.
Romano Prodi, former Prime Minister of Italy and the tenth President of the European Commission, said about WVS work:
The growing globalization of the world makes it increasingly important to understand ... diversity. People with varying beliefs and values can live together and work together productively, but for this to happen it is crucial to understand and appreciate their distinctive worldviews.
Insights
The WVS has over the years demonstrated that people's beliefs play a key role in economic development, the emergence and flourishing of democratic institutions, the rise of gender equality, and the extent to which societies have effective government.
Inglehart–Welzel cultural map
Analysis of WVS data by political scientists Ronald Inglehart and Christian Welzel asserts that there are two major dimensions of cross-cultural variation in the world:
Traditional values versus secular-rational values and
Survival values versus self-expression values.
The global cultural map shows how scores of societies are located on these two dimensions. Moving upward on this map reflects the shift from Traditional values to Secular-rational and moving rightward reflects the shift from Survival values to Self-expression values.
Traditional values emphasize the importance of religion, parent-child ties, deference to authority and traditional family values. People who embrace these values also reject divorce, abortion, euthanasia and suicide. These societies have high levels of national pride and a nationalistic outlook.
Secular-rational values have the opposite preferences to the traditional values. These societies place less emphasis on religion, traditional family values and authority. Divorce, abortion, euthanasia and suicide are seen as relatively acceptable.
Survival values place emphasis on economic and physical security. It is linked with a relatively ethnocentric outlook and low levels of trust and tolerance.
Self-expression values give high priority to environmental protection, growing tolerance of foreigners, gays and lesbians and gender equality, and rising demands for participation in decision-making in economic and political life.
Christian Welzel introduced the concepts of emancipative values and secular values, and provided measurements for them using World Values Survey data. Emancipative values are an updated version of self-expression values; secular values are an updated version of traditional versus secular-rational values. The survival versus self-expression values and the traditional versus secular-rational values were factors extracted with an orthogonal technique of factor analysis, which forbids the two scales from correlating with each other. The emancipative and secular values are instead measured in such a way as to represent the data as faithfully as possible, even if this results in a correlation between the scales; the secular and emancipative values indices are positively correlated with each other.
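The methodological contrast here is a rotation choice in factor analysis: orthogonal rotations (such as varimax) keep the extracted dimensions uncorrelated, while oblique rotations (such as promax, available in packages like factor_analyzer) allow them to correlate. A minimal sketch of the orthogonal case using scikit-learn (the respondent-by-item matrix is synthetic stand-in data, not real WVS items):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# X: respondents x survey items (standardized answers); synthetic stand-in.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))

# Two factors with an orthogonal (varimax) rotation, as with the original
# WVS dimensions: the extracted scores come out essentially uncorrelated.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
scores = fa.transform(X)
print(np.corrcoef(scores.T)[0, 1])  # ~0; an oblique rotation could differ
```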
Culture variations
A somewhat simplified analysis is that, following an increase in standards of living and a transition from developing country through industrialization to post-industrial knowledge society, a country tends to move diagonally from the lower-left corner (poor) to the upper-right corner (rich), indicating a shift on both dimensions.
However, attitudes among the population are also highly correlated with the philosophical, political and religious ideas that have dominated in the country. Secular-rational values and materialism were formulated by philosophers and the left-wing side of politics in the French Revolution, and can consequently be observed especially in countries with a long history of social democratic or socialist policy, and in countries where a large portion of the population has studied philosophy and science at universities. Survival values are characteristic of eastern-world countries and self-expression values of western-world countries. In a liberal post-industrial economy, an increasing share of the population has grown up taking survival and freedom of thought for granted, with the result that self-expression is highly valued.
Examples
Societies that have high scores in Traditional and Survival values: Zimbabwe, Morocco, Jordan, Bangladesh.
Societies with high scores in Traditional and Self-expression values: Most of Latin America, Ireland.
Societies with high scores in Secular-rational and Survival values: Russia, Bulgaria, Ukraine, Estonia.
Societies with high scores in Secular-rational and Self-expression values: Japan, Nordic countries, Benelux, Germany, Switzerland, Czechia, Slovenia, France.
Gender values
Findings from the WVS indicate that support for gender equality is not just a consequence of democratization. It is part of a broader cultural change that is transforming industrialized societies with mass demands for increasingly democratic institutions. Although a majority of the world's population still believes that men make better political leaders than women, this view is fading in advanced industrialized societies, and also among young people in less prosperous countries.
World Values Survey data is used by the United Nations Development Programme to calculate the gender social norms index. The index measures attitudes toward gender equality worldwide and was introduced in the Human Development Report starting in 2019. The index has four components, measuring gender attitudes in politics, education and the economy, as well as social norms related to domestic violence.
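A minimal sketch of the index's headline-style aggregation, the share of respondents holding at least one biased view across the component indicators (the indicator count and bias rates below are placeholders; real inputs come from the relevant WVS items):

```python
import numpy as np

# Rows: respondents; columns: binary "holds a biased view" flags for
# indicators spanning the four dimensions (politics, education, economy,
# and domestic-violence norms). Placeholder data for illustration only.
rng = np.random.default_rng(1)
bias_flags = rng.random((2000, 7)) < 0.3  # 7 indicators, 30% bias rate each

# Headline-style measure: share of respondents with at least one bias.
share = bias_flags.any(axis=1).mean()
print(f"{share:.1%} of respondents hold at least one bias")
```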
Religion
The data from the World Values Survey cover several important aspects of people's religious orientation. One of them tracks how involved people are in religious services and how much importance they attach to their religious beliefs. In data from 2000, 98% of the public in Indonesia said that religion was very important in their lives, while in China only 3% considered religion very important. Another aspect concerns people's attitudes towards the relation between religion and politics, and whether they approve of religious spokesmen who try to influence government decisions and people's voting preferences.
In a factor analysis of the latest wave (6) of World Values Survey data, Arno Tausch (Corvinus University of Budapest) found that family values in the tradition of Joseph Schumpeter and religious values in the research tradition of Robert Barro can be an important positive asset for society. Negative phenomena, like the distrust in the state of law; the shadow economy; the distance from altruistic values; a growing fatigue of democracy; and the lack of entrepreneurial spirit are all correlated with the loss of religiosity. Tausch based his results on a factor analysis with promax rotation of 78 variables from 45 countries with complete data, and also calculated performance indices for the 45 countries with complete data and the nine main global religious denominations. On this account, Judaism and also Protestantism emerge as most closely combining religion and the traditions of the Enlightenment.
Happiness and life satisfaction
The WVS has shown that from 1981 to 2007 happiness rose in 45 of the 52 countries for which long-term data are available. Since 1981, economic development, democratization, and rising social tolerance have increased the extent to which people perceive that they have free choice, which in turn has led to higher levels of happiness around the world, which supports the human development theory.
Findings
Some of the survey's basic findings are:
Much of the variation in human values between societies boils down to two broad dimensions: a first dimension of “traditional vs. secular-rational values” and a second dimension of “survival vs. self-expression values.”
On the first dimension, traditional values emphasize religiosity, national pride, respect for authority, obedience and marriage. Secular-rational values emphasize the opposite on each of these accounts.
On the second dimension, survival values involve a priority of security over liberty, non-acceptance of homosexuality, abstinence from political action, distrust in outsiders and a weak sense of happiness. Self-expression values imply the opposite on all these accounts.
Following the 'revised theory of modernization,' values change in predictable ways with certain aspects of modernity. People's priorities shift from traditional to secular-rational values as their sense of existential security increases (or backwards from secular-rational values to traditional values as their sense of existential security decreases).
The largest increase in existential security occurs with the transition from agrarian to industrial societies. Consequently, the largest shift from traditional towards secular-rational values happens in this phase.
People's priorities shift from survival to self-expression values as their sense of individual agency increases (or backwards from self-expression values to survival as the sense of individual agency decreases).
The largest increase in individual agency occurs with the transition from industrial to knowledge societies. Consequently, the largest shift from survival to self-expression values happens in this phase.
The value differences between societies around the world show a pronounced culture zone pattern. The strongest emphasis on traditional values and survival values is found in the Islamic societies of the Middle East. By contrast, the strongest emphasis on secular-rational values and self-expression values is found in the Protestant societies of Northern Europe.
These culture zone differences reflect different historical pathways of how entire groups of societies entered modernity. These pathways account for people's different senses of existential security and individual agency, which in turn account for their different emphases on secular-rational values and self-expression values.
Values also differ within societies along such cleavage lines as gender, generation, ethnicity, religious denomination, education, income and so forth.
Generally speaking, groups whose living conditions provide people with a stronger sense of existential security and individual agency nurture a stronger emphasis on secular-rational values and self-expression values.
However, the within-societal differences in people's values are dwarfed by a factor five to ten by the between-societal differences. On a global scale, basic living conditions differ still much more between than within societies, and so do the experiences of existential security and individual agency that shape people's values.
A specific subset of self-expression values—emancipative values—combines an emphasis on freedom of choice and equality of opportunities. Emancipative values, thus, involve priorities for lifestyle liberty, gender equality, personal autonomy and the voice of the people.
Emancipative values constitute the key cultural component of a broader process of human empowerment. Once set in motion, this process empowers people to exercise freedoms in their course of actions.
If set in motion, human empowerment advances on three levels. On the socio-economic level, human empowerment advances as growing action resources increase people's capabilities to exercise freedoms. On the socio-cultural level, human empowerment advances as rising emancipative values increase people's aspirations to exercise freedoms. On the legal-institutional level, human empowerment advances as widened democratic rights increase people's entitlements to exercise freedoms.
Human empowerment is an entity of empowering capabilities, aspirations, and entitlements. As an entity, human empowerment tends to advance in virtuous spirals or to recede in vicious spirals on each of its three levels.
As the cultural component of human empowerment, emancipative values are highly consequential in manifold ways. For one, emancipative values establish a civic form of modern individualism that favours out-group trust and cosmopolitan orientations towards others.
Emancipative values encourage nonviolent protest, even against the risk of repression. Thus, emancipative values provide social capital that activates societies, makes publics more self-expressive, and vitalizes civil society. Emancipative values advance entire societies' civic agency.
If emancipative values grow strong in countries that are democratic, they help to prevent movements away from democracy.
If emancipative values grow strong in countries that are undemocratic, they help to trigger movements towards democracy.
Emancipative values exert these effects because they encourage mass actions that put power holders under pressures to sustain, substantiate or establish democracy, depending on what the current challenge for democracy is.
Objective factors that have been found to favour democracy (including economic prosperity, income equality, ethnic homogeneity, world market integration, global media exposure, closeness to democratic neighbours, a Protestant heritage, social capital and so forth) exert an influence on democracy mostly insofar as these factors favour emancipative values.
Emancipative values do not strengthen people's desire for democracy, for the desire for democracy is universal at this point in history. But emancipative values do change the nature of that desire, and they do so in two ways.
For one, emancipative values make people's understanding of democracy more liberal: people with stronger emancipative values emphasize the empowering features of democracy rather than bread-and-butter and law-and-order issues.
Next, emancipative values make people assess the level of their country's democracy more critically: people with stronger emancipative values tend to underrate rather than overrate their country's democratic performance.
Together, then, emancipative values generate a critical-liberal desire for democracy. The critical-liberal desire for democracy is a formidable force of democratic reforms. And, it is the best available predictor of a country's effective level of democracy and of other indicators of good governance. Neither democratic traditions nor cognitive mobilization account for the strong positive impact of emancipative values on the critical-liberal desire for democracy.
Emancipative values constitute the single most important factor in advancing the empowerment of women. Economic, religious, and institutional factors that have been found to advance women's empowerment do so for the most part because they nurture emancipative values.
Emancipative values change people's life strategy from an emphasis on securing a decent subsistence level to enhancing human agency. As the shift from subsistence to agency affects entire societies, the overall level of subjective well-being rises.
The emancipative consequences of the human empowerment process are not a culture-specific peculiarity of the 'West.' The same empowerment processes that advance emancipative values and a critical-liberal desire for democracy in the 'West,' do the same in the 'East' and in other culture zones.
The social dominance of Islam and individual identification as Muslim both weaken emancipative values. But among young Muslims with high education, and especially among young Muslim women with high education, the Muslim/Non-Muslim gap over emancipative values closes.
A 2013 analysis noted that the share of respondents in various countries who said they would prefer not to have neighbors of a different race ranged from below 5% in many countries to 51.4% in Jordan, with wide variation across Europe.
According to the 2017–2020 World Values Survey, 95% of Chinese respondents reported significant confidence in their government, compared with a world average of 45%.
History
The World Values Surveys were designed to test the hypothesis that economic and technological changes are transforming the basic values and motivations of the publics of industrialized societies. The surveys build on the European Values Study (EVS), first carried out in 1981. The EVS was conducted under the aegis of Jan Kerkhofs and Ruud de Moor and continues to be based in the Netherlands at Tilburg University. The 1981 study was largely limited to developed societies, but interest in this project spread so widely that surveys were carried out in more than twenty countries, located on all six inhabited continents. Ronald Inglehart of the University of Michigan played a leading role in extending these surveys to countries around the world. Today the network includes hundreds of social scientists from more than 100 countries.
The surveys are repeated in waves at intervals of five to ten years; the years covered by each wave are given below.
Findings from the first wave of surveys pointed to the conclusion that intergenerational changes were taking place in basic values relating to politics, economic life, religion, gender roles, family norms and sexual norms. The values of younger generations differed consistently from those prevailing among older generations, particularly in societies that had experienced rapid economic growth. To examine whether changes were actually taking place in these values and to analyze the underlying causes, a second wave of WVS surveys was carried out in 1990–91. Because these changes seem to be linked with economic and technological development, it was important to include societies across the entire range of development, from low income societies to rich societies.
A third wave of surveys was carried out in 1995–97, this time in 55 societies and with increased attention being given to analysing the cultural conditions for democracy. A fourth wave of surveys was carried out in 1999–2001 in 65 societies. A key goal was to obtain better coverage of African and Islamic societies, which had been under-represented in previous surveys. A fifth wave was carried out in 2005–07 and a sixth wave was carried out during 2011–12.
Due to the European origin of the project, the early waves of the WVS were eurocentric in emphasis, with little representation in Africa and South-East Asia. To expand, the WVS adopted a decentralised structure, in which social scientists from countries throughout the world participated in the design, execution and analysis of the data, and in the publication of findings. In return for providing the data from a survey in their own society, each group obtained immediate access to the data from all participating societies, enabling them to analyse social change in a broader perspective.
The WVS network has produced over 300 publications in 20 languages and secondary users have produced several thousand additional publications. The database of the WVS has been published on the internet with free access.
The official archive of the World Values Survey is located at ASEP/JDS in Madrid, Spain.
Methodology
The World Values Survey uses the sample survey as its mode of data collection, a systematic and standardized approach to collect information through interviewing representative national samples of individuals. The basic stages of a sample survey are Questionnaire design; Sampling; Data collection and Analysis.
Questionnaire design
For each wave, suggestions for questions are solicited from social scientists all over the world and a final master questionnaire is developed in English. Since the start in 1981, each successive wave has covered a broader range of societies than the previous one. Analysis of the data from each wave has indicated that certain questions tapped interesting and important concepts while others were of little value. This has led to the more useful questions or themes being replicated in future waves, while the less useful ones have been dropped to make room for new questions.
The questionnaire is translated into the various national languages and in many cases independently translated back to English to check the accuracy of the translation. In most countries, the translated questionnaire is pre-tested to help identify questions for which the translation is problematic. In some cases certain problematic questions are omitted from the national questionnaire.
Sampling
Samples are drawn from the entire population aged 18 and older, with a minimum sample size of 1,000. In most countries no upper age limit is imposed, and some form of stratified random sampling is used to obtain representative national samples. In the first stage, a random selection of sampling points is made based on the given society's statistical regions, districts, census units, election sections, electoral rolls, polling places or central population registers. In most countries the population size and/or degree of urbanization of these Primary Sampling Units are taken into account. In some countries, individuals are drawn directly from national registers.
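Taking the population size of Primary Sampling Units into account is commonly done through probability-proportional-to-size (PPS) selection, in which larger units are more likely to be chosen as sampling points. The sketch below illustrates the general idea with invented district figures; it is not the WVS's actual sampling procedure.

```python
import random

# Hypothetical Primary Sampling Units (e.g. districts) with population counts.
psus = {"District A": 120_000, "District B": 40_000, "District C": 840_000}

def pps_sample(units: dict, k: int, seed: int = 42) -> list:
    """Draw k sampling points, each unit chosen with probability
    proportional to its population (with replacement, for simplicity)."""
    rng = random.Random(seed)
    names = list(units)
    weights = [units[name] for name in names]
    return rng.choices(names, weights=weights, k=k)

# Larger districts appear more often among the 10 sampling points, so the
# eventual respondent sample mirrors the population distribution.
print(pps_sample(psus, k=10))
```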
Data collection and field work
Following the sampling, each country is left with a representative national sample of its public. These persons are then interviewed during a limited time frame decided by the Executive Committee of the World Values Survey, using the uniformly structured questionnaires. The survey is carried out by professional organizations using face-to-face interviews, or phone interviews for remote areas. Each country has a Principal Investigator (a social scientist working in an academic institution) who is responsible for conducting the survey in accordance with the fixed rules and procedures. During the field work, the agency has to report in writing against a specific check-list. Internal consistency checks are made between the sampling design and the outcome, and rigorous data cleaning procedures are followed at the WVS data archive. No country is included in a wave before full documentation has been delivered: this means a data set with the completed methodological questionnaire and a report of country-specific information (for example, important political events during the fieldwork, or problems particular to the country). Once all the surveys are completed, the Principal Investigator has access to all surveys and data.
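As an illustration of what a consistency check between sampling design and fieldwork outcome might look like, the following sketch compares planned against realised interviews per stratum. The strata, figures and 5% threshold are all hypothetical; the archive's actual procedures are more extensive.

```python
import pandas as pd

# Hypothetical fieldwork report: realised interviews per stratum,
# compared against the sampling design's planned allocation.
planned = pd.Series({"urban": 600, "rural": 400}, name="planned")
realised = pd.Series({"urban": 612, "rural": 371}, name="realised")

report = pd.concat([planned, realised], axis=1)
report["shortfall_pct"] = (1 - report["realised"] / report["planned"]) * 100

# Flag strata that fell more than 5% short of the design; a stand-in
# for one of the archive's consistency checks.
flagged = report[report["shortfall_pct"] > 5]
print(report)
print("strata needing follow-up:", list(flagged.index))
```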
Analysis
The World Values Survey group works with leading social scientists, recruited from each society studied. They represent a wide range of cultures and perspectives which makes it possible to draw on the insights of well-informed insiders in interpreting the findings. It also helps disseminate social science techniques to new countries.
Each research team that has contributed to the survey analyses the findings according to its hypotheses. Because all researchers obtain data from all of the participating societies, they are also able to compare the values and beliefs of the people of their own society with those from scores of other societies and to test alternative hypotheses. In addition, the participants are invited to international meetings at which they can compare findings and interpretations with other members of the WVS network. The findings are then disseminated through international conferences and joint publications.
Usage
The World Values Survey data has been downloaded by over 100,000 researchers, journalists, policy-makers and others. The data is available on the WVS website which contains tools developed for online analysis.
Governance and funding
The World Values Survey is organised as a network of social scientists coordinated by a central body, the World Values Survey Association. It is established as a non-profit organization seated in Stockholm, Sweden, with a constitution and mission statement. The project is guided by an Executive Committee representing all regions of the world. The committee is also supported by a Scientific Advisory Committee, a Secretariat and an Archive. The WVS Executive Committee provides leadership and strategic planning for the association. It is responsible for the recruitment of new members, the organization of meetings and workshops, data processing and distribution, capacity building and the promotion of publications and dissemination of results. The WVS Executive Committee also raises funds for central functions and assists member groups in their fundraising.
Each national team is responsible for its own expenses and most surveys are financed by local sources. However, central funding has been obtained in cases where local funding is not possible. Presently, the activities of the WVS Secretariat and WVS Executive Committee are funded by the Bank of Sweden Tercentenary Foundation. Other funding has been obtained from the U.S. National Science Foundation, the Swedish International Development Cooperation Agency (SIDA), the Volkswagen Foundation, the German Science Foundation (DFG) and the Dutch Ministry of Education, Culture and Science.
Media coverage
The World Values Survey data has been used in a large number of scholarly publications and the findings have been reported in media such as BBC News, Bloomberg Businessweek, China Daily, Chinadialogue.net, CNN, Der Spiegel, Der Standard, Rzeczpospolita, Gazeta Wyborcza, Le Monde, Neue Zürcher Zeitung, Newsweek, Süddeutsche Zeitung, Time, The Economist, The Guardian, The New Yorker, The New York Times, The Sydney Morning Herald, The Washington Post, and the World Development Report.
World Values Paper Series: World Values Research
World Values Research (WVR), registered as , is the official online paper series of the World Values Survey Association. The series is edited by the Executive Committee of the Association. WVR publishes research papers of high scientific standards based on evidence from World Values Surveys data. Papers in WVR follow good academic practice and abide by ethical norms in line with the mission of the World Values Survey Association. Publication of submitted papers is contingent on an internal review by the Executive Committee of the World Values Survey Association. WVR papers present original research based on data from the World Values Surveys, providing new evidence and novel insights of theoretical relevance to the theme of human values. An archive of published WVR papers is available on the project's website.
See also
Afrobarometer
Eurobarometer
Latin American Public Opinion Project
Broad measures of economic progress
Happiness economics
Post-materialism
World Social Capital Monitor
Notes
Bibliography
The political algebra of global value change: General models and implications for the Muslim world, https://www.researchgate.net/publication/290349218_The_political_algebra_of_global_value_change_General_models_and_implications_for_the_Muslim_world
External links
1981 establishments
Belief
Epistemology
Global ethics
Political science
Social statistics data
Statistical data sets
Surveys (human research)
Value (ethics)
Art of Europe

The art of Europe, also known as Western art, encompasses the history of visual art in Europe. European prehistoric art started as mobile Upper Paleolithic rock and cave painting and petroglyph art and was characteristic of the period between the Paleolithic and the Iron Age. Written histories of European art often begin with the Aegean civilizations, dating from the 3rd millennium BC. However, a consistent pattern of artistic development within Europe becomes clear only with Ancient Greek art, which was adopted and transformed by Rome and carried, with the Roman Empire, across much of Europe, North Africa and Western Asia.
The influence of the art of the Classical period waxed and waned throughout the next two thousand years, seeming to slip into a distant memory in parts of the Medieval period, to re-emerge in the Renaissance, suffer a period of what some early art historians viewed as "decay" during the Baroque period, to reappear in a refined form in Neo-Classicism and to be reborn in Post-Modernism.
Before the 1800s, the Christian church was a major influence on European art, and commissions from the Church provided the major source of work for artists. In the same period there was also a renewed interest in classical mythology, great wars, heroes and heroines, and themes not connected to religion. Most art of the last 200 years has been produced without reference to religion and often with no particular ideology at all, but art has often been influenced by political issues, whether reflecting the concerns of patrons or the artist.
European art is arranged into a number of stylistic periods which, historically, overlapped each other as different styles flourished in different areas. Broadly, the periods are Classical, Byzantine, Medieval, Gothic, Renaissance, Baroque, Rococo, Neoclassical, Modern, Postmodern and New European Painting.
Prehistoric art
European prehistoric art is an important part of the European cultural heritage. Prehistoric art history is usually divided into four main periods: the Paleolithic, the Neolithic, the Bronze Age and the Iron Age. Most of the surviving artifacts of this period are small sculptures and cave paintings.
Much surviving prehistoric art is small portable sculpture, with a small group of female Venus figurines such as the Venus of Willendorf (24,000–22,000 BC) found across central Europe; the 30 cm tall Löwenmensch figurine of about 30,000 BCE has hardly any pieces that can be related to it. The Swimming Reindeer of about 11,000 BCE is one of the finest of a number of Magdalenian carvings in bone or antler of animals in the art of the Upper Paleolithic, though they are outnumbered by engraved pieces, which are sometimes classified as sculpture. With the beginning of the Mesolithic in Europe, figurative sculpture greatly declined, and it remained a less common element in art than relief decoration of practical objects until the Roman period, despite some works such as the Gundestrup cauldron from the European Iron Age and the Bronze Age Trundholm sun chariot.
The oldest European cave art dates back 40,800 years and can be found in the El Castillo Cave in Spain. Other cave painting sites include Lascaux, the Cave of Altamira, Grotte de Cussac, Pech Merle, the Cave of Niaux, Chauvet Cave, Font-de-Gaume, Creswell Crags in Nottinghamshire, England (cave etchings and bas-reliefs discovered in 2003), the Coliboaia cave in Romania (considered the oldest cave painting in central Europe) and Magura in Belogradchik, Bulgaria. Rock painting was also performed on cliff faces, but fewer of those have survived because of erosion. One well-known example is the rock paintings of Astuvansalmi in the Saimaa area of Finland. When Marcelino Sanz de Sautuola first encountered the Magdalenian paintings of the Altamira cave, Cantabria, Spain in 1879, the academics of the time considered them hoaxes. Recent reappraisals and numerous additional discoveries have since demonstrated their authenticity, while at the same time stimulating interest in the artistry of Upper Palaeolithic peoples. Cave paintings, undertaken with only the most rudimentary tools, can also furnish valuable insight into the culture and beliefs of that era.
The Rock art of the Iberian Mediterranean Basin represents a very different style, with the human figure the main focus, often seen in large groups, with battles, dancing and hunting all represented, as well as other activities and details such as clothing. The figures are generally rather sketchily depicted in thin paint, with the relationships between the groups of humans and animals more carefully depicted than individual figures. Other less numerous groups of rock art, many engraved rather than painted, show similar characteristics. The Iberian examples are believed to date from a long period perhaps covering the Upper Paleolithic, Mesolithic and early Neolithic.
Prehistoric Celtic art comes from much of Iron Age Europe and survives mainly in the form of high-status metalwork skillfully decorated with complex, elegant and mostly abstract designs, often using curving and spiral forms. There are human heads and some fully represented animals, but full-length human figures at any size are so rare that their absence may represent a religious taboo. As the Romans conquered Celtic territories the style almost entirely vanished, but it continued in limited use in the British Isles, and with the coming of Christianity revived there in the Insular style of the Early Middle Ages.
Ancient
Minoan
The Minoan civilization of Crete is regarded as the oldest civilization in Europe. Minoan art is marked by imaginative images and exceptional workmanship. Sinclair Hood described an "essential quality of the finest Minoan art, the ability to create an atmosphere of movement and life although following a set of highly formal conventions". It forms part of the wider grouping of Aegean art, and in later periods came for a time to have a dominant influence over Cycladic art. Wood and textiles have decomposed, so most surviving examples of Minoan art are pottery, intricately carved Minoan seals, palace frescos (which include landscapes), small sculptures in various materials, jewellery, and metalwork.
The relationship of Minoan art to that of other contemporary cultures and later Ancient Greek art has been much discussed. It clearly dominated Mycenaean art and Cycladic art of the same periods, even after Crete was occupied by the Mycenaeans, but only some aspects of the tradition survived the Greek Dark Ages after the collapse of Mycenaean Greece.
Minoan art has a variety of subject-matter, much of it appearing across different media, although only some styles of pottery include figurative scenes. Bull-leaping appears in painting and several types of sculpture, and is thought to have had a religious significance; bull's heads are also a popular subject in terracotta and other sculptural materials. There are no figures that appear to be portraits of individuals, or are clearly royal, and the identities of religious figures are often tentative, with scholars uncertain whether they are deities, clergy or devotees. Equally, whether painted rooms were "shrines" or secular is far from clear; one room in Akrotiri has been argued to be a bedroom, with remains of a bed, or a shrine.
Animals, including an unusual variety of marine fauna, are often depicted; the "Marine Style" is a type of painted palace pottery from MM III and LM IA that paints sea creatures including octopus spreading all over the vessel, and probably originated from similar frescoed scenes; sometimes these appear in other media. Scenes of hunting and warfare, and horses and riders, are mostly found in later periods, in works perhaps made by Cretans for a Mycenaean market, or Mycenaean overlords of Crete.
While Minoan figures, whether human or animal, have a great sense of life and movement, they are often not very accurate, and the species is sometimes impossible to identify; by comparison with Ancient Egyptian art they are often more vivid, but less naturalistic. In comparison with the art of other ancient cultures there is a high proportion of female figures, though the idea that Minoans had only goddesses and no gods is now discounted. Most human figures are in profile or in a version of the Egyptian convention with the head and legs in profile, and the torso seen frontally; but the Minoan figures exaggerate features such as slim male waists and large female breasts.
Classical Greek and Hellenistic
Ancient Greece had great painters, great sculptors, and great architects. The Parthenon is an example of their architecture that has lasted to modern days. Greek marble sculpture is often described as the highest form of Classical art. Painting on the pottery of Ancient Greece and ceramics gives a particularly informative glimpse into the way society in Ancient Greece functioned. Black-figure vase painting and red-figure vase painting give many surviving examples of what Greek painting was. Some famous Greek painters on wooden panels who are mentioned in texts are Apelles, Zeuxis and Parrhasius; however, no examples of Ancient Greek panel painting survive, only written descriptions by their contemporaries or by later Romans. Zeuxis lived in the 5th century BC and was said to be the first to use sfumato. According to Pliny the Elder, the realism of his paintings was such that birds tried to eat the painted grapes. Apelles is described as the greatest painter of Antiquity for perfect technique in drawing, brilliant color and modeling.
Roman
Roman art was influenced by Greece and can in part be taken as a descendant of ancient Greek painting and sculpture, but it was also strongly influenced by the more local Etruscan art of Italy. Roman sculpture is primarily portraiture derived from the upper classes of society, as well as depictions of the gods. However, Roman painting does have important unique characteristics. Among surviving Roman paintings are wall paintings, many from villas in Campania, in Southern Italy, especially at Pompeii and Herculaneum. Such painting can be grouped into four main "styles" or periods and may contain the first examples of trompe-l'œil, pseudo-perspective, and pure landscape.
Almost all of the surviving painted portraits from the ancient world are coffin-portraits of bust form found in the Late Antique cemetery of Al-Fayum. They give an idea of the quality that the finest ancient work must have had. A very small number of miniatures from Late Antique illustrated books also survive, and a rather larger number of copies of them from the Early Medieval period. Early Christian art grew out of Roman popular, and later Imperial, art and adapted its iconography from these sources.
Medieval
Most surviving art from the Medieval period was religious in focus, often funded by the Church, powerful ecclesiastical individuals such as bishops, communal groups such as abbeys, or wealthy secular patrons. Many had specific liturgical functions—processional crosses and altarpieces, for example.
One of the central questions about Medieval art concerns its lack of realism. A great deal of knowledge of perspective in art and understanding of the human figure was lost with the fall of Rome. But realism was not the primary concern of Medieval artists. They were simply trying to send a religious message, a task which demands clear iconic images instead of precisely rendered ones.
Time Period: 6th century to 15th century
Early Medieval art
Migration period art is a general term for the art of the "barbarian" peoples who moved into formerly Roman territories. Celtic art in the 7th and 8th centuries saw a fusion with Germanic traditions through contact with the Anglo-Saxons creating what is called the Hiberno-Saxon style or Insular art, which was to be highly influential on the rest of the Middle Ages. Merovingian art describes the art of the Franks before about 800, when Carolingian art combined insular influences with a self-conscious classical revival, developing into Ottonian art. Anglo-Saxon art is the art of England after the Insular period. Illuminated manuscripts contain nearly all the surviving painting of the period, but architecture, metalwork and small carved work in wood or ivory were also important media.
Byzantine
Byzantine art overlaps with or merges with what we call Early Christian art until the iconoclasm period of 730–843, when the vast majority of artwork with figures was destroyed; so little remains that today any discovery sheds new understanding. After 843 and until 1453 there is a clear Byzantine art tradition. It is often the finest art of the Middle Ages in terms of quality of material and workmanship, with production centered on Constantinople. Byzantine art's crowning achievements were the monumental frescos and mosaics inside domed churches, most of which have not survived due to natural disasters and the conversion of churches to mosques.
Romanesque
Romanesque art refers to the period from about 1000 to the rise of Gothic art in the 12th century. This was a period of increasing prosperity, and the first to see a coherent style used across Europe, from Scandinavia to Sicily. Romanesque art is vigorous and direct, was originally brightly coloured, and is often very sophisticated. Stained glass and enamel on metalwork became important media, and larger sculptures in the round developed, although high relief was the principal technique. Its architecture is dominated by thick walls, and round-headed windows and arches, with much carved decoration.
Gothic
Gothic art is a variable term depending on the craft, place and time. The term originated with Gothic architecture in 1140, but Gothic painting did not appear until around 1200 (this date has many qualifications), when it diverged from Romanesque style. Gothic sculpture was born in France in 1144 with the renovation of the Abbey Church of Saint-Denis and spread throughout Europe; by the 13th century it had become the international style, replacing Romanesque. International Gothic describes Gothic art from about 1360 to 1430, after which Gothic art merges into Renaissance art at different times in different places. During this period forms such as painting, in fresco and on panel, become newly important, and the end of the period includes new media such as prints.
Renaissance
The Renaissance is characterized by a focus on the arts of Ancient Greece and Rome, which led to many changes in both the technical aspects of painting and sculpture, as well as to their subject matter. It began in Italy, a country rich in Roman heritage as well as material prosperity to fund artists. During the Renaissance, painters began to enhance the realism of their work by using new techniques in perspective, thus representing three dimensions more authentically. Artists also began to use new techniques in the manipulation of light and darkness, such as the tone contrast evident in many of Titian's portraits and the development of sfumato and chiaroscuro by Leonardo da Vinci. Sculptors, too, began to rediscover many ancient techniques such as contrapposto. Following with the humanist spirit of the age, art became more secular in subject matter, depicting ancient mythology in addition to Christian themes. This genre of art is often referred to as Renaissance Classicism. In the North, the most important Renaissance innovation was the widespread use of oil paints, which allowed for greater colour and intensity.
From Gothic to the Renaissance
During the late 13th century and early 14th century, much of the painting in Italy was Byzantine in character, notably that of Duccio of Siena and Cimabue of Florence, while Pietro Cavallini in Rome was more Gothic in style. During the 13th century, Italian sculptors began to draw inspiration not only from medieval prototypes, but also from ancient works.
In 1290, Giotto began painting in a manner that was less traditional and more based upon observation of nature. His famous cycle at the Scrovegni Chapel, Padua, is seen as the beginnings of a Renaissance style.
Other painters of the 14th century carried the Gothic style to great elaboration and detail. Notable among these painters are Simone Martini and Gentile da Fabriano.
In the Netherlands, the technique of painting in oils rather than tempera lent itself to a form of elaboration that was not dependent upon the application of gold leaf and embossing, but upon the minute depiction of the natural world. The art of painting textures with great realism evolved at this time. Netherlandish painters such as Jan van Eyck and Hugo van der Goes were to have great influence on Late Gothic and Early Renaissance painting.
Early Renaissance
The ideas of the Renaissance first emerged in the city-state of Florence, Italy. The sculptor Donatello returned to classical techniques such as contrapposto and classical subjects like the unsupported nude—his second sculpture of David was the first free-standing bronze nude created in Europe since the Roman Empire. The sculptor and architect Brunelleschi studied the architectural ideas of ancient Roman buildings for inspiration. Masaccio perfected elements like composition, individual expression, and human form to paint frescoes, especially those in the Brancacci Chapel, of surprising elegance, drama, and emotion.
A remarkable number of these major artists worked on different portions of the Florence Cathedral. Brunelleschi's dome for the cathedral was one of the first truly revolutionary architectural innovations since the Gothic flying buttress. Donatello created many of its sculptures. Giotto and Lorenzo Ghiberti also contributed to the cathedral.
High Renaissance
High Renaissance artists include such figures as Leonardo da Vinci, Michelangelo Buonarroti, and Raffaello Sanzio.
The 15th-century artistic developments in Italy (for example, the interest in perspectival systems, in depicting anatomy, and in classical cultures) matured during the 16th century, accounting for the designations "Early Renaissance" for the 15th century and "High Renaissance" for the 16th century. Although no singular style characterizes the High Renaissance, the art of those most closely associated with this period—Leonardo da Vinci, Raphael, Michelangelo, and Titian—exhibits an astounding mastery, both technical and aesthetic. High Renaissance artists created works of such authority that generations of later artists relied on these artworks for instruction.
These exemplary artistic creations further elevated the prestige of artists. Artists could claim divine inspiration, thereby raising visual art to a status formerly given only to poetry. Thus, painters, sculptors, and architects came into their own, successfully claiming for their work a high position among the fine arts. In a sense, 16th-century masters created a new profession with its own rights of expression and its own venerable character.
Northern art up to the Renaissance
Early Netherlandish painting developed (but did not strictly invent) the technique of oil painting, which allowed greater control in painting minute detail with realism; Jan van Eyck (c. 1390–1441) was a leading figure in the movement from illuminated manuscripts to panel paintings.
Hieronymus Bosch (1450?–1516), a Dutch painter, is another important figure in the Northern Renaissance. In his paintings, he used religious themes, but combined them with grotesque fantasies, colorful imagery, and peasant folk legends. His paintings often reflect the confusion and anguish associated with the end of the Middle Ages.
Albrecht Dürer introduced Italian Renaissance style to Germany at the end of the 15th century, and dominated German Renaissance art.
Time Period:
Italian Renaissance: Late 14th century to Early 16th century
Northern Renaissance: 16th century
Mannerism, Baroque, and Rococo
In European art, Renaissance Classicism spawned two different movements: Mannerism and the Baroque. Mannerism, a reaction against the idealist perfection of Classicism, employed distortion of light and spatial frameworks in order to emphasize the emotional content of a painting and the emotions of the painter. The work of El Greco is a particularly clear example of Mannerism in painting during the late 16th and early 17th centuries. Northern Mannerism took longer to develop, and was largely a movement of the last half of the 16th century. Baroque art took the representationalism of the Renaissance to new heights, emphasizing detail, movement, lighting, and drama in the search for beauty. Perhaps the best known Baroque painters are Caravaggio, Rembrandt, Peter Paul Rubens, and Diego Velázquez.
A rather different art developed out of northern realist traditions in 17th-century Dutch Golden Age painting, which had very little religious art and little history painting, instead playing a crucial part in developing secular genres such as still life, genre paintings of everyday scenes, and landscape painting. While the Baroque nature of Rembrandt's art is clear, the label is less useful for Vermeer and many other Dutch artists. Flemish Baroque painting shared a part in this trend, while also continuing to produce the traditional categories.
Baroque art is often seen as part of the Counter-Reformation: the artistic element of the revival of spiritual life in the Roman Catholic Church. Additionally, the emphasis that Baroque art placed on grandeur is seen as Absolutist in nature. Religious and political themes were widely explored within the Baroque artistic context, and both paintings and sculptures were characterised by a strong element of drama, emotion and theatricality. Famous Baroque artists include Caravaggio and Rubens. Artemisia Gentileschi was another noteworthy artist, who was inspired by Caravaggio's style. Baroque art was particularly ornate and elaborate in nature, often using rich, warm colours with dark undertones. Pomp and grandeur were important elements of the Baroque artistic movement in general, as can be seen when Louis XIV said, "I am grandeur incarnate"; many Baroque artists served kings who tried to realize this goal. Baroque art in many ways was similar to Renaissance art; as a matter of fact, the term was initially used in a derogative manner to describe post-Renaissance art and architecture which was over-elaborate. Baroque art can be seen as a more elaborate and dramatic re-adaptation of late Renaissance art.
By the 18th century, however, Baroque art was falling out of fashion as many deemed it too melodramatic and also gloomy, and it developed into the Rococo, which emerged in France. Rococo art was even more elaborate than the Baroque, but it was less serious and more playful. Whilst the Baroque used rich, strong colours, Rococo used pale, creamier shades. The artistic movement no longer placed an emphasis on politics and religion, focusing instead on lighter themes such as romance, celebration, and appreciation of nature. Rococo art also contrasted the Baroque as it often refused symmetry in favor of asymmetrical designs. Furthermore, it sought inspiration from the artistic forms and ornamentation of Far Eastern Asia, resulting in the rise in favour of porcelain figurines and chinoiserie in general. The 18th-century style flourished for a short while; nevertheless, the Rococo style soon fell out of favor, being seen by many as a gaudy and superficial movement emphasizing aesthetics over meaning. Neoclassicism in many ways developed as a counter movement of the Rococo, the impetus being a sense of disgust directed towards the latter's florid qualities.
Mannerism (16th century)
Baroque (early 17th century to mid-early 18th century)
Rococo (early to mid-18th century)
Neoclassicism, Romanticism, Academism, and Realism
Throughout the 18th century, a counter movement opposing the Rococo sprang up in different parts of Europe, commonly known as Neoclassicism. It despised the perceived superficiality and frivolity of Rococo art, and desired a return to the simplicity, order and 'purism' of classical antiquity, especially ancient Greece and Rome. The movement was in part also influenced by the Renaissance, which itself was strongly influenced by classical art. Neoclassicism was the artistic component of the intellectual movement known as the Enlightenment; the Enlightenment was idealistic, and put its emphasis on objectivity, reason and empirical truth. Neoclassicism had become widespread in Europe throughout the 18th century, especially in the United Kingdom, which saw great works of Neoclassical architecture spring up during this period; Neoclassicism's fascination with classical antiquity can be seen in the popularity of the Grand Tour in these years, when wealthy aristocrats travelled to the ancient ruins of Italy and Greece. Nevertheless, a defining moment for Neoclassicism came during the French Revolution in the late 18th century; in France, Rococo art was replaced with the preferred Neoclassical art, which was seen as more serious than the former movement. In many ways, Neoclassicism can be seen as a political movement as well as an artistic and cultural one. Neoclassical art places an emphasis on order, symmetry and classical simplicity; common themes in Neoclassical art include courage and war, as were commonly explored in ancient Greek and Roman art. Ingres, Canova, and Jacques-Louis David are among the best-known neoclassicists.
Just as Mannerism rejected Classicism, so did Romanticism reject the ideas of the Enlightenment and the aesthetic of the Neoclassicists. Romanticism rejected the highly objective and ordered nature of Neoclassicism, and opted for a more individual and emotional approach to the arts. Romanticism placed an emphasis on nature, especially when aiming to portray the power and beauty of the natural world, and emotions, and sought a highly personal approach to art. Romantic art was about individual feelings, not common themes, such as in Neoclassicism; in such a way, Romantic art often used colours in order to express feelings and emotion. Similarly to Neoclassicism, Romantic art took much of its inspiration from ancient Greek and Roman art and mythology, yet, unlike Neoclassical, this inspiration was primarily used as a way to create symbolism and imagery. Romantic art also takes much of its aesthetic qualities from medievalism and Gothicism, as well as mythology and folklore. Among the greatest Romantic artists were Eugène Delacroix, Francisco Goya, J. M. W. Turner, John Constable, Caspar David Friedrich, Thomas Cole, and William Blake.
Most artists attempted to take a centrist approach which adopted different features of Neoclassicist and Romanticist styles, in order to synthesize them. The different attempts took place within the French Academy, and collectively are called Academic art. Adolphe William Bouguereau is considered a chief example of this stream of art.
In the early 19th century, however, the face of Europe was radically altered by industrialization. Poverty, squalor, and desperation were to be the fate of the new working class created by the "revolution". In response to these changes going on in society, the movement of Realism emerged. Realism sought to accurately portray the conditions and hardships of the poor in the hopes of changing society. In contrast with Romanticism, which was essentially optimistic about mankind, Realism offered a stark vision of poverty and despair. Similarly, while Romanticism glorified nature, Realism portrayed life in the depths of an urban wasteland. Like Romanticism, Realism was a literary as well as an artistic movement. The great Realist painters include Jean-Baptiste-Siméon Chardin, Gustave Courbet, Jean-François Millet, Camille Corot, Honoré Daumier, Édouard Manet, Edgar Degas (both also considered Impressionists), and Thomas Eakins, among others.
The response of architecture to industrialisation, in stark contrast to the other arts, was to veer towards historicism. Although the railway stations built during this period are often considered the truest reflections of its spirit – they are sometimes called "the cathedrals of the age" – the main movements in architecture during the Industrial Age were revivals of styles from the distant past, such as the Gothic Revival. Related movements were the Pre-Raphaelite Brotherhood, who attempted to return art to its state of "purity" prior to Raphael, and the Arts and Crafts Movement, which reacted against the impersonality of mass-produced goods and advocated a return to medieval craftsmanship.
Time Period:
Neoclassicism: mid-early 18th century to early 19th century
Romanticism: late 18th century to mid-19th century
Realism: 19th century
Modern art
Out of the naturalist ethic of Realism grew a major artistic movement, Impressionism. The Impressionists pioneered the use of light in painting as they attempted to capture light as seen from the human eye. Edgar Degas, Édouard Manet, Claude Monet, Camille Pissarro, and Pierre-Auguste Renoir were all involved in the Impressionist movement. As a direct outgrowth of Impressionism came the development of Post-Impressionism. Paul Cézanne, Vincent van Gogh, Paul Gauguin and Georges Seurat are the best-known Post-Impressionists.
Following the Impressionists and the Post-Impressionists came Fauvism, often considered the first "modern" genre of art. Just as the Impressionists revolutionized light, so did the Fauvists rethink color, painting their canvases in bright, wild hues. After the Fauvists, modern art began to develop in all its forms, ranging from Expressionism, concerned with evoking emotion through objective works of art, to Cubism, the art of transposing a three-dimensional reality onto a flat canvas, to Abstract art. These new art forms pushed the limits of traditional notions of "art" and corresponded to the similar rapid changes that were taking place in human society, technology, and thought.
Surrealism is often classified as a form of Modern Art. However, the Surrealists themselves have objected to the study of surrealism as an era in art history, claiming that it oversimplifies the complexity of the movement (which they say is not an artistic movement), misrepresents the relationship of surrealism to aesthetics, and falsely characterizes ongoing surrealism as a finished, historically encapsulated era. Other forms of Modern art (some of which border on Contemporary art) include:
Abstract expressionism
Art Deco
Art Nouveau
Bauhaus
Color Field painting
Conceptual Art
Constructivism
Cubism
Dada
Der Blaue Reiter
De Stijl
Die Brücke
Body art
Expressionism
Fauvism
Fluxus
Futurism
Happening
Surrealism
Lettrisme
Lyrical Abstraction
Land art
Minimalism
Naive art
Op art
Performance art
Photorealism
Pop art
Suprematism
Video art
Vorticism
Time Period:
Impressionism: late 19th Century
Others: First half of the 20th century
Contemporary art and Postmodern art
Modern art foreshadowed several characteristics of what would later be defined as postmodern art; as a matter of fact, several modern art movements can often be classified as both modern and postmodern, such as pop art. Postmodern art, for instance, places a strong emphasis on irony, parody and humour in general; modern art started to develop a more ironic approach to art which would later advance in a postmodern context. Postmodern art sees the blurring between the high and fine arts with low-end and commercial art; modern art started to experiment with this blurring.
Recent developments in art have been characterised by a significant expansion of what can now be deemed to be art, in terms of materials, media, activity and concept. Conceptual art in particular has had a wide influence. This started literally as the substitution of a concept for a made object, one of the intentions of which was to refute the commodification of art. However, it now usually refers to an artwork where there is an object, but the main claim for the work is made for the thought process that has informed it. The aspect of commercialism has returned to the work.
There has also been an increase in art referring to previous movements and artists, and gaining validity from that reference.
Postmodernism in art, which has grown since the 1960s, differs from Modernism inasmuch as Modern art movements were primarily focused on their own activities and values, while Postmodernism uses the whole range of previous movements as a reference point. This has by definition generated a relativistic outlook, accompanied by irony and a certain disbelief in values, as each can be seen to be replaced by another. Another result of this has been the growth of commercialism and celebrity. Postmodern art has questioned common rules and guidelines of what is regarded as 'fine art', merging low art with the fine arts until the two are no longer fully distinguishable. Before the advent of postmodernism, the fine arts were characterised by a form of aesthetic quality, elegance, craftsmanship, finesse and intellectual stimulation which was intended to appeal to the upper or educated classes; this distinguished high art from low art, which, in turn, was seen as tacky, kitsch, easily made and lacking in much or any intellectual stimulation, art which was intended to appeal to the masses. Postmodern art blurred these distinctions, bringing a strong element of kitsch, commercialism and campness into contemporary fine art; what is nowadays seen as fine art may have been seen as low art before postmodernism revolutionised the concept of what high or fine art truly is. In addition, the postmodern nature of contemporary art leaves a lot of space for individualism within the art scene; for instance, postmodern art often takes inspiration from past artistic movements, such as Gothic or Baroque art, and both juxtaposes and recycles styles from these past periods in a different context.
Some surrealists, in particular Joan Miró, who called for the "murder of painting" (in numerous interviews dating from the 1930s onwards, Miró expressed contempt for conventional painting methods and his desire to "kill", "murder", or "rape" them in favor of more contemporary means of expression), have denounced or attempted to "supersede" painting, and there have also been other anti-painting trends among artistic movements, such as those of Dada and conceptual art. The trend away from painting in the late 20th century has been countered by various movements, for example the continuation of Minimal Art, Lyrical Abstraction, Pop Art, Op Art, New Realism, Photorealism, Neo Geo, Neo-expressionism, New European Painting, Stuckism, Excessivism and various other important and influential painterly directions.
See also
History of art
History of painting
Lives of the Most Excellent Painters, Sculptors, and Architects (16th century book)
Modernism
Painting in the Americas before European colonization
Western European paintings in Ukrainian museums
List of time periods
References
Bibliography
Chapin, Anne P., "Power, Privilege and Landscape in Minoan Art", in Charis: Essays in Honor of Sara A. Immerwahr, Hesperia (Princeton, N.J.) 33, 2004, ASCSA, ISBN 9780876615331
Gates, Charles, "Pictorial Imagery in Minoan Wall Painting", in Charis: Essays in Honor of Sara A. Immerwahr, Hesperia (Princeton, N.J.) 33, 2004, ASCSA, ISBN 9780876615331
Hood, Sinclair, The Arts in Prehistoric Greece, 1978, Penguin (Penguin/Yale History of Art)
Sandars, Nancy K., Prehistoric Art in Europe, Penguin (Pelican, now Yale, History of Art), 1968 (nb 1st edn.; early datings now superseded)
External links
Web Gallery of Art
Postmodernism
European artists community
Panopticon Virtual Art Gallery
Europe
Western culture
Interregnum

An interregnum (plural interregna or interregnums) is a period of discontinuity or "gap" in a government, organization, or social order. Archetypally, it was the period of time between the reign of one monarch and the next (coming from Latin inter-, "between" and rēgnum, "reign" [from rex, rēgis, "king"]), and the concepts of interregnum and regency therefore overlap. Historically, longer and heavier interregna have typically been accompanied by widespread unrest, civil and succession wars between warlords, and power vacuums filled by foreign invasions or the emergence of a new power. A failed state is usually in interregnum.
The term also refers to the periods between the election of a new parliament and the establishment of a new government from that parliament in parliamentary democracies, usually ones that employ some form of proportional representation that allows small parties to elect significant numbers, requiring time for negotiations to form a government. In the UK, Canada and other electoral systems with single-member districts, this period is usually very brief, except in the rare occurrence of a hung parliament as occurred both in the UK in 2017 and in Australia in 2010. In parliamentary interregnums, the previous government usually stands as a caretaker government until the new government is established. Additionally, the term has been applied to the United States presidential transition, the period of time between the election of a new U.S. president and his or her inauguration, during which the outgoing president remains in power, but as a lame duck.
Similarly, in some Christian denominations, "interregnum" (interim) describes the time between vacancy and appointment of priest or pastors to various roles.
Historical periods of interregnum
Particular historical periods known as interregna include:
The Chu–Han Contention of 206–202 BC in China, after the death of Emperor Qin Er Shi, when there was a contest to the throne. It ended with the accession of Liu Bang, ushering in the Han dynasty and ending the Qin dynasty.
The Crisis of the Third Century (235–284) in the Roman Empire, when it was split into multiple realms and chaos (invasion, civil war, Cyprian Plague, and economic depression) was a constant threat until Aurelian and Diocletian restored the empire. The crisis forced Diocletian to partition the Empire and marked the beginning of the fall of the Western Roman Empire.
From 423 to 425 in the Roman Empire, between the death of Honorius and the accession of Valentinian III. A usurper called Joannes seized power.
The ten-year period after the death of King Cleph from 574/575 to 584/585 in the Kingdom of the Lombards, known as the Rule of the Dukes. Marked by increasing domination of the Italian Peninsula by the Franks and the Byzantine Empire. Ended with the election of Authari as king.
The Sasanian Interregnum (628–632), a conflict that broke out after the death of Khosrau II between the Sasanian nobles of different factions. Ended with the victory of Yazdegerd III and contributed to the fall of the Sasanian Empire.
The 1022–1072 period in Ireland, between the death of Máel Sechnaill mac Domnaill and the accession of Toirdhealbhach Ua Briain, is sometimes regarded as an interregnum, as the High Kingship of Ireland was disputed throughout these decades. The interregnum may even have extended to 1121, when Toirdhealbhach Ua Conchobhair acceded to the title.
From 1089 to 1102 in the Kingdom of Croatia, between death of Croatian king Demetrius Zvonimir and when Coloman, king of Hungary is crowned king of Croatia in 1102.
From 13 April 1204 to 25 July 1261 in the Byzantine Empire. Following the Sack of Constantinople during the Fourth Crusade, the Byzantine Empire was dissolved, to be replaced by several Crusader states and several Byzantine successor states. It was re-established by the Nicaean general Alexios Strategopoulos, who placed Michael VIII Palaiologos back on the throne of a united Byzantine Empire.
From 21 May 1254 to 29 September 1273, The Great Interregnum in the Holy Roman Empire after the deposition of the last Hohenstaufen emperor Frederick II by Pope Innocent IV and the death of his son King Conrad IV of Germany until the election of the Habsburg scion Rudolph as Rex Romanorum.
First Interregnum in Scotland, which lasted from either 19 March 1286 or 26 September 1290 until 17 November 1292. The exact dating of the interregnum depends on whether the uncrowned Margaret, Maid of Norway was officially queen before her death in 1290. It lasted until John Balliol was crowned King of Scots.
Second Interregnum in Scotland, from 10 July 1296, when John Balliol was deposed, until 25 March 1306, when Robert the Bruce was crowned.
From 14 January 1301 until 1308 in the Kingdom of Hungary between the extinction of the Árpád dynasty and when Charles I of Hungary secured the throne for himself against several pretenders.
From 5 June 1316 to 15 November 1316 in France and Navarre, between the death of Louis X and the birth of his posthumous son John I.
From 2 August 1332 until 21 June 1340 in Denmark when the country was mortgaged to a few German counts.
The Portuguese Interregnum, from 22 October 1383 until 6 April 1385, a result of the succession crisis caused by the death of Ferdinand I without a legitimate heir. Ended when John I's forces won the Battle of Aljubarrota, beginning the Aviz dynasty.
The Ottoman Interregnum, from 20 July 1402 until 1413, a result of the capture of Sultan Bayezid I at the hands of Central Asian warlord Timur (Tamerlane) in the Battle of Ankara. A period of civil war ensued as none of Bayezid's sons could establish primacy. The crisis was resolved when one of his sons, Mehmed, defeated and killed his brothers and reestablished the Empire.
From 20 January 1410 to 1412 in the Crown of Aragon. The death of King Martin without heir led to a succession crisis and a period of civil war, resolved ultimately by the Compromise of Caspe.
The 1453–1456 period of civil war in Kingdom of Majapahit (now in Java, Indonesia)
From 1481 until 1483 in the Kingdom of Norway, during a conflict over the succession of John, during the period of the Kalmar Union. The Norwegian Council of the Realm initially refused to accept the hereditary succession of John; as they asserted that Norway was an elective monarchy. When no serious opposition candidate emerged, the Council relented and elected John. There was also an interregnum from 1533 to 1537, after the death of Frederick I and the interregnum ended with a coup d'état by his son Christian III.
From 6 April 1490 to 15 July 1490 in the Kingdom of Hungary between the death of Matthias Corvinus and election of Vladislaus II.
The Time of Troubles in Russia (1598–1613) between the Rurikid and Romanov dynasties, which caused a famine and an invasion by Poland-Lithuania as numerous usurpers and false Dmitrys claimed to be the legitimate successor to the dead Fyodor I. Ended when the Zemsky Sobor elected Michael Romanov as the new tsar, beginning the Romanov dynasty.
The Interregnum of 1649–1660, a republican period in the three kingdoms of England, Ireland and Scotland. Government was carried out by the Commonwealth and the Protectorate of Oliver Cromwell after the execution of Charles I and before the restoration of Charles II.
A second English interregnum occurred between 23 December 1688, when James II was deposed in the Glorious Revolution, and the installation of William III and Mary II as joint sovereigns on 13 February 1689 pursuant to the Declaration of Right.
The French and British interregnum in the Dutch East Indies between 1806 and 1815 was a period of French and then British rule following the collapse of the Dutch East India Company. The First French Empire ruled from 1806 to 1811; the British Empire took over from 1811 to 1815 and then transferred control back to the Dutch.
The brief Russian interregnum of 1825, caused by uncertainty over who had succeeded the deceased Emperor Alexander I, lasted only from 1 December to 26 December 1825, but was used to stage the highly resonant Decembrist revolt. It ended when Grand Duke Konstantin Pavlovich renounced his claim to the throne, allowing Nicholas I to declare himself Tsar.
After World War I, the Habsburg ruler of the Kingdom of Hungary was deposed. On 1 March 1920, the Kingdom of Hungary was re-established, but the restoration of a Habsburg king was deemed unacceptable to the Entente powers. Instead, the National Assembly of Hungary appointed Miklós Horthy as regent. Charles IV of Hungary made two unsuccessful attempts to retake the throne. Horthy remained Regent of Hungary until the Germans removed him on 15 October 1944.
A brief interregnum occurred in Thailand between 13 October and 1 December 2016 upon the death of King Bhumibol Adulyadej. The crown prince Vajiralongkorn, in an unprecedented move, did not assume the throne immediately after the death of the previous monarch, as he asked for time to mourn while he continued functioning in his role as the crown prince. During this period, Prem Tinsulanonda served as the regent pro tempore.
In some monarchies, such as the United Kingdom, an interregnum is usually avoided due to the rule summed up as "The king is dead. Long live the king!", i.e. the heir to the throne becomes the new monarch immediately upon his predecessor's death or abdication. This famous phrase signifies the continuity of sovereignty, attached to a personal form of power known as auctoritas. This is not so in other monarchies, where the new monarch's reign begins only with a coronation or some other formal or traditional event. In the Polish–Lithuanian Commonwealth, for instance, kings were elected, which often led to relatively long interregna. During that time it was the Polish primate who served as an interrex (ruler between kings). In Belgium, the heir only becomes king upon swearing an oath of office before the parliament.
Christianity
Catholicism
A Papal interregnum occurs upon the death or resignation of the Pope of the Catholic Church, though this particular form is called a sede vacante (literally "when the seat is vacant"). The interregnum ends immediately upon the election of a new Pope by the College of Cardinals.
Anglicanism
"Interregnum" is the term used in the Anglican Communion to describe the period before a new parish priest is appointed to fill a vacancy. During an interregnum, the administration of the parish is the responsibility of the churchwardens.
Mormonism
In the Church of Jesus Christ of Latter-day Saints, when the President of the Church dies, the First Presidency is dissolved and the Quorum of the Twelve Apostles (the Twelve) becomes the Church's presiding body. Any members of the First Presidency who were formerly members of the Twelve rejoin that quorum. The period between the death of the President and the reorganization of the First Presidency is known as an "Apostolic Interregnum".
Chess
FIDE, the world governing body of international chess competition, has had two interregnums with no world champion, both during the 1940s.
Men
1946–1948: Men's World Chess Champion Alexander Alekhine died of natural causes in 1946. The interregnum lasted until 1948, when Mikhail Botvinnik won a tournament held by FIDE to decide a successor.
Women
1944–1950: Women's World Chess Champion Vera Menchik was killed in an air raid in Britain in 1944, during World War II. The interregnum lasted until 1950, when Lyudmila Rudenko won a tournament held by FIDE to decide a successor.
In fiction
The events of Isaac Asimov's Foundation Trilogy take place during the galactic interregnum of his Foundation Universe, in the 25th millennium. Foundation begins at the end of the Galactic Empire, and notes from the Encyclopedia Galactica quoted in the novels imply that a Second Galactic Empire follows the 1,000-year interregnum.
In J. R. R. Tolkien's legendarium set in Middle-earth, the disappearance of King Eärnur of Gondor is followed by a 968-year interregnum (the years of the ruling Stewards), which ends with the return of Aragorn in The Lord of the Rings.
The Old Kingdom Trilogy takes place after a 200-year interregnum that began when the reigning Queen and her two daughters were murdered by Kerrigor: 180 years of regency followed by 20 years of anarchy after the death of the last Regent.
The Vlad Taltos series is set in a fantastical world of magic, at a time directly following a 250-year interregnum during which traditional sorcery was impossible because the Orb was lost.
In the Elder Scrolls video games, there was an Interregnum in the Second Era when the Second Cyrodiilic Empire collapsed. It led to just over four centuries of bickering between small kingdoms and petty states. The Interregnum ended when Tiber Septim, or Talos, formed the Third Empire after a decade of war. Similarly, with the sacrifice of Martin Septim during the Oblivion Crisis in the Third Era, the Septim dynasty came to an end, and a seven-year interregnum passed before Titus Mede I restored the throne and ushered in the Fourth Era.
In James A. Michener's Poland (1983), a historical novel that spent 38 weeks on The New York Times Best Seller list, interregna are mentioned numerous times amid the ever-shifting power struggles that plagued the country, even up to the 1980s.
In the film A Christmas Prince, the Kingdom of Aldovia limits interregna to a maximum of one year. This becomes a central plot point when it appears Crown Prince Richard may not accept the throne prior to the Christmas deadline.
In the television series Babylon 5, the Centauri Republic goes through a brief interregnum upon Emperor Cartagia's assassination, as Cartagia never named an heir apparent due to a narcissistic plot to achieve apotheosis through the destruction of Centauri Prime by the Vorlons. Instead, Londo Mollari, who orchestrated the assassination after realizing Cartagia's insanity, is named Prime Minister and granted emergency powers; following the end of the Shadow War, the Centaurum, hoping to avoid a repeat of Cartagia, appoints Milo Virini as Regent.
In media
The television game show Jeopardy! has been regarded as having gone through two interregnums. The first, in Season 37, followed the death of Alex Trebek after the taping of Episode 75 (aired January 8, 2021) and lasted until Episode 230 (aired August 13, 2021). The second, in Season 38, followed the resignation of Mike Richards after the taping of Episode 5 (aired September 17, 2021). Mayim Bialik and Ken Jennings hosted during both interregnums.
See also
Giorgio Agamben
Geoffrey of Monmouth
Imperial Vicar
Interrex (Poland)
Argentina presidential transition
United States presidential transition
Reign
Cradle of civilization
A cradle of civilization is a location and a culture where civilization developed independently of other civilizations in other locations. A civilization is any complex society characterized by the development of the state, social stratification, urbanization, and symbolic systems of communication beyond signed or spoken languages (namely, writing systems and graphic arts).
Scholars generally acknowledge six cradles of civilization: Mesopotamia, Ancient Egypt, Ancient India and Ancient China are believed to be the earliest in Afro-Eurasia (previously called the Old World), while the Caral–Supe civilization of coastal Peru and the Olmec civilization of Mexico are believed to be the earliest in the Americas – previously known in Western literature as the New World. All of the cradles of civilization depended upon agriculture for sustenance (except possibly Caral–Supe which may have depended initially on marine resources). All depended upon farmers producing an agricultural surplus to support the centralized government, political leaders, religious leaders, and public works of the urban centers of the early civilizations.
Less formally, the term "cradle of civilization" is often used to refer to other historic ancient civilizations, such as Greece or Rome, which have both been called the "cradle of Western civilization".
Rise of civilization
The earliest signs of a process leading to sedentary culture can be seen in the Levant as early as 12,000 BC, when the Natufian culture became sedentary; it evolved into an agricultural society by 10,000 BC. The importance of water in safeguarding an abundant and stable food supply, due to favourable conditions for hunting, fishing and gathering resources including cereals, provided an initial wide-spectrum economy that triggered the creation of permanent villages.
The earliest proto-urban settlements with several thousand inhabitants emerged in the Neolithic which began in Western Asia in 10,000 BC. The first cities to house several tens of thousands were Uruk, Ur, Kish and Eridu in Mesopotamia, followed by Susa in Elam and Memphis in Egypt, all by the 31st century BC (see Historical urban community sizes).
Historic times are marked apart from prehistoric times when "records of the past begin to be kept for the benefit of future generations", whether in written or oral form. If the rise of civilization is taken to coincide with the development of writing out of proto-writing, then the proto-writing of the Near Eastern Chalcolithic (the transitional period between the Neolithic and the Bronze Age, during the 4th millennium BC) and of Harappa in the Indus Valley of South Asia around 3300 BC are the earliest instances, followed by Chinese proto-writing evolving into the oracle bone script, and again by the emergence of Mesoamerican writing systems from about 900 BC.
In the absence of written documents, most aspects of the rise of early civilizations are contained in archaeological assessments that document the development of formal institutions and the material culture. A "civilized" way of life is ultimately linked to conditions coming almost exclusively from intensive agriculture. Gordon Childe defined the development of civilization as the result of two successive revolutions, both first emerging in Western Asia: the Neolithic Revolution, which triggered the development of settled communities, and the urban revolution, which enhanced tendencies towards dense settlements, specialized occupational groups, social classes, exploitation of surpluses, monumental public buildings and writing. Few of those conditions, however, are unchallenged by the records: dense cities were not attested in Egypt's Old Kingdom (unlike Mesopotamia); cities had a dispersed population in the Maya area; the Incas lacked writing, although they could keep records with quipus, which might also have had literary uses; and monumental architecture often preceded any indication of village settlement. For instance, in present-day Louisiana, researchers have determined that cultures that were primarily nomadic organized over generations to build earthwork mounds at seasonal settlements as early as 3400 BC. Rather than a succession of events and preconditions, the rise of civilization could equally be hypothesized as an accelerated process that started with incipient agriculture and culminated in the Oriental Bronze Age.
Single or multiple cradles
Scholars once thought that civilization began in the Fertile Crescent and spread out from there by influence. Scholars now believe that civilizations arose independently at several locations in both hemispheres, and have observed that sociocultural developments occurred along different timeframes. "Sedentary" and "nomadic" communities continued to interact considerably; they were not strictly divided among widely different cultural groups. The concept of a cradle of civilization focuses on the places where inhabitants came to build cities, create writing systems, experiment with techniques for making pottery and using metals, domesticate animals, and develop complex social structures involving class systems.
Today, scholarship generally identifies six areas where civilization emerged independently: the Fertile Crescent, including Mesopotamia and the Levant; the Nile Valley; the Indo-Gangetic Plain; the North China Plain; the Andean Coast; and the Mesoamerican Gulf Coast.
Cradles of civilization
Fertile Crescent
The Fertile Crescent comprises a crescent-shaped region of fertile land in West Asia, encompassing parts of modern-day Egypt, Israel, the Palestinian territories, Lebanon, Syria, Jordan, Turkey, and Iraq, and extending to the Zagros Mountains in Iran. It stands as one of the earliest regions in the world where agricultural practices emerged, marking the advent of sedentary farming communities.
By 10,200 BC, fully developed Neolithic cultures, characterized by the Pre-Pottery Neolithic A (PPNA) and Pre-Pottery Neolithic B (7600 to 6000 BC) phases, emerged within the Fertile Crescent. These cultures diffused eastward into South Asia and westward into Europe and North Africa. Among the notable PPNA settlements is Jericho, located in the Jordan Valley, believed to be the world's earliest established city, with initial settlement dating back to around 9600 BC and fortification occurring around 6800 BC.
Current theories and findings identify the Fertile Crescent as the first and oldest cradle of civilization. Examples of sites in this area are the early Neolithic site of Göbekli Tepe (9500–8000 BC) and Çatalhöyük (7500–5700 BC).
Mesopotamia
In Mesopotamia (a region encompassing modern Iraq and bordering regions of southeast Turkey, northeast Syria and northwest Iran), the convergence of the Tigris and Euphrates rivers produced rich fertile soil and a supply of water for irrigation. Neolithic cultures emerged in the region from 8000 BC onwards. The civilizations that emerged around these rivers are the earliest known non-nomadic agrarian societies, which is why the Fertile Crescent region, and Mesopotamia in particular, are often referred to as the cradle of civilization. The Ubaid period (c. 6500 to 3800 BC) is the earliest known period on the alluvial plain, although it is likely that earlier periods lie obscured under the alluvium. It was during the Ubaid period that the movement toward urbanization began. Agriculture and animal husbandry were widely practiced in sedentary communities, particularly in northern Mesopotamia (later Assyria), and intensive irrigated hydraulic agriculture began to be practiced in the south.
Around 6000 BC, Neolithic settlements began to appear all over Egypt; studies based on morphological, genetic, and archaeological data have attributed these settlements to migrants from the Fertile Crescent arriving during the Egyptian and North African Neolithic Revolution and bringing agriculture to the region. In Mesopotamia itself, Tell el-'Oueili is the oldest Sumerian site settled during this period, around 5400 BC, and the city of Ur also first dates to the end of this period.
Sumerian civilization coalesced in the subsequent Uruk period (4000 to 3100 BC). Named after the Sumerian city of Uruk, this period saw the emergence of urban life in Mesopotamia and, during its later phase, the gradual emergence of the cuneiform script. Proto-writing in the region dates to around 3800 BC, with the earliest texts dating to 3300 BC; early cuneiform writing emerged in 3000 BC. It was also during this period that pottery painting declined as copper started to become popular, along with cylinder seals. Sumerian cities during the Uruk period were probably theocratic and were most likely headed by a priest-king (ensi), assisted by a council of elders, including both men and women. It is quite possible that the later Sumerian pantheon was modeled upon this political structure.
The Jemdet Nasr period, which is generally dated from 3100 to 2900 BC and succeeds the Uruk period, is known as one of the formative stages in the development of the cuneiform script. The oldest clay tablets come from Uruk and date to the late fourth millennium BC, slightly earlier than the Jemdet Nasr Period. By the time of the Jemdet Nasr Period, the script had already undergone a number of significant changes. It originally consisted of pictographs, but by the time of the Jemdet Nasr Period it was already adopting simpler and more abstract designs. It is also during this period that the script acquired its iconic wedge-shaped appearance.
Uruk trade networks started to expand to other parts of Mesopotamia and as far as the North Caucasus, and strong signs of governmental organization and social stratification began to emerge, leading to the Early Dynastic Period (c. 2900 BC). After the Early Dynastic period began, control of the city-states shifted away from the temple establishment, headed by a council of elders led by a priestly "En" (a male figure when the temple served a goddess, a female figure when it served a male god), towards a more secular Lugal (Lu = man, Gal = great). The Lugals included such legendary patriarchal figures as Enmerkar, Lugalbanda and Gilgamesh, who supposedly reigned shortly before the historic record opens around 2700 BC, when syllabic writing started to develop from the early pictograms. The center of Sumerian culture remained in southern Mesopotamia, even though rulers soon began expanding into neighboring areas. Neighboring Semitic groups, including the Akkadian-speaking Semites (Assyrians, Babylonians) who lived alongside the Sumerians in Mesopotamia, adopted much of Sumerian culture for their own. The earliest ziggurats appeared near the end of the Early Dynastic Period, although architectural precursors in the form of raised platforms date back to the Ubaid period.

The Sumerian King List, which dates to the early second millennium BC, consists of a succession of royal dynasties from different Sumerian cities, ranging back into the Early Dynastic Period. Each dynasty rises to prominence and dominates the region, only to be replaced by the next. The document was used by later Mesopotamian kings to legitimize their rule. While some of the information in the list can be checked against other texts such as economic documents, much of it is probably purely fictional, and its use as a historical document is limited.
Eannatum, the Sumerian king of Lagash, established the first verifiable empire in history in 2500 BC. Neighboring Elam, in modern Iran, was also part of the early urbanization of the Chalcolithic period, and Elamite states were among the leading political forces of the Ancient Near East. The emergence of Elamite written records from around 3000 BC parallels Sumerian history, where slightly earlier records have been found.

During the 3rd millennium BC, a very intimate cultural symbiosis developed between the Sumerians and the Akkadians, and Akkadian gradually replaced Sumerian as a spoken language somewhere between the 3rd and the 2nd millennia BC. The Semitic-speaking Akkadian empire emerged around 2350 BC under Sargon the Great and reached its political peak between the 24th and 22nd centuries BC. Under Sargon and his successors, the Akkadian language was briefly imposed on neighboring conquered states such as Elam and Gutium. After the fall of the Akkadian Empire and the overthrow of the Gutians, there was a brief reassertion of Sumerian dominance in Mesopotamia under the Third Dynasty of Ur. After the final collapse of Sumerian hegemony in Mesopotamia around 2004 BC, the Semitic Akkadian people of Mesopotamia eventually coalesced into two major Akkadian-speaking nations: Assyria in the north (whose earliest kings date to the 25th century BC) and, a few centuries later, Babylonia in the south. Both, Assyria in particular, would go on to form powerful empires between the 20th and 6th centuries BC, and the Sumerians were eventually absorbed into the Semitic Assyrian-Babylonian population.
Ancient Egypt
The developed Neolithic cultures belonging to the phases Pre-Pottery Neolithic A (10,200 BC) and Pre-Pottery Neolithic B (7600 to 6000 BC) appeared in the Fertile Crescent and from there spread eastwards and westwards. Contemporaneously, a grain-grinding culture using the earliest type of sickle blades had replaced the culture of hunters, fishers, and gathering people using stone tools along the Nile. Geological evidence and computer climate modeling studies also suggest that natural climate changes around 8000 BC began to desiccate the extensive pastoral lands of northern Africa, eventually forming the Sahara. Continued desiccation forced the early ancestors of the Egyptians to settle around the Nile more permanently and to adopt a more sedentary lifestyle. The oldest fully developed Neolithic culture in Egypt is the Fayum A culture, which began around 5500 BC.
By about 5500 BC, small tribes living in the Nile valley had developed into a series of inter-related cultures as far south as Sudan, demonstrating firm control of agriculture and animal husbandry, and identifiable by their pottery and personal items, such as combs, bracelets, and beads. The largest of these early cultures in northern Upper Egypt was the Badari, which probably originated in the Western Desert; it was known for its high-quality ceramics, stone tools, and use of copper. The oldest known domesticated bovines in Africa are from Fayum, dating to around 4400 BC. The Badari culture was followed by the Naqada culture, which brought a number of technological improvements. As early as the first Naqada period (the Amratian), Egyptians imported obsidian from Ethiopia, used to shape blades and other objects from flakes. By 3300 BC, just before the first Egyptian dynasty, Egypt was divided into two kingdoms, known as Upper Egypt to the south and Lower Egypt to the north.
Egyptian civilization begins during the second phase of the Naqada culture, known as the Gerzeh period, around 3500 BC and coalesces with the unification of Upper and Lower Egypt around 3150 BC. Farming produced the vast majority of food; with increased food supplies, the populace adopted a much more sedentary lifestyle, and the larger settlements grew to cities of about 5,000 residents. It was in this time that the city dwellers started using mud brick to build their cities, and the use of the arch and recessed walls for decorative effect became popular. Copper instead of stone was increasingly used to make tools and weaponry. Symbols on Gerzean pottery also resemble nascent Egyptian hieroglyphs. Early evidence also exists of contact with the Near East, particularly Canaan and the Byblos coast, during this time. Concurrent with these cultural advances, a process of unification of the societies and towns of the upper Nile River, or Upper Egypt, occurred. At the same time the societies of the Nile Delta, or Lower Egypt, also underwent a unification process. During his reign in Upper Egypt, King Narmer defeated his enemies in the Delta and merged the kingdoms of Upper and Lower Egypt under his single rule.
The Early Dynastic Period of Egypt immediately followed the unification of Upper and Lower Egypt. It is generally taken to include the First and Second Dynasties, lasting from the Naqada III archaeological period until about the beginning of the Old Kingdom, c. 2686 BC. With the First Dynasty, the capital moved from Thinis to Memphis with a unified Egypt ruled by a god-king. The hallmarks of ancient Egyptian civilization, such as art, architecture and many aspects of religion, took shape during the Early Dynastic period. The strong institution of kingship developed by the pharaohs served to legitimize state control over the land, labor, and resources that were essential to the survival and growth of ancient Egyptian civilization.
Major advances in architecture, art, and technology were made during the subsequent Old Kingdom, fueled by the increased agricultural productivity and resulting population, made possible by a well-developed central administration. Some of ancient Egypt's crowning achievements, the Giza pyramids and Great Sphinx, were constructed during the Old Kingdom. Under the direction of the vizier, state officials collected taxes, coordinated irrigation projects to improve crop yield, drafted peasants to work on construction projects, and established a justice system to maintain peace and order. Along with the rising importance of a central administration there arose a new class of educated scribes and officials who were granted estates by the pharaoh in payment for their services. Pharaohs also made land grants to their mortuary cults and local temples, to ensure that these institutions had the resources to worship the pharaoh after his death. Scholars believe that five centuries of these practices slowly eroded the economic power of the pharaoh, and that the economy could no longer afford to support a large centralized administration. As the power of the pharaoh diminished, regional governors called nomarchs began to challenge the supremacy of the pharaoh. This, coupled with severe droughts between 2200 and 2150 BC, is assumed to have caused the country to enter the 140-year period of famine and strife known as the First Intermediate Period.
Ancient India
One of the earliest Neolithic sites in the Indian subcontinent is Bhirrana along the ancient Ghaggar-Hakra riverine system in the present day state of Haryana in India, dating to around 7600 BC. Other early sites include Lahuradewa in the Middle Ganges region and Jhusi near the confluence of Ganges and Yamuna rivers, both dating to around 7000 BC.
The aceramic Neolithic at Mehrgarh in present-day Pakistan lasts from 7000 to 5500 BC, with the ceramic Neolithic at Mehrgarh lasting up to 3300 BC, blending into the Early Bronze Age. Mehrgarh is one of the earliest sites with evidence of farming and herding in the Indian subcontinent. It is likely that the culture centered around Mehrgarh migrated into the Indus Valley in present-day Pakistan and became the Indus Valley Civilisation. The earliest fortified town in the region is found at Rehman Dheri, dated to 4000 BC, in Khyber Pakhtunkhwa close to the Zhob River valley in present-day Pakistan. Other fortified towns found to date are at Amri (3600–3300 BC), Kot Diji in Sindh, and at Kalibangan (3000 BC) on the Hakra River.
The Indus Valley Civilization starts around 3300 BC with what is referred to as the Early Harappan Phase (3300 to 2600 BC), although at the start this was still a village-based culture, leaving mostly pottery for archaeologists. The earliest examples of the Indus script date to this period, as well as the emergence of citadels representing centralised authority and an increasingly urban quality of life. Trade networks linked this culture with related regional cultures and distant sources of raw materials, including lapis lazuli and other materials for bead-making. By around 2600 BC, villagers had domesticated numerous crops, including peas, sesame seeds, dates, and cotton, as well as animals, including the water buffalo.
2600 to 1900 BC marks the Mature Harappan Phase during which Early Harappan communities turned into large urban centers including Harappa, Dholavira, Mohenjo-daro, Lothal, Rupar, and Rakhigarhi, and more than 1,000 towns and villages, often of relatively small size. Mature Harappans evolved new techniques in metallurgy and produced copper, bronze, lead, and tin and displayed advanced levels of engineering. As seen in Harappa, Mohenjo-daro and the recently partially excavated Rakhigarhi, this urban plan included the world's first known urban sanitation systems: see hydraulic engineering of the Indus Valley civilization. Within the city, individual homes or groups of homes obtained water from wells. From a room that appears to have been set aside for bathing, waste water was directed to covered drains, which lined the major streets. Houses opened only to inner courtyards and smaller lanes. The housebuilding in some villages in the region still resembles in some respects the housebuilding of the Harappans. The advanced architecture of the Harappans is shown by their impressive dockyards, granaries, warehouses, brick platforms, and protective walls. The massive walls of Indus cities most likely protected the Harappans from floods and may have dissuaded military conflicts.
The people of the Indus Civilization achieved great accuracy in measuring length, mass, and time. They were among the first to develop a system of uniform weights and measures, although a comparison of available objects indicates large-scale variation across the Indus territories. Their smallest division, which is marked on an ivory scale found in Lothal in Gujarat, was approximately 1.704 mm, the smallest division ever recorded on a scale of the Bronze Age. Harappan engineers followed the decimal division of measurement for all practical purposes, including the measurement of mass as revealed by their hexahedron weights. These chert weights were in a ratio of 5:2:1, with weights of 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, and 500 units, with each unit weighing approximately 28 grams, similar to the English Imperial ounce or Greek uncia; smaller objects were weighed in similar ratios with a unit of 0.871. However, as in other cultures, actual weights were not uniform throughout the area. The weights and measures later used in Kautilya's Arthashastra (4th century BC) are the same as those used in Lothal.
Around 1800 BC, signs of a gradual decline began to emerge, and by around 1700 BC most of the cities had been abandoned. Suggested contributory causes for the localisation of the IVC include changes in the course of the river, as well as climate change that is also signalled for the neighbouring areas of the Middle East. Many scholars believe that drought led to a decline in trade with Egypt and Mesopotamia, contributing to the collapse of the Indus Civilization. The Ghaggar-Hakra system was rain-fed, and the water supply depended on the monsoons. The Indus Valley climate grew significantly cooler and drier from about 1800 BC, linked to a general weakening of the monsoon at that time. The Indian monsoon declined and aridity increased, with the Ghaggar-Hakra retracting its reach towards the foothills of the Himalaya, leading to erratic and less extensive floods that made inundation agriculture less sustainable. Aridification reduced the water supply enough to cause the civilization's demise and to scatter its population eastward. As the monsoons kept shifting south, the floods grew too erratic for sustainable agricultural activities, and the residents migrated away into smaller communities. Trade with the old cities did not flourish, however; the small surplus produced in these small communities did not allow the development of trade, and the cities died out. The Indo-Aryan peoples migrated into the Indus River Valley during this period and began the Vedic age of India. The Indus Valley Civilization did not disappear suddenly, and many elements of the civilization continued in later Indian subcontinent and Vedic cultures.
Ancient China
Drawing on archaeology, geology and anthropology, modern scholars do not see the origins of the Chinese civilization or history as a linear story but rather the history of the interactions of different and distinct cultures and ethnic groups that influenced each other's development. The specific cultural regions that developed Chinese civilization were the Yellow River civilization, the Yangtze civilization, and Liao civilization. Early evidence for Chinese millet agriculture is dated to around 7000 BC, with the earliest evidence of cultivated rice found at Chengtoushan near the Yangtze River, dated to 6500 BC. Chengtoushan may also be the site of the first walled city in China. By the beginning of the Neolithic Revolution, the Yellow River valley began to establish itself as a center of the Peiligang culture, which flourished from 7000 to 5000 BC, with evidence of agriculture, constructed buildings, pottery, and burial of the dead. With agriculture came increased population, the ability to store and redistribute crops, and the potential to support specialist craftsmen and administrators. Its most prominent site is Jiahu. Some scholars have suggested that the Jiahu symbols (6600 BC) are the earliest form of proto-writing in China. However, it is likely that they should not be understood as writing itself, but as features of a lengthy period of sign-use, which led eventually to a fully-fledged system of writing. Archaeologists believe that the Peiligang culture was egalitarian, with little political organization.
It eventually evolved into the Yangshao culture (5000 to 3000 BC), whose stone tools were polished and highly specialized. The Yangshao may also have practiced an early form of silkworm cultivation. Their main food was millet, with some sites using foxtail millet and others broomcorn millet, though some evidence of rice has been found. The exact nature of Yangshao agriculture, small-scale slash-and-burn cultivation versus intensive agriculture in permanent fields, is currently a matter of debate. Once the soil was exhausted, residents picked up their belongings, moved to new lands, and constructed new villages. However, Middle Yangshao settlements such as Jiangzhi contain raised-floor buildings that may have been used for the storage of surplus grains. Grinding stones for making flour were also found.
Later, Yangshao culture was superseded by the Longshan culture, which was also centered on the Yellow River from about 3000 to 1900 BC, its most prominent site being Taosi. The population expanded dramatically during the 3rd millennium BC, with many settlements having rammed earth walls. It decreased in most areas around 2000 BC until the central area evolved into the Bronze Age Erlitou culture. The earliest bronze artifacts have been found in the Majiayao culture site (3100 to 2700 BC).
Chinese civilization begins during the second phase of the Erlitou period (1900 to 1500 BC), with Erlitou considered the first state-level society of East Asia. There is considerable debate whether Erlitou sites correlate to the semi-legendary Xia dynasty. The Xia dynasty (2070 to 1600 BC) is the first dynasty to be described in ancient Chinese historical records such as the Bamboo Annals, first published more than a millennium later during the Western Zhou period. Although Xia is an important element in Chinese historiography, there is to date no contemporary written evidence to corroborate the dynasty. Erlitou saw an increase in bronze metallurgy and urbanization and was a rapidly growing regional center with palatial complexes that provide evidence for social stratification.

The Erlitou civilization is divided into four phases, each of roughly 50 years. During Phase I, Erlitou was a rapidly growing regional center with an estimated population of several thousand, but not yet an urban civilization or capital. Urbanization began in Phase II, with the population expanding to around 11,000. A palace area was demarcated by four roads; it contained the 150x50 m Palace 3, composed of three courtyards along a 150-meter axis, and Palace 5. A bronze foundry was established to the south of the palatial complex, controlled by the elite who lived in the palaces. The city reached its peak in Phase III and may have had a population of around 24,000. The palatial complex was surrounded by a two-meter-thick rammed-earth wall, and Palaces 1, 7, 8 and 9 were built. The volume of rammed earth for the base of the largest, Palace 1, is at least 20,000 m³. Palaces 3 and 5 were abandoned and replaced by Palace 2 and Palace 4. In Phase IV, the population decreased to around 20,000, but building continued: Palace 6 was built as an extension of Palace 2, and Palaces 10 and 11 were built. Phase IV overlaps with the Lower phase of the Erligang culture (1600–1450 BC). Around 1600 to 1560 BC, about 6 km northeast of Erlitou, a culturally Erligang walled city was built at Yanshi, which coincides with an increase in the production of arrowheads at Erlitou; this situation might indicate that the Yanshi city was competing with Erlitou for power and dominance. Production of bronzes and other elite goods ceased at the end of Phase IV, at the same time as the Erligang city of Zhengzhou was established to the east. There is no evidence of destruction by fire or war, but during the Upper Erligang phase (1450–1300 BC) all the palaces were abandoned and Erlitou was reduced to a village.
The earliest traditional Chinese dynasty for which there is both archeological and written evidence is the Shang dynasty (1600 to 1046 BC). Shang sites have yielded the earliest known body of Chinese writing, the oracle bone script, mostly divinations inscribed on bones. These inscriptions provide critical insight into many topics, from the politics, economy, and religious practices to the art and medicine of this early stage of Chinese civilization. Some historians argue that Erlitou should be considered an early phase of the Shang dynasty. The U.S. National Gallery of Art defines the Chinese Bronze Age as the period between about 2000 and 771 BC, a period that begins with the Erlitou culture and ends abruptly with the disintegration of Western Zhou rule. The Sanxingdui culture is another Chinese Bronze Age society, contemporaneous with the Shang dynasty; however, it developed a different method of bronze-making from the Shang.
Ancient Andes
The earliest evidence of agriculture in the Andean region dates to around 9000 BC in Ecuador at sites of the Las Vegas culture. The bottle gourd may have been the first plant cultivated. The oldest evidence of canal irrigation in South America dates to 4700 to 2500 BC in the Zaña Valley of northern Peru. The earliest urban settlements of the Andes, as well as North and South America, are dated to 3500 BC at Huaricanga, in the Fortaleza area, and Sechin Bajo near the Sechin River. Both sites are in Peru.
The Caral–Supe or Norte Chico civilization is understood to have emerged around 3200 BC, as it is at that point that large-scale human settlement and communal construction across multiple sites becomes clearly apparent. In the early 21st century, Peruvian archaeologist Ruth Shady established Caral–Supe as the oldest known civilization in the Americas. The civilization flourished near the Pacific coast in the valleys of three small rivers, the Fortaleza, the Pativilca, and the Supe. These river valleys each have large clusters of sites. Further south, there are several associated sites along the Huaura River. Notable settlements include the cities of Caral, the largest and most complex Preceramic site, and Aspero. Norte Chico is distinguished by its density of large sites with immense architecture. Haas argues that the density of sites in such a small area is globally unique for a nascent civilization. During the third millennium BC, Norte Chico may have been the most densely populated area of the world (excepting, possibly, northern China). The Supe, Pativilca, Fortaleza, and Huaura River valleys each have several related sites.
Norte Chico is unusual in that it completely lacked ceramics and apparently had almost no visual art. Nevertheless, the civilization exhibited impressive architectural feats, including large earthwork platform mounds and sunken circular plazas, and an advanced textile industry. The platform mounds, as well as large stone warehouses, provide evidence for a stratified society and a centralized authority necessary to distribute resources such as cotton. However, there is no evidence of warfare or defensive structures during this period. Originally, it was theorized that, unlike other early civilizations, Norte Chico developed by relying on maritime food sources in place of a staple cereal. This hypothesis, the Maritime Foundation of Andean Civilization, is still hotly debated; however, most researchers now agree that agriculture played a central role in the civilization's development, while still acknowledging a strong supplemental reliance on maritime proteins.
The Norte Chico chiefdoms were "...almost certainly theocratic, though not brutally so," according to Mann. Construction areas show possible evidence of feasting, which would have included music and likely alcohol, suggesting an elite able to both mobilize and reward the population. The degree of centralized authority is difficult to ascertain, but architectural construction patterns are indicative of an elite that, at least in certain places at certain times, wielded considerable power: while some of the monumental architecture was constructed incrementally, other buildings, such as the two main platform mounds at Caral, appear to have been constructed in one or two intense construction phases. As further evidence of centralized control, Haas points to remains of large stone warehouses found at Upaca, on the Pativilca, as emblematic of authorities able to control vital resources such as cotton. Economic authority would have rested on the control of cotton and edible plants and associated trade relationships, with power centered on the inland sites. Haas tentatively suggests that the scope of this economic power base may have extended widely: there are only two confirmed shore sites in the Norte Chico (Aspero and Bandurria) and possibly two more, but cotton fishing nets and domesticated plants have been found up and down the Peruvian coast. It is possible that the major inland centers of Norte Chico were at the center of a broad regional trade network centered on these resources.
Discover magazine, citing Shady, suggests a rich and varied trade life: "[Caral] exported its own products and those of Aspero to distant communities in exchange for exotic imports: Spondylus shells from the coast of Ecuador, rich dyes from the Andean highlands, hallucinogenic snuff from the Amazon." (Given the still limited extent of Norte Chico research, such claims should be treated circumspectly.) Other reports on Shady's work indicate Caral traded with communities in the Andes and in the jungles of the Amazon basin on the opposite side of the Andes.
Leaders' ideological power was based on apparent access to deities and the supernatural. Evidence regarding Norte Chico religion is limited: an image of the Staff God, a leering figure with a hood and fangs, has been found on a gourd dated to 2250 BC. The Staff God is a major deity of later Andean cultures, and Winifred Creamer suggests the find points to worship of common symbols of gods. As with much other research at Norte Chico, the nature and significance of the find has been disputed by other researchers. The act of architectural construction and maintenance may also have been a spiritual or religious experience: a process of communal exaltation and ceremony. Shady has called Caral "the sacred city" (la ciudad sagrada): socio-economic and political focus was on the temples, which were periodically remodeled, with major burnt offerings associated with the remodeling.
Bundles of strings uncovered at Norte Chico sites have been identified as quipu, a type of pre-writing recording device. Quipu are thought to encode numeric information, but some have conjectured that quipu may also have been used to encode other forms of data, possibly including literary or musical applications. However, the exact use of quipu by the Norte Chico and later Andean cultures has been widely debated. The presence of quipu and the commonality of religious symbols suggest a cultural link between Norte Chico and later Andean cultures.
Circa 1800 BC, the Norte Chico civilization began to decline, with more powerful centers appearing to the south and north along the coast and to the east inside the belt of the Andes. Pottery eventually developed in the Amazon Basin and spread to the Andean culture region around 2000 BC. The next major civilization to arise in the Andes would be the Chavín culture at Chavín de Huantar, located in the Andean highlands of the present-day Ancash Region. It is believed to have been built around 900 BC and was the religious and political center of the Chavín people.
Mesoamerica
Maize is believed to have been first domesticated in southern Mexico about 7000 BC. The Coxcatlan Caves in the Valley of Tehuacán provide evidence for agriculture in components dated between 5000 and 3400 BC. Similarly, sites such as Sipacate in Guatemala provide maize pollen samples dating to 3500 BC. Around 1900 BC, the Mokaya domesticated one of the dozen species of cacao. A Mokaya archaeological site provides evidence of cacao beverages dating to this time. The Mokaya are also thought to have been among the first cultures in Mesoamerica to develop a hierarchical society. What would become the Olmec civilization had its roots in early farming cultures of Tabasco, which began around 5100 to 4600 BC.
The emergence of the Olmec civilization has traditionally been dated to around 1600 to 1500 BC. Olmec features first emerged in the city of San Lorenzo Tenochtitlán, fully coalescing around 1400 BC. The rise of civilization was assisted by the local ecology of well-watered alluvial soil, as well as by the transportation network provided by the Coatzacoalcos River basin. This environment encouraged a densely concentrated population, which in turn triggered the rise of an elite class and an associated demand for the production of the symbolic and sophisticated luxury artifacts that define Olmec culture. Many of these luxury artifacts were made from materials such as jade, obsidian, and magnetite, which came from distant locations and suggest that early Olmec elites had access to an extensive trading network in Mesoamerica. The aspect of Olmec culture perhaps most familiar today is their artwork, particularly the Olmec colossal heads. San Lorenzo was situated in the midst of a large agricultural area. San Lorenzo seems to have been largely a ceremonial site, a town without city walls, centered in the midst of a widespread medium-to-large agricultural population. The ceremonial center and attendant buildings could have housed 5,500 while the entire area, including hinterlands, could have reached 13,000. It is thought that while San Lorenzo controlled much or all of the Coatzacoalcos basin, areas to the east (such as the area where La Venta would rise to prominence) and north-northwest (such as the Tuxtla Mountains) were home to independent polities. San Lorenzo was all but abandoned around 900 BC at about the same time that La Venta rose to prominence. A wholesale destruction of many San Lorenzo monuments also occurred circa 950 BC, which may indicate an internal uprising or, less likely, an invasion. The latest thinking, however, is that environmental changes may have been responsible for this shift in Olmec centers, with certain important rivers changing course.
La Venta became the cultural capital of the Olmec concentration in the region until its abandonment around 400 BC, producing monumental architectural achievements such as the Great Pyramid of La Venta. It contained a "concentration of power", as reflected by the sheer enormity of the architecture and the extreme value of the artifacts uncovered. La Venta is perhaps the largest Olmec city, and it was controlled and expanded by an extremely complex hierarchical system with a king as the ruler and the elites below him. Priests had power and influence over life and death, and likely great political sway as well. Not much is known about the political or social structure of the Olmec, though new dating techniques might, at some point, reveal more information about this elusive culture. It is possible that signs of status exist in the artifacts recovered at the site, such as depictions of feathered headdresses or of individuals wearing a mirror on their chest or forehead. "High-status objects were a significant source of power in the La Venta polity: political power, economic power, and ideological power. They were tools used by the elite to enhance and maintain rights to rulership." It has been estimated that La Venta would need to be supported by a population of at least 18,000 people during its principal occupation. Because the alluvial soil did not preserve skeletal remains, it is difficult to observe differences in burials. However, colossal heads provide proof that the elite had some control over the lower classes, as their construction would have been extremely labor-intensive. "Other features similarly indicate that many laborers were involved." In addition, excavations over the years have discovered that different parts of the site were likely reserved for elites and other parts for non-elites. This segregation of the city indicates that there must have been social classes, and therefore social inequality.
The exact cause of the decline of the Olmec culture is uncertain. Between 400 and 350 BC, the population in the eastern half of the Olmec heartland dropped precipitously. This depopulation was probably the result of serious environmental changes that rendered the region unsuited for large groups of farmers, in particular changes to the riverine environment that the Olmec depended upon for agriculture, hunting and gathering, and transportation. These changes may have been triggered by tectonic upheavals or subsidence, or the silting up of rivers due to agricultural practices. Within a few hundred years of the abandonment of the last Olmec cities, successor cultures became firmly established. The Tres Zapotes site, on the western edge of the Olmec heartland, continued to be occupied well past 400 BC, but without the hallmarks of the Olmec culture. This post-Olmec culture, often labeled Epi-Olmec, has features similar to those found at Izapa, some 550 km (330 miles) to the southeast.
The Olmecs are sometimes referred to as the mother culture of Mesoamerica, as they were the first Mesoamerican civilization and laid many of the foundations for the civilizations that followed. However, the causes and degree of Olmec influence on Mesoamerican cultures have been a subject of debate over many decades. Practices introduced by the Olmec include ritual bloodletting and the Mesoamerican ballgame, hallmarks of subsequent Mesoamerican societies such as the Maya and Aztec. Although the Mesoamerican writing system would fully develop later, early Olmec ceramics show representations that may be interpreted as codices.
Cradle of Western civilization
The origins of Western civilization can be traced back to the ancient Mediterranean world. There is academic consensus that Classical Greece was a major culture that provided the foundation of modern Western culture, philosophy, democracy, art, science, aesthetics, and theatre, as well as building design, proportion, and architecture.
Along with Greece, Ancient Rome has sometimes been described as a birthplace or as the cradle of Western Civilization because of the role the city had in politics, republicanism, law, architecture, warfare and Western Christianity.
Western Civilization is also closely associated with Christianity, the predominant religion in the West, which has its origins in Judaism—the ethnic religion of the Jewish people—and Greco-Roman philosophy. Christianity emerged as a sect within Judaism and inherited many of its foundational beliefs, scriptures, and ethical principles from Jewish tradition. Christian ethics, influenced by its Jewish roots, has significantly influenced the foundational principles of Western societies.
The blending of Greco-Roman and Judeo-Christian traditions in shaping Western civilization has led scholars to describe it as emerging from the legacies of Athens and Jerusalem, or Athens, Jerusalem, and Rome.
Other uses
The phrase "cradle of civilization".... plays a certain role in national mysticism. It has been used in Eastern as well as Western cultures, for instance, in Indian nationalism (In Search of the Cradle of Civilization 1995) and Taiwanese nationalism (Taiwan;— The Cradle of Civilization 2002). The terms also appear in esoteric pseudohistory, such as the Urantia Book, claiming the title for "the second Eden", or the pseudoarchaeology related to Megalithic Britain (Civilization One 2004,
Ancient Britain: The Cradle of Civilization 1921).
Timeline
The following timeline shows the approximate dates of the emergence of civilization (as discussed in the article) in the featured areas, together with the primary cultures associated with these early civilizations. The timeline is not indicative of the beginning of human habitation, the start of a specific ethnic group, or the development of Neolithic cultures in the area, any of which often occurred significantly earlier than the emergence of civilization proper.
The dates given are only approximate as the development of civilization was incremental and the exact date when "civilization" began for a given culture is subject to interpretation.
See also
Chronology of the ancient Near East
Cradle of Humankind
Four Great Ancient Civilizations
River valley civilization
Human history
Civilization state
Skara Brae and Barnhouse Settlement
Old Europe (archaeology)
Medieval medicine of Western Europe
In the Middle Ages, the medicine of Western Europe was composed of a mixture of existing ideas from antiquity. In the Early Middle Ages, following the fall of the Western Roman Empire, standard medical knowledge was based chiefly upon surviving Greek and Roman texts, preserved in monasteries and elsewhere. Medieval medicine is widely misunderstood as a uniform attitude of placing hope in the church and God to heal all sickness, with sickness itself a product of destiny, sin, and astral influences as physical causes. In fact, medieval medicine, especially in the second half of the medieval period (c. 1100–1500 AD), became a formal body of theoretical knowledge and was institutionalized in the universities. Medieval medicine attributed illness and disease not to sinful behavior but to natural causes; sin was connected to illness only in the more general sense that disease manifested in humanity as a result of its fallen state from God. Medieval medicine also recognized that illnesses spread from person to person, that certain lifestyles may cause ill health, and that some people have a greater predisposition towards bad health than others.
Influences
Hippocratic medicine
The Western medical tradition often traces its roots directly to the Ancient Greek civilization, much like the foundation of all of Western society. The Greeks certainly laid the foundation for Western medical practice, but much more of Western medicine can be traced to the Near East and to Germanic and Celtic cultures. The Greek medical foundation comes from a collection of writings known today as the Hippocratic Corpus. Remnants of the Hippocratic Corpus survive in modern medicine in forms such as the Hippocratic Oath, with its injunction to "do no harm".
The Hippocratic Corpus, popularly attributed to an Ancient Greek medical practitioner known as Hippocrates, lays out the basic approach to health care. Ancient Greek philosophers viewed the human body as a system that reflects the workings of nature and Hippocrates applied this belief to medicine. The body, as a reflection of natural forces, contained four elemental properties expressed to the Greeks as the four humors. The humors represented fire, air, earth, and water through the properties of hot, cold, dry, and moist, respectively. Health in the human body relied on keeping these humors in balance within each person.
Maintaining the balance of humors within a patient occurred in several ways. An initial examination took place as standard, so that the physician could properly evaluate the patient. The patient's home climate, usual diet, and astrological charts were all considered during a consultation, since the heavens were thought to influence each person differently through the elements connected to certain humors, information that was important in reaching a diagnosis. After the examination, the physician could determine which humor was unbalanced in the patient and prescribe a new diet to restore that balance. Diet included not only food to eat or avoid but also an exercise regimen and medication.
Hippocratic medicine was written down within the Hippocratic Corpus; medical practitioners were therefore required to be literate. The written treatises within the Corpus are varied, incorporating medical doctrine from any source the Greeks came into contact with. At Alexandria in Ancient Egypt, the Greeks learned the art of surgery and dissection; the Egyptian skill in these arenas far surpassed that of the Greeks and Romans, owing to social taboos among the latter regarding treatment of the dead. The early Hippocratic practitioner Herophilus engaged in dissection and added new knowledge to human anatomy in the realms of the human nervous system, the inner workings of the eye, the differentiation of arteries from veins, and the use of pulses as a diagnostic tool in treatment. Surgery and dissection yielded much knowledge of the human body that Hippocratic physicians employed alongside their methods of balancing humors in patients. The combination of knowledge in diet, surgery, and medication formed the foundation of medical learning upon which Galen would later build with his own works.
Temple healing
The Greeks had been influenced by their Egyptian neighbors in terms of medical practice in surgery and medication. However, the Greeks also absorbed many folk healing practices, including incantations and dream healing. In Homer's epic poems the Iliad and the Odyssey, the Greek gods are implicated as the cause of plagues or widespread disease, and those maladies could be cured by praying to them. The religious side of Greek medical practice is clearly manifested in the cult of Asclepius, whom Homer regarded as a great physician and who was deified in the 4th and 3rd centuries BCE. Hundreds of temples devoted to Asclepius were founded throughout the Hellenistic world and the Roman Empire, to which untold numbers of people flocked for cures. Healing visions and dreams formed the foundation for the curing process: the person seeking treatment from Asclepius slept in a special dormitory, and the healing occurred either in the person's dream, or advice from the dream could be used to seek out the proper treatment for the illness elsewhere. Afterwards the visitor to the temple bathed, offered prayers and sacrifice, and received other forms of treatment such as medication, dietary restrictions, and an exercise regimen, in keeping with the Hippocratic tradition.
Pagan and folk medicine
Some of the medicine in the Middle Ages had its roots in pagan and folk practices. This influence was highlighted by Christian theologians who adopted aspects of pagan and folk practice and chronicled them in their own works. The practices adopted by Christian medical practitioners around the 2nd century CE, and their attitudes toward pagan and folk traditions, reflected an understanding of these practices, especially humoralism and herbalism.
The practice of medicine in the early Middle Ages was empirical and pragmatic. It focused mainly on curing diseases rather than discovering the cause of diseases. Often it was believed the cause of disease was supernatural. Nevertheless, secular approaches to curing diseases existed. People in the Middle Ages understood medicine by adopting the ancient Greek medical theory of humors. Since it was clear that the fertility of the earth depended on the proper balance of the elements, it followed that the same was true for the body, within which the various humors had to be in balance. This approach greatly influenced medical theory throughout the Middle Ages.
Folk medicine of the Middle Ages dealt with the use of herbal remedies for ailments. The practice of keeping physic gardens teeming with various herbs with medicinal properties was influenced by the gardens of Roman antiquity. Many early medieval manuscripts have been noted for containing practical descriptions of the use of herbal remedies. These texts, such as the Pseudo-Apuleius, included illustrations of various plants that would have been easily identifiable and familiar to Europeans at the time. Monasteries later became centers of medical practice in the Middle Ages and carried on the tradition of maintaining medicinal gardens. These gardens became specialized, capable of sustaining plants from warmer southern regions as well as maintaining plants through the winter.
Hildegard of Bingen was an example of a medieval medical practitioner who, while educated in classical Greek medicine, also utilized folk remedies. Her understanding of plant-based medicines informed her commentary on the humors of the body, and the remedies she described in her medical text Causae et curae were influenced by her familiarity with folk treatments of disease. In the rural society of Hildegard's time, much of the medical care was provided by women, along with their other domestic duties, and kitchens were stocked with herbs and other substances required in folk remedies for many ailments. Causae et curae illustrated a view of symbiosis between the body and nature: the understanding of nature could inform medical treatment of the body. However, Hildegard maintained the belief that the root of disease was a compromised relationship between a person and God. Many parallels between pagan and Christian ideas about disease existed during the early Middle Ages, but Christian views of disease differed from pagan ones in a fundamental respect: Christians' belief in a personal relationship with God greatly influenced their views on medicine.
Evidence of pagan influence on emerging Christian medical practice was provided by many prominent early Christian thinkers, such as Origen, Clement of Alexandria, and Augustine, who studied natural philosophy and retained the aspects of secular Greek philosophy that were in line with Christian thought. They believed faith supported by sound philosophy was superior to simple faith. The classical idea of the physician as a selfless servant who had to endure unpleasant tasks and provide necessary, often painful treatment greatly influenced early Christian practitioners; the metaphor was not lost on Christians, who viewed Christ as the ultimate physician. Pagan philosophy had previously held that the pursuit of virtue should not be secondary to bodily concerns; similarly, Christians felt that, while caring for the body was important, it was second to spiritual pursuits. This relationship between faith and the body's ailments explains why most medieval medical practice was performed by Christian monks.
Monasteries
Monasteries developed not only as spiritual centers, but also as centers of intellectual learning and medical practice. The locations of the monasteries were secluded and designed to be self-sufficient, which required the monastic inhabitants to produce their own food and to care for their sick. Prior to the development of hospitals, people from the surrounding towns looked to the monasteries for help with their sick.
A combination of both spiritual and natural healing was used to treat the sick. Herbal remedies, along with prayer and other religious rituals, were used in treatment by the monks and nuns of the monasteries, who saw herbs as one of God's creations, a natural aid that contributed to the spiritual healing of the sick individual. A textual tradition of herbals also developed in the medieval monasteries. Older Latin herbal texts were translated and expanded there: the monks and nuns reorganized older texts so that they could be used more efficiently, adding, for example, a table of contents to help find information quickly. They not only reorganized existing texts but also added or eliminated information: new herbs that were discovered to be useful, or herbs known in a particular geographic area, were added, while herbs that proved ineffective were eliminated. Drawings were also added or modified so that the reader could effectively identify each herb. The herbals translated and modified in the monasteries were some of the first medical texts produced and used in medical practice in the Middle Ages.
Not only herbal texts were produced, but also other medieval texts that discussed the importance of the humors. Monasteries in medieval Europe gained access to Greek medical works by the middle of the 6th century. Monks translated these works into Latin, after which they were gradually disseminated across Europe. Scholars such as Arnald of Villanova also translated the works of Galen and other classical Greek authorities from Arabic to Latin during the Middle Ages. By producing and translating these texts, Christian monks both preserved classical Greek medical information and made it available to European medical practitioners. By the early 1300s these translated works would become available at medieval universities and form the foundation of the universities' medical teaching programs.
Hildegard of Bingen, a well-known abbess, wrote about Hippocratic medicine using humoral theory, describing how balance and imbalance of the elements affected an individual's health, along with other sicknesses known at the time, and ways to combine prayer and herbs to help the individual become well. She discussed the symptoms that were commonly seen and the known remedies for them.
In exchanging herbal texts among monasteries, monks became aware of herbs that could be very useful but were not found in the surrounding area. The monastic clergy traded with one another or used commercial means to obtain foreign herbs. On most monastery grounds there was a separate garden designated for the plants needed for the treatment of the sick; the surviving plan of St. Gall, for example, depicts a separate garden set aside strictly for medicinal herbs. Monks and nuns also devoted a large amount of their time to the cultivation of the herbs they felt were necessary in the care of the sick. Some plants were not native to the local area and needed special care to be kept alive; the monks used a form of science, what we would today consider botany, to cultivate them. Foreign herbs and plants determined to be highly valuable were grown in gardens in close proximity to the monastery so that the monastic clergy had speedy access to the natural remedies.
Medicine in the monasteries was concentrated on assisting the individual to return to normal health. The primary focus was on being able to identify symptoms and their remedies; in some instances, identifying the symptoms led the monastic clergy to consider the cause of the illness in order to implement a solution. Research and experimentation were continuously carried out in monasteries so that they could successfully fulfill their duty to God to take care of all God's people.
Christian charity
Christian practice and attitudes toward medicine drew on Middle Eastern (particularly local Jewish) and Greek influences. The Jews took their duty to care for their fellow Jews seriously, a duty that extended to lodging and medical treatment of pilgrims to the temple at Jerusalem. Temporary medical assistance had been provided in classical Greece for visitors to festivals, and the tradition extended through the Roman Empire, especially after Christianity became the state religion prior to the empire's decline. In the early medieval period, hospitals, poor houses, hostels, and orphanages began to spread from the Middle East, each with the intention of helping those most in need.
Charity, the driving principle behind these healing centers, encouraged the early Christians to care for others. The cities of Jerusalem, Constantinople, and Antioch contained some of the earliest and most complex hospitals, with many beds to house patients and staff physicians with emerging specialties. Some hospitals were large enough to provide education in medicine, surgery and patient care. St. Basil (AD 330–79) argued that God put medicines on the Earth for human use, while many early church fathers agreed that Hippocratic medicine could be used to treat the sick and satisfy the charitable need to help others.
Medicine
Medieval European medicine became more developed during the Renaissance of the 12th century, when many medical texts on both Ancient Greek medicine and Islamic medicine were translated from Greek and Arabic during the 12th and 13th centuries. The most influential among these texts was Avicenna's The Canon of Medicine, a medical encyclopedia written circa 1030 which summarized the medicine of Greek, Indian, and Muslim physicians up to that time. The Canon remained an authoritative text in European medical education until the early modern period. Other influential texts include the Liber pantegni by the Jewish author Isaac Israeli ben Solomon, and the Arabic works De Gradibus by Alkindus and Al-Tasrif by Abulcasis.
At Schola Medica Salernitana in Southern Italy, medical texts from Byzantium and the Arab world (see Medicine in medieval Islam) were readily available, translated from the Greek and Arabic at the nearby monastic centre of Monte Cassino. The Salernitan masters gradually established a canon of writings, known as the ars medicinae (art of medicine) or articella (little art), which became the basis of European medical education for several centuries.
During the Crusades the influence of Islamic medicine became stronger. The influence was mutual, and Islamic scholars such as Usamah ibn Munqidh also described their positive experiences with European medicine – Usamah describes a European doctor successfully treating infected wounds with vinegar and recommends a treatment for scrofula demonstrated to him by an unnamed "Frank".
Classical medicine
Anglo-Saxon translations of classical works like Dioscorides' Herbal survive from the 10th century, showing the persistence of elements of classical medical knowledge. Other influential translated medical texts of the time included the Hippocratic Corpus attributed to Hippocrates and the writings of Galen.
Galen of Pergamon, a Greek, was one of the most influential ancient physicians. Galen described the four classic symptoms of inflammation (redness, pain, heat, and swelling) and added much to the knowledge of infectious disease and pharmacology. He also held that an excess of one of the fluids could make someone sanguine, phlegmatic, choleric, or melancholic. His anatomic knowledge of humans was defective because it was based on dissection of animals, mainly apes, sheep, goats, and pigs. Some of Galen's teachings held back medical progress: his theory, for example, that the blood carried the pneuma, or life spirit, which gave it its red colour, coupled with the erroneous notion that the blood passed through a porous wall between the ventricles of the heart, delayed the understanding of circulation and did much to discourage research in physiology. His most important work, however, was in the field of the form and function of muscles and the function of the areas of the spinal cord. He also excelled in diagnosis and prognosis. Through Galen, knowledge of Greek medicine was transmitted to the Western world by the Arabs.
Medieval surgery
Medieval surgery arose from a foundation created by ancient Egyptian, Greek, and Arabic medicine. An example of such influence is Galen, the most influential ancient authority on surgery and anatomy, whose expertise came in part from attending to gladiators at Pergamon. The accomplishments and advances in medicine made by the Arabic world were translated and made available to the Latin world. This new wealth of knowledge allowed for a greater interest in surgery.
In Paris, in the late thirteenth century, surgical practice was deemed extremely disorganized, and the Parisian provost decided to enlist six of the most trustworthy and experienced surgeons to assess the performance of other surgeons. The emergence of universities allowed surgery to become a discipline that was learned and communicated to others as a uniform practice. The University of Padua was one of the "leading Italian universities in teaching medicine, identification and treating of diseases and ailments, specializing in autopsies and workings of the body." The most prestigious and famous part of the university, the Anatomical Theatre of Padua, is the oldest surviving anatomical theater, in which students studied anatomy by observing their teachers perform public dissections.
Surgery was formally taught in Italy, even though it was initially looked down upon as a lower form of medicine. The most important figure in the formal learning of surgery was Guy de Chauliac. He insisted that a proper surgeon should have specific knowledge of the human body, including its anatomy, the food and diet of the patient, and any other ailments that may have affected the patient. Surgeons should be versed not only in the body but also in the liberal arts. In this way, surgery was no longer regarded as a lower practice but instead began to be respected and to gain esteem and status.
During the Crusades, one of the duties of surgeons was to travel around a battlefield, assessing soldiers' wounds and declaring whether or not a soldier was deceased. Because of this task, surgeons became deft at removing arrowheads from their patients' bodies. Another class of surgeons were the barber surgeons, who were expected not only to perform formal surgery but also to be skilled at cutting hair and trimming beards. Among the procedures they conducted were bloodletting and the treatment of sword and arrow wounds.
In the mid-fourteenth century there were restrictions placed on London surgeons as to what types of injuries they were able to treat and what types of medications they could prescribe or use, because surgery was still regarded as an incredibly dangerous procedure that should only be used appropriately. Among the wounds they were permitted to treat were external injuries, such as skin lacerations caused by a sharp edge, such as a sword, dagger, or axe, or by household tools such as knives. During this time surgeons were also expected to be extremely knowledgeable about human anatomy and were held accountable for any consequences that resulted from a procedure.
Advances
The Middle Ages contributed a great deal to medical knowledge. This period saw progress in surgery, medical chemistry, dissection, and practical medicine, and laid the groundwork for later, more significant discoveries. There was a slow but constant progression in the way medicine was studied and practiced: from apprenticeships to universities, and from oral traditions to documented texts. The best-known preservers of texts, not only medical ones, were the monasteries, where monks copied and revised any medical texts they were able to obtain.
Besides documentation, the Middle Ages also had one of the first well-known female physicians, Hildegard of Bingen. Hildegard was born in 1098, and at the age of fourteen she entered the double monastery of Disibodenberg. She wrote the medical text Causae et curae, in which many medical practices of the time were demonstrated, containing the diagnosis, treatment, and prognosis of many different diseases and illnesses. The text sheds light on medieval medical practice and demonstrates the vast amount of knowledge and influences she built upon. Medicine was taken very seriously in this period, as is shown by Hildegard's detailed descriptions of how to perform medical tasks. The descriptions are nothing without their practical counterpart, and Hildegard is thought to have been an infirmarian in the monastery where she lived. An infirmarian treated not only other monastics but also pilgrims, workers, and the poor men, women, and children in the monastery's hospice. Because monasteries were located in rural areas, the infirmarian was also responsible for the care of lacerations, fractures, dislocations, and burns. Along with typical medical practice, the text also hints that the young (such as Hildegard) received hands-on training from the previous infirmarian. Beyond routine nursing, this also shows that medical remedies from plants, either grown or gathered, had a significant impact on the future of medicine: it was the beginning of the domestic pharmacy.
Although plants were the main source of medieval remedies, around the sixteenth century medical chemistry became more prominent: "Medical chemistry began with the adaptation of chemical processes to the preparation of medicine". Previously, medical chemistry was characterized by any use of inorganic materials, but it was later refined to more technical processes such as distillation. John of Rupescissa's work in alchemy, at the beginnings of medical chemistry, is recognized for its advances in chemistry; he became known for his efforts to make the philosopher's stone, also known as the fifth essence. Distillation techniques were mostly used, and it was said that by reaching a substance's purest form one would find the fifth essence, and this is where medicine came in. Remedies could be made more potent because there was now a way to remove nonessential elements, which opened many doors for medieval physicians as new and different remedies were made. Medical chemistry provided an "increasing body of pharmacological literature dealing with the use of medicines derived from mineral sources", and it also brought the use of alcohols into medicine. Though these developments were not huge bounds for the field, they were influential in determining the course of science: they marked the start of the differentiation between alchemy and chemistry.
The Middle Ages brought a new way of thinking and a lessening of the taboo on dissection. Dissection for medical purposes became more prominent around 1299; during this time the Italians were practicing anatomical dissection, and the first record of an autopsy dates from 1286. Dissection was first introduced in an educational setting at the University of Bologna, to study and teach anatomy. The fourteenth century saw a significant spread of dissection and autopsy in Italy, taken up not only by medical faculties but also by colleges for physicians and surgeons.
Roger Frugardi of Parma composed his treatise on surgery around 1180. Between 1250 and 1265 Theodoric Borgognoni produced a systematic four-volume treatise on surgery, the Cyrurgia, which promoted important innovations as well as early forms of antiseptic practice in the treatment of injury, and surgical anaesthesia using a mixture of opiates and herbs.
Compendiums like Bald's Leechbook (circa 900) include citations from a variety of classical works alongside local folk remedies.
Theories of medicine
Although each of the theories below has distinct roots in different cultural and religious traditions, they were all intertwined in the general understanding and practice of medicine. For example, the Benedictine abbess and healer Hildegard of Bingen claimed that black bile and other humour imbalances were directly caused by the presence of the Devil and by sin. Another example of the fusion of different medicinal theories is the combination of Christian and pre-Christian ideas about elf-shot (elf- or fairy-caused diseases) and its appropriate treatments. The idea that elves caused disease was a pre-Christian belief that developed into the Christian idea of disease-causing demons or devils, and treatments for this and other types of illness reflected the coexistence of Christian and pre-Christian or pagan ideas of medicine.
Humours
The theory of humours was derived from ancient medical works and was accepted until the 19th century. It stated that within every individual there were four humours, or principal fluids – black bile, yellow bile, phlegm, and blood – which were produced by various organs in the body and had to be in balance for a person to remain healthy. Too much phlegm in the body, for example, caused lung problems, and the body tried to cough up the phlegm to restore a balance. The balance of humours could be achieved by diet, medicines, and blood-letting, using leeches; the leeches were usually starved the day before application to a patient in order to increase their efficiency. The four humours were also associated with the four seasons: black bile with autumn, yellow bile with summer, phlegm with winter, and blood with spring.
The astrological signs of the zodiac were also thought to be associated with certain humours. Even now, some still use the words "choleric", "sanguine", "phlegmatic" and "melancholic" to describe personalities.
Herbalism and botany
Herbs were commonly used in salves and drinks to treat a range of maladies. The particular herbs used depended largely on the local culture and often had roots in pre-Christian religion. The success of herbal remedies was often ascribed to their action upon the humours within the body. The use of herbs also drew upon the medieval Christian doctrine of signatures, which stated that God had provided some form of alleviation for every ill, and that these things, be they animal, vegetable, or mineral, carried a mark or a signature upon them that gave an indication of their usefulness. For example, skullcap seeds (used as a headache remedy) can appear to resemble miniature skulls, and the white-spotted leaves of lungwort (used for tuberculosis) bear a similarity to the lungs of a diseased patient. A large number of such resemblances were believed to exist.
Many monasteries developed herb gardens for use in the production of herbal cures, and these remained a part of folk medicine as well as being used by some professional physicians. Books of herbal remedies were produced, one of the most famous being the Welsh Red Book of Hergest, dating from around 1400.
During the early Middle Ages, botany had undergone drastic changes from its predecessor of antiquity (Greek practice). An early medieval Western treatise on plants, known as the Ex herbis femininis, was largely based on Dioscorides' Greek text De materia medica. The Ex herbis was far more popular during this time because it was easier to read and contained plants and remedies relevant to the regions of southern Europe, where botany was being studied. It also provided better medical direction on how to create remedies and how to properly use them, and it was highly illustrated, where its predecessor was not, making the practice of botany easier to comprehend.
The re-emergence of botany came about during the sixteenth century. As part of the revival of classical medicine, one of the biggest areas of interest was materia medica: the study of remedial substances. "Italian humanists in the fifteenth century had recovered and translated ancient Greek botanical texts which had been unknown in the West in the Middle Ages or relatively ignored". Soon after this rise in interest, universities such as Padua and Bologna started to create programs and fields of study, practices that included setting up gardens so that students could collect and examine plants. "Botany was also a field in which printing made a tremendous impact, through the development of naturalistic illustrated herbals". During this period university study was highly concerned with the philosophical matters of the sciences and the liberal arts, "but by the sixteenth century both scholastic discussion of plants and reliance upon intermediary compendia for plant names and descriptions were increasingly abandoned in favor of direct study of the original texts of classical authors and efforts to reconcile names, descriptions, and plants in nature". Botanists expanded their knowledge of different plant remedies, seeds, bulbs, and uses of dried and living plants through the continuous interchange made possible by printing. In sixteenth-century medicine, botany was rapidly becoming a lively and fast-moving discipline that held wide appeal in the world of doctors, philosophers, and pharmacists.
Mental disorders
Those with mental disorders in medieval Europe were treated by a variety of methods, depending on the beliefs of the physician they consulted. Some doctors of the time believed that supernatural forces, such as witches, demons, or possession, caused mental disorders, and that prayers and incantations, along with exorcisms, would cure the afflicted and relieve their suffering. Another treatment intended to expel evil spirits from the body of a patient was trephining, a means of treating epilepsy by opening a hole in the skull through drilling or cutting. It was believed that any evil spirit or evil air would flow out of the body through the hole and leave the patient in peace.

Contrary to the common assumption that most physicians in medieval Europe attributed mental illness to supernatural factors, such cases appear to have been only a minority of diagnoses. Most physicians believed these disorders were caused by physical factors, such as the malfunction of organs or an imbalance of the humors. One of the best-known examples was the belief that an excess of black bile caused melancholia, which would now be classified as schizophrenia or depression. Medieval physicians used various forms of treatment to try to fix the physical problems thought to be causing mental disorders in their patients. When the disorder was believed to be caused by an imbalance of the four humors, doctors attempted to rebalance the body through a combination of emetics, laxatives, and different methods of bloodletting, in order to remove excess bodily fluids.
Christian interpretation
Medicine in the Middle Ages was rooted in Christianity, not only through the spread of medical texts in the monastic tradition but also through beliefs about sickness held in conjunction with medical treatment and theory. Throughout the medieval period, Christianity neither set medical knowledge back nor advanced it. The church taught that God sometimes sent illness as a punishment and that in these cases repentance could lead to a recovery; this led to the practice of penance and pilgrimage as means of curing illness. In the Middle Ages some people did not consider medicine a profession suitable for Christians, as disease was often considered God-sent: God was the "divine physician" who sent illness or healing depending on his will. From a Christian perspective, disease could be seen either as a punishment from God or as an affliction of demons (or elves; see the first paragraph under Theories of medicine). The ultimate healer in this interpretation is of course God, but medical practitioners cited both the Bible and Christian history as evidence that humans could and should attempt to cure diseases. For example, the Lorsch Book of Remedies, or Lorsch Leechbook, contains a lengthy defense of medical practice from a Christian perspective. Christian treatments focused on the power of prayer and holy words, as well as liturgical practice.
However, many monastic orders, particularly the Benedictines, were very involved in healing and caring for the sick and dying, and in many cases the Greek philosophy that early medieval medicine was based upon was compatible with Christianity. Though the widespread Christian view of sickness as a divine intervention in reaction to sin was popularly held throughout the Middle Ages, it did not rule out natural causes. The Black Death, for example, was thought to have had both divine and natural origins: the plague was seen as a punishment from God for sinning, but because God was held to be the reason for all natural phenomena, its physical cause could be scientifically explained as well. One of the more widely accepted scientific explanations was the corruption of air, in which pollutants such as rotting matter, or anything that gave the air an unpleasant scent, caused the spread of the plague.
Hildegard of Bingen (1098–1179) also played an important role in interpreting illness through both God and natural causes in her medical texts. As a nun, she believed in the power of God and prayer to heal; however, she also recognized natural forms of healing through the humors. Though there were cures for illness outside of prayer, ultimately the patient remained in the hands of God. One specific example comes from her text Causae et curae, in which she explains the practice of bleeding:
Bleeding, says Hildegard, should be done when the moon is waning, because then the "blood is low" (77:23–25). Men should be bled from the age of twelve (120:32) to eighty (121:9), but women, because they have more of the detrimental humors, up to the age of one hundred (121:24). For therapeutic bleeding, use the veins nearest the diseased part (122:19); for preventive bleeding, use the large veins in the arms (121:35–122:11), because they are like great rivers whose tributaries irrigate the body (123:6–9, 17–20). 24 From a strong man, take "the amount that a thirsty person can swallow in one gulp" (119:20); from a weak one, "the amount that an egg of moderate size can hold" (119:22–23). Afterward, let the patient rest for three days and give him undiluted wine (125:30), because "wine is the blood of the earth" (141:26). This blood can be used for prognosis; for instance, "if the blood comes out turbid like a man's breath, and if there are black spots in it, and if there is a waxy layer around it, then the patient will die, unless God restore him to life" (124:20–24).
Monasteries were also important in the development of hospitals throughout the Middle Ages, where the care of sick members of the community was an important obligation. These monastic hospitals served not only the monks who lived at the monasteries but also pilgrims, visitors, and the surrounding population. The monastic tradition of herbals and botany influenced medieval medicine as well, not only in actual medicinal uses but also in its textual traditions. Texts on herbal medicine were often copied in monasteries by monks, and there is substantial evidence that these monks also practiced what the texts described: the texts were progressively modified from one copy to the next, with notes and drawings added in the margins as the monks learned new things and experimented with the remedies and plants the books supplied. Monastic translation continued to shape medicine as many Greek medical works were translated into Arabic; once these Arabic texts were available, monasteries in western Europe could translate them in turn, which helped shape and redirect Western medicine in the later Middle Ages. The ability of these texts to spread from one monastery or school to adjoining regions created a rapid diffusion of medical texts throughout western Europe.
The influence of Christianity continued into the later periods of the Middle Ages as medical training and practice moved out of the monasteries and into cathedral schools, though more for the purpose of general knowledge than for training professional physicians. The study of medicine was eventually institutionalized in the medieval universities. Even within the university setting, religion dictated much of the medical practice being taught. For instance, the debate over when the spirit left the body influenced the practice of dissection. Universities in the south held that the soul only animated the body and left immediately upon death; because of this, the body, while still important, went from being a subject to an object. In the north, however, it was believed that the soul took longer to depart, as it was an integral part of the body. Though medical practice had become a professional and institutionalized field, the argument over the soul in the case of dissection shows that the foundation of religion was still an important part of medical thought in the late Middle Ages.
Medical universities in medieval Europe
Medicine was not a formal area of study in the early medieval era, but it grew in response to the proliferation of translated Greek and Arabic medical texts in the 11th century. Western Europe also experienced economic, population, and urban growth in the 12th and 13th centuries, leading to the ascent of medieval medical universities. The University of Salerno was considered a renowned source of medical practitioners in the 9th and 10th centuries but was not recognized as an official medical university until 1231. The founding of the universities of Bologna (1088), Paris (1150), Oxford (1167), Montpellier (1181), Padua (1222), and Lleida (1297) extended the initial work of Salerno across Europe, and by the 13th century medical leadership had passed to these newer institutions. Despite Salerno's important contributions to the foundation of the medical curriculum, scholars do not consider it one of the medieval medical universities, because the formal establishment of a medical curriculum occurred only after Salerno's decline as a center of academic medicine.
The medieval medical universities' central concept concentrated on the balance between the humors and "in the substances used for therapeutic purposes". A secondary concept of the curriculum was medical astrology, in which celestial events were thought to influence health and disease. The medical curriculum was designed to train practitioners; teachers of medical students were often successful physicians, practicing in conjunction with teaching. The curriculum of academic medicine was fundamentally based on translated texts and treatises attributed to Hippocrates and Galen, as well as on Arabic medical texts. At Montpellier's faculty of medicine, professors were required from 1309 to possess Galen's books describing the humors (De complexionibus, De virtutibus naturalibus, De criticis diebus) so that they could teach students Galen's medical theory. The translated works of Hippocrates and Galen were often incomplete and were mediated through Arabic medical texts, valued for their "independent contributions to treatment and to herbal pharmacology". Although anatomy was taught in academic medicine through the dissection of cadavers, surgery was largely independent of the medical universities, and the University of Bologna was the only one to grant degrees in surgery. Academic medicine also attended to actual medical practice, with students studying individual cases and observing the professor visiting patients.
The number of years required to become a licensed physician varied among universities. Montpellier required students without a master of arts to complete three and a half years of formal study and six months of outside medical practice. In 1309 the curriculum was changed to six years of study and eight months of outside medical practice for those without a master of arts, whereas those holding a master of arts were subject to only five years of study with eight months of outside practice. The University of Bologna required three years of philosophy, three years of astrology, and four years of attending medical lectures.
Medical practitioners
Members of religious orders were major sources of medical knowledge and cures. There appears to have been some controversy regarding the appropriateness of medical practice for members of religious orders: the decree of the Second Lateran Council of 1139 advised the religious to avoid medicine because it was a well-paying occupation with higher social status than was appropriate for the clergy. However, this official policy was not often enforced in practice, and many religious continued to practice medicine.
There were many other medical practitioners besides the clergy. Academically trained doctors were particularly important in cities with universities, and medical faculty figured prominently in defining medical guilds, accepted practices, and the required qualifications for physicians. Beneath these university-educated physicians there existed a whole hierarchy of practitioners. Wallis suggests a social hierarchy with the university-educated physicians on top, followed by "learned surgeons; craft-trained surgeons; barber surgeons, who combined bloodletting with the removal of 'superfluities' from the skin and head; itinerant specialists such as dentists and oculists; empirics; midwives; clergy who dispensed charitable advice and help; and, finally, ordinary family and neighbors". Each of these groups practiced medicine in their own capacity and contributed to the overall culture of medicine.
Hospital system
In the Medieval period the term hospital encompassed hostels for travellers, dispensaries for poor relief, clinics and surgeries for the injured, and homes for the blind, lame, elderly, and mentally ill. Monastic hospitals developed many treatments, both therapeutic and spiritual.
During the thirteenth century an immense number of hospitals were built, with the Italian cities leading the movement. Milan had no fewer than a dozen hospitals, and Florence, before the end of the fourteenth century, had some thirty, some of them very beautiful buildings. At Milan a portion of the general hospital was designed by Bramante and another part by Michelangelo. The hospital in Siena, built in honor of St. Catherine, has been famous ever since. This hospital movement spread throughout Europe. Virchow, the great German pathologist, showed in an article on hospitals that every German city of five thousand inhabitants had its hospital. He traced the whole movement to Pope Innocent III, and though he was by no means papistically inclined, Virchow did not hesitate to give extremely high praise to this pontiff for all he had accomplished for the benefit of children and suffering mankind.
Hospitals began to appear in great numbers in France and England. Following the Norman invasion of England, the explosion of French ideals led most medieval monasteries to develop a hospitium or hospice for pilgrims. This hospitium eventually developed into what we now understand as a hospital, with various monks and lay helpers providing medical care for sick pilgrims and for victims of the numerous plagues and chronic diseases that afflicted medieval western Europe. Benjamin Gordon supports the theory that the hospital, as we know it, is a French invention, but one originally developed for isolating lepers and plague victims, and only later modified to serve the pilgrim.
Owing to a well-preserved 12th-century account by the monk Eadmer of Canterbury cathedral, there is an excellent record of Bishop Lanfranc's aim to establish and maintain examples of these early hospitals:
But I must not conclude my work by omitting what he did for the poor outside the walls of the city Canterbury. In brief, he constructed a decent and ample house of stone…for different needs and conveniences. He divided the main building into two, appointing one part for men oppressed by various kinds of infirmities and the other for women in a bad state of health. He also made arrangements for their clothing and daily food, appointing ministers and guardians to take all measures so that nothing should be lacking for them.
Later developments
High medieval surgeons like Mondino de Liuzzi pioneered anatomy in European universities and conducted systematic human dissections. Unlike pagan Rome, high medieval Europe did not have a complete ban on human dissection. However, Galenic influence was still so prevalent that Mondino and his contemporaries attempted to fit their human findings into Galenic anatomy.
During the period of the Renaissance from the mid 1450s onward, there were many advances in medical practice. The Italian Girolamo Fracastoro (1478–1553) was the first to propose that epidemic diseases might be caused by objects outside the body that could be transmitted by direct or indirect contact. He also proposed new treatments for diseases such as syphilis.
In 1543 the Flemish scholar Andreas Vesalius wrote the first complete textbook on human anatomy, De Humani Corporis Fabrica ("On the Fabric of the Human Body"). Much later, in 1628, William Harvey explained the circulation of blood through the body in veins and arteries; it had previously been thought that blood was the product of food and was absorbed by muscle tissue.
During the 16th century, Paracelsus, like Girolamo Fracastoro, proposed that illness was caused by agents outside the body, such as bacteria, not by imbalances within it.
The French army doctor Ambroise Paré, born in 1510, revived the ancient Greek method of tying off blood vessels. After amputation, the common procedure had been to cauterize the open end of the amputated appendage to stop the haemorrhaging, by heating oil, water, or metal and touching it to the wound to seal off the blood vessels. Paré also believed in dressing wounds with clean bandages and ointments, including one he made himself composed of eggs, oil of roses, and turpentine. He was the first to design artificial hands and limbs for amputation patients; on one of the artificial hands, the two pairs of fingers could be moved for simple grabbing and releasing tasks, and the hand looked perfectly natural underneath a glove.
Medical catastrophes were more common in the late Middle Ages and the Renaissance than they are today. During the Renaissance, trade routes were the perfect means of transportation for disease. Eight hundred years after the Plague of Justinian, the bubonic plague returned to Europe. Starting in Asia, the Black Death reached Mediterranean and western Europe in 1348 (possibly from Italian merchants fleeing fighting in Crimea) and killed 25 million Europeans in six years, approximately one third of the total population and up to two thirds in the worst-affected urban areas. Before the Mongols left Kaffa in Crimea at the end of the siege of Kaffa, the dead or dying bodies of infected soldiers were loaded onto catapults and launched over Kaffa's walls to infect those inside. This incident was among the earliest known examples of biological warfare and is credited as the source of the spread of the Black Death into Europe.
The plague repeatedly returned to haunt Europe and the Mediterranean from the 14th through the 17th centuries. Notable later outbreaks include the Italian Plague of 1629–1631, the Great Plague of Seville (1647–1652), the Great Plague of London (1665–1666), the Great Plague of Vienna (1679), the Great Plague of Marseille in 1720–1722, and the 1771 plague in Moscow.
Before the Spanish reached the New World (continental America), the deadly infections of smallpox, measles, and influenza were unheard of there, and the Native Americans did not have the immunities the Europeans had developed through long contact with these diseases. Christopher Columbus ended the Americas' isolation in 1492 while sailing under the flag of Castile, Spain. Deadly epidemics swept across the Caribbean, and smallpox wiped out villages in a matter of months. The island of Hispaniola had a population of 250,000 Native Americans; 20 years later, the population had dropped dramatically to 6,000, and 50 years later an estimated 500 Native Americans were left. Smallpox then spread to the area that is now Mexico, where it helped destroy the Aztec Empire. In the first century of Spanish rule in what is now Mexico, 1500–1600, Central and South Americans died by the millions, and by 1650 the majority of the population of New Spain (now Mexico) had perished.
Contrary to popular belief, bathing and sanitation were not lost in Europe with the collapse of the Roman Empire. Bathing did not fall out of fashion in Europe until shortly after the Renaissance, when it was replaced by the heavy use of sweat-bathing and perfume, as it was thought that water could carry disease into the body through the skin. Medieval church authorities believed that public bathing created an environment open to immorality and disease; Roman Catholic Church officials even banned public bathing in an unsuccessful effort to halt syphilis epidemics from sweeping Europe.
Battlefield medicine
Camp and movement
For an army to be in good fighting condition, it had to maintain the health of its soldiers. One way of doing this was knowing the proper location to set up camp: military camps were not to be set up in marshy regions, since marshland tends to hold standing water, which draws mosquitos, and mosquitos in turn can carry deadly diseases such as malaria. When the camp and troops had to be moved, soldiers wore heavy-soled shoes to prevent wear on their feet. Waterborne illness also remained an issue throughout the centuries. When soldiers looked for water, they searched for a natural spring or other form of flowing water, and when a source was found, any rotting wood or plant material was removed before the water was used for drinking; if such material could not be removed, water was drawn from a different part of the source. In this way soldiers were more likely to drink from a safe source, and waterborne bacteria had less chance of making them ill. One process used to check for dirty water was to moisten a fine white linen cloth with the water and leave it out to dry: if the cloth showed any stain, the water was considered diseased; if the cloth was clean, the water was healthy and drinkable. Fresh water also assisted with sewage disposal and wound care, so a source of fresh water was a preemptive measure taken to defeat disease and keep men healthy once they were wounded.
Physicians
Surgeons
In medieval Europe the surgeon's social status improved greatly, as surgical expertise was needed on the battlefield. Owing to the number of patients, warfare created a unique learning environment for these surgeons; the dead bodies also provided an opportunity to learn through hands-on experience. As war declined, the need for surgeons declined as well, a pattern in which the status of the surgeon rose and fell according to whether a war was actively being waged.
First medical schools
Medical schools also first appeared in the medieval period. This created a divide between physicians trained in the classroom and physicians who learned their trade through practice, a shift that led classroom-trained physicians to be held in higher esteem and considered more knowledgeable. Despite this, the physicians in the militaries still lacked formal knowledge; their knowledge was acquired largely through first-hand experience. In the medical schools, physicians such as Galen were cited as the ultimate sources of knowledge, and education was aimed at proving these ancient physicians correct. This created problems as medieval knowledge surpassed that of the ancient physicians; in the scholastic setting, however, it remained the practice to cite ancient physicians, or else the information being presented was not taken seriously.
Level of care
A soldier who received medical attention was most likely treated by a physician who was not well trained, and a soldier did not have a good chance of surviving a wound that needed specific, specialized, or knowledgeable treatment. Surgery was oftentimes performed by a surgeon who knew it as a craft. There were a handful of surgeons, such as Henry de Mondeville, who were very proficient and were employed by kings such as King Philip. However, this was not always enough to save kings' lives: King Richard I of England died of his wounds at the siege of Chalus in AD 1199 owing to an unskilled arrow extraction.
Wound treatment
Arrow extraction
Treating a wound was, and remains, the most crucial part of any battlefield medicine, as this is what keeps soldiers alive. As remains true on the modern battlefield, hemorrhaging and shock were the number one killers, so the initial control of these two things was of the utmost importance in medieval medicine. Weapons such as the longbow were used widely throughout the medieval period, making arrow extraction a common practice among the armies of medieval Europe. When extracting an arrow, three guidelines were to be followed. First, the physician should examine the position of the arrow and the degree to which its parts were visible, the possibility of its being poisoned, the location of the wound, and the possibility of contamination with dirt and other debris. The second rule was to extract it delicately and swiftly. The third rule was to stop the flow of blood from the wound.
The arrowheads used against troops were typically not barbed or hooked but slim, designed to penetrate armor such as chain mail. Although this design made for smaller wounds, such arrows were more likely to embed in bone, making them harder to extract. If an arrow was barbed or hooked, removal was more challenging still; physicians would then let the wound putrefy, making the tissue softer and the arrow easier to extract. After a soldier was wounded, he was taken to a field hospital where the wound was assessed and cleaned; then, if time permitted, he was sent to a camp hospital where the wound was closed for good and allowed to heal.
Blade and knife wounds
Another common type of injury was that caused by blades. If the wound was too advanced for a simple stitch and bandage, it would often result in amputation of the limb. Surgeons of the medieval battlefield had the practice of amputation down to an art: typically it took less than a minute for a surgeon to remove the damaged limb, and another three to four minutes to stop the bleeding. The surgeon would first place the limb on a block of wood and tie ligatures above and below the site of surgery. The soft tissue would then be cut through, exposing the bone, which was then sawed through. The stump was bandaged and left to heal. The rate of mortality among amputation patients was around 39%; that figure grew to roughly 62% for patients with a high leg amputation. Medieval surgery is often construed in modern minds as barbaric, a view colored by our own medical knowledge, but surgery and medical practice in general were at the height of their advancement for their time. All procedures were done with the intent to save lives, not to cause extra pain and suffering. The speed of the procedure was an important factor, as limiting pain and blood loss led to higher survival rates.
Injuries to major arteries that caused massive blood loss were not usually treatable, as shown by the evidence of archeological remains: wounds severe enough to sever major arteries left incisions on the bones excavated by archaeologists. Surgeons were also taught to cover wounds to improve healing, and forms of antiseptic were used to stave off infection. To dress wounds, all sorts of dressings were used, such as grease, absorbent dressings, spider webs, honey, ground shellfish, clay, and turpentine; some of these methods date back to Roman battlefield medicine.
Bone breakage
Sieges were dangerous places: broken bones were a common problem, with soldiers falling as they scaled walls, among other causes. Typically it was the long bones that were fractured. These fractures were manipulated to return the bones to their correct position, and the limb was then immobilized with either a splint or a plaster mold. The plaster mold (an early cast) was made of flour and egg whites and applied to the injured area. Both methods left the bone immobilized and gave it a chance to heal.
Burn treatment
Burn treatment also required a specific approach by physicians of the time, since burning oil, fire arrows, and boiling water were all used in combat. In the early stages of treatment, physicians attempted to stop the formation of blisters and to keep the burn from drying out by placing ointments on it. These ointments typically consisted of vinegar, egg, rose oil, opium, and a multitude of different herbs; they were applied to the affected area and reapplied as needed.
See also
Byzantine medicine
Cucupha
History of hospitals
History of medicine
History of nursing
Ibn Sina Academy of Medieval Medicine and Sciences
Irish medical families
Life expectancy
Medicine in the medieval Islamic world
Medieval demography
Plague doctor
Plague doctor contract
Plague doctor costume
Tacuinum Sanitatis
Theriac
Timeline of medicine and medical technology
Treatise on Herbs
Footnotes
Further reading
Bowers, Barbara S. ed. The Medieval Hospital and Medical Practice (Ashgate, 2007); 258pp; essays by scholars
Getz, Faye. Medicine in the English Middle Ages. (Princeton University Press, 1998).
Mitchell, Piers D. Medicine in the Crusades: Warfare, Wounds, and the Medieval Surgeon (Cambridge University Press, 2004) 293 pp.
Porter, Roy. The Greatest Benefit to Mankind: A Medical History of Humanity from Antiquity to the Present. (HarperCollins, 1997)
Primary sources
Wallis, Faith, ed. Medieval Medicine: A Reader (2010).
External links
Medieval Medicine
"Index of Medieval Medical Images" UCLA Special Collections (accessed 2 September 2006).
"The Wise Woman" An overview of common ailments and their treatments from the Middle Ages presented in a slightly humorous light.
"MacKinney Collection of Medieval Medical Illustrations"
PODCAST: Professor Peregrine Horden (Royal Holloway University of London): 'What's wrong with medieval medicine?'
Walsh, James J. Medieval Medicine (1920), A & C Black, Ltd.
Interactive game with medieval diseases and cures
Encyclopedic manuscript containing allegorical and medical drawings From the Rare Book and Special Collections Division at the Library of Congress
Collection: "Death in the European Middle Ages" from the University of Michigan Museum of Art
Science in the Middle Ages
Humorism
Western Europe
13th century
The 13th century was the century which lasted from January 1, 1201 (represented by the Roman numerals MCCI) through December 31, 1300 (MCCC) in accordance with the Julian calendar.
The Mongol Empire, founded by Genghis Khan, stretched from East Asia to Eastern Europe. The conquests of Hulagu Khan and other Mongol invasions changed the course of the Muslim world, most notably through the Siege of Baghdad (1258) and the destruction of the House of Wisdom. Other Muslim powers, such as the Mali Empire and the Delhi Sultanate, conquered large parts of West Africa and the Indian subcontinent, while Buddhism declined following the conquests led by Bakhtiyar Khilji. The earliest Islamic states in Southeast Asia formed during this century, most notably Samudera Pasai. The kingdoms of Sukhothai and Hanthawaddy would emerge and go on to dominate their surrounding territories.
Europe entered the apex of the High Middle Ages, characterized by rapid legal, cultural, and religious evolution as well as economic dynamism. The Crusades after the fourth, while mostly unsuccessful in re-Christianizing the Holy Land, inspired the desire to expel the Muslim presence from Europe that drove the Reconquista and solidified a sense of Christendom. To the north, the Teutonic Order Christianized and gained dominance over Prussia, Estonia, and Livonia. Inspired by new Latin translations of classical works preserved in the Islamic world for over a thousand years, Thomas Aquinas developed Scholasticism, which dominated the curricula of the new universities. In England, King John signed the Magna Carta, beginning the tradition of Parliamentary advisement in England; this helped develop the principle of equality under law in European jurisprudence.
The Southern Song dynasty began the century as a prosperous kingdom but was later invaded and annexed by the Mongols' Yuan dynasty. The Kamakura Shogunate of Japan successfully resisted two Mongol invasion attempts, in 1274 and 1281. The Korean state of Goryeo resisted a Mongol invasion but eventually sued for peace and became a client state of the Yuan dynasty.
In North America, according to some population estimates, the population of Cahokia grew to be comparable to the population of 13th-century London. In Peru, the Kingdom of Cuzco began as part of the Late Intermediate Period. In Mayan civilization, the 13th century marked the beginning of the Late Postclassic period. The Kanem Empire in what is now Chad reached its apex. The Solomonic dynasty in Ethiopia and the Zimbabwe Kingdom were founded.
Events
1201–1209
1202: Introduction of Liber Abaci by Fibonacci.
1202: The Battle of Basian occurs on July 27 between the Kingdom of Georgia and the Seljuks.
1202: The Battle of Mirebeau occurs on August 1 between Arthur I of Brittany and John of England.
1204: Islamization of Bengal by Bakhtiyar Khalji and oppression of Buddhism in East India.
1204: Fourth Crusade of 1202–1204 captures Zadar for Venice and sacks Byzantine Constantinople, creating the Latin Empire.
1204: Fall of Normandy from Angevin hands to the French King, Philip Augustus, end of Norman domination of France.
1205: The Battle of Adrianople occurs on April 14 between the Bulgarians under Tsar Kaloyan of Bulgaria and the Crusaders under Baldwin I (1172–1205), the first emperor of the Latin Empire of Constantinople.
1206: Genghis Khan is declared Great Khan of the Mongols.
1206: The Delhi Sultanate is established in Northern India under the Mamluk Dynasty.
1209: Francis of Assisi founds the Franciscan Order.
1209: The Albigensian Crusade is declared by Pope Innocent III.
1210s
1210: Qutb-ud-Din Aibak, the first ruler of the Delhi Sultanate, falls from his horse while playing chovgan (a form of polo on horseback) in Lahore and dies instantly when the pommel of the saddle pierces his ribs.
1212: The Battle of Las Navas de Tolosa in Iberia marks the beginning of a rapid Christian reconquest of the southern half of the Iberian Peninsula, mainly from 1230–1248, with the defeat of Moorish forces.
1213: The Kingdom of France defeats the Crown of Aragon at the Battle of Muret.
1214: France defeats the English and Imperial German forces at the Battle of Bouvines.
1215: King John signs Magna Carta at Runnymede.
1216: Battle of Lipitsa between Russian principalities.
1216: Maravarman Sundara I reestablishes the Pandya dynasty in Southern India.
1217–1221: Fifth Crusade captures Egyptian Ayyubid port city of Damietta; ultimately the Crusaders withdraw.
1220s
c. 1220: The Kingdom of Mapungubwe dissolves.
1220: The Kingdom of Zimbabwe begins.
1221: Merv, Herat and Nishapur are destroyed in the Mongol conquest of the Khwarazmian Empire.
1222: Andrew II of Hungary signs the Golden Bull which affirms the privileges of Hungarian nobility.
1223: The Signoria of the Republic of Venice is formed, consisting of the Doge, the Minor Council, and the three leaders of the Quarantia.
1223: The Mongol Empire defeats various Russian principalities at the Battle of the Kalka River.
1223: Volga Bulgaria defeats the army of the Mongol Empire at the Battle of Samara Bend.
1225: The Trần dynasty of Vietnam is established when Emperor Trần Thái Tông ascends the throne after his uncle Trần Thủ Độ orchestrates the overthrow of the Lý dynasty.
1227: Estonians are finally subjugated to German crusader rule during the Livonian Crusade.
1227: Genghis Khan dies.
1228–1229: Sixth Crusade under the excommunicated Frederick II Hohenstaufen, who returns Jerusalem to the Crusader States.
1228–1230: First clash between Gregory IX and Frederick II.
1226–1250: Dispute between the so-called second Lombard League and Frederick II.
1230s
1232: The Mongols besiege Kaifeng, the capital of the Jin dynasty, capturing it in the following year.
1233: At the Battle of Ganter, Ken Arok defeats Kertajaya, the last king of Kediri, and establishes the Singhasari kingdom, ending the reign of the Isyana dynasty and founding his own Rajasa dynasty.
1235: The Mandinka tribes unite to form the Mali Empire which leads to the downfall of Takrur in the 1280s.
1239–1250: Third conflict between the Holy Roman Empire and the Papacy.
1237–1240: Mongol Empire conquers Kievan Rus.
1238: Sukhothai becomes the first capital of Sukhothai Kingdom.
1240s
1240: Russians defeat the Swedish army at the Battle of the Neva.
1241: Mongol Empire defeats Hungary at the Battle of Mohi and defeats Poland at the Battle of Legnica. Hungary and Poland ravaged.
1242: Russians defeat the Teutonic Knights at the Battle of Lake Peipus.
1243–1250: Second Holy Roman Empire–Papacy War.
1244: Ayyubids and Khwarezmians defeat the Crusaders and their Arab allies at the Battle of La Forbie.
1249: End of the Portuguese Reconquista against the Moors, when King Afonso III of Portugal reconquers the Algarve.
1248–1254: The Seventh Crusade captures the Egyptian Ayyubid port city of Damietta; the Crusaders ultimately withdraw after the capture of the French king Louis IX. The Mamluks overthrow the Ayyubid dynasty.
1250s
By 1250, Pensacola culture, through trade, begins influencing Coastal Coles Creek culture.
1250: The Mamluk dynasty is founded in Egypt.
1257: Baab Mashur Malamo established the Sultanate of Ternate in Maluku.
1258: Baghdad captured and destroyed by the Mongols, effective conclusion of the Abbasid Caliphate in Baghdad.
1258: The Pandyan emperor Jatavarman Sundara I invades Eastern India and northern Sri Lanka.
1259: The Treaty of Paris is signed between Louis IX of France and Henry III of England.
1260s
1260: The Mongols suffer their first major military defeat, at the Battle of Ain Jalut, against the Egyptians.
1260: Toluid Civil War begins between Kublai Khan and Ariq Böke for the title of Great Khan.
1261: Byzantines under Michael VIII retake Constantinople from the Crusaders and Venice.
1262: Iceland brought under Norwegian rule, with the Old Covenant.
1265: Dominican theologian Thomas Aquinas begins to write his Summa Theologiae.
1268: Fall of the Crusader State of Antioch to the Egyptians.
1270s
1270: Goryeo dynasty swears allegiance to the Yuan dynasty.
1270: The Zagwe dynasty is displaced by the Solomonic dynasty.
1271: Edward I of England and Charles of Anjou arrive in Acre, starting the Ninth Crusade against Baibars.
1272–1274: Second Council of Lyon attempts to unite the churches of the Eastern Roman Empire with the Church of Rome.
1274: The Mongols launch their first invasion of Japan, but they are repelled by the Samurai and the Kamikaze winds.
1274: The Tepanec give the Mexica permission to settle at the islet of Cauhmixtitlan (Eagle's Place Between the Clouds).
1275: Sant Dnyaneshwar, who wrote the Dnyaneshwari (a commentary on the Bhagavad Gita) and the Amrutanubhav, is born.
1275: King Kertanegara of Singhasari launched Pamalayu expedition against Melayu Kingdom in Sumatra (ended in 1292).
1277: Passage of the last and most important of the Paris Condemnations by Bishop Tempier, which banned a number of Aristotelian propositions.
1279: The Song dynasty ends after losing the Battle of Yamen to the Mongols.
1279: The Chola Dynasty in Southern India officially comes to an end.
1280s
1281: The Mongols launch their second invasion of Japan, but like their first invasion they are repelled by the Samurai and the Kamikaze winds.
1282: Aragon acquires Sicily after the Sicilian Vespers.
1284: Peterhouse, Cambridge founded by Hugo de Balsham, the Bishop of Ely.
1284: King Kertanegara launches the Pabali expedition to Bali, integrating Bali into the Singhasari territory.
1285: Second Mongol raid against Hungary, led by Nogai Khan.
1289: The County of Tripoli falls to the Bahri Mamluks led by Qalawun.
1289: Kertanegara insults the envoy of Kublai Khan, who had demanded that Java pay tribute to the Yuan dynasty.
1290–1300
1290: The Mamluk dynasty of Delhi comes to an end and is replaced by the Khalji dynasty.
1290: By the Edict of Expulsion, King Edward I of England orders all Jews to leave the Kingdom of England.
1291: The Swiss Confederation of Uri, Schwyz, and Unterwalden forms.
1291: Mamluk Sultan of Egypt al-Ashraf Khalil captures Acre, thus ending the Crusader Kingdom of Jerusalem (the last Christian state remaining from the Crusades).
1292: Jayakatwang, duke of Kediri, rebels and kills Kertanegara, ending the Singhasari kingdom.
1292: Marco Polo, on his voyage from China to Persia, visits Sumatra and reports that, on the northern part of Sumatra, there were six trading ports, including Ferlec, Samudera and Lambri.
1292: King Mangrai founds the Lanna kingdom.
1293: Mongol invasion of Java: Kublai Khan of Yuan-dynasty China sends a punitive expedition against Kertanegara of Singhasari, but the Mongol forces are repelled.
1293: On 10 November, the coronation of Nararya Sangramawijaya as monarch, marks the foundation of the Hindu Majapahit kingdom in eastern Java.
1296: First War of Scottish Independence begins.
1297: Membership in the Mazor Consegio or the Great Council of Venice of the Venetian Republic is sealed and limited in the future to only those families whose names have been inscribed therein.
1299: Ottoman Empire is established under Osman I.
1300: Islam is likely established in the Aceh region.
1300: Aji Batara Agung Dewa Sakti founds the Kingdom of Kutai Kartanegara/Sultanate of Kutai in the Tepian Batu or Kutai Lama.
1300: Turku Cathedral is consecrated in Turku.
1300: Sri Rajahmura Lumaya, known by his shortened name Sri Lumay, a half-Tamil, half-Malay minor prince of the Chola dynasty in Sumatra, establishes the Indianized Rajahnate of Cebu on Cebu Island in the Philippine archipelago.
Inventions, discoveries, introductions
Early 13th century – Xia Gui paints Twelve Views from a Thatched Hut, during the Southern Song dynasty (now in Nelson-Atkins Museum of Art, Kansas City, Missouri).
The motet form originates out of the Ars antiqua tradition of Western European music.
Manuscript culture develops in European cities during this period, marking a shift in book production from monasteries to cities.
The pecia system of copying books develops in Italian university towns and is taken up by the University of Paris in the middle of the century.
Wooden movable type printing invented by Chinese governmental minister Wang Zhen in 1298.
The earliest known rockets, landmines, and handguns are made by the Chinese for use in warfare.
The Chinese adopt the windmill from the Islamic world.
A Guan ware vase is made during the Southern Song dynasty; it is now kept at the Percival David Foundation of Chinese Art, London.
1250 – Cliff Palace, Mesa Verde, and other Ancestral Pueblo architectural complexes reach their apex.
1280s – Eyeglasses are invented in Venice, Italy.
Late 13th century – Night Attack on the Sanjo Palace is made during the Kamakura period. It is now kept at Museum of Fine Arts, Boston.
Late 13th century – Descent of the Amida Trinity, raigo triptych, is made, Kamakura period. It is now kept at the Art Institute of Chicago.
The Neo-Aramaic languages begin to develop during the course of the century.
See also
Christianity in the 13th century
References
External links
2nd millennium
Centuries
Sea change (idiom)
Sea change or sea-change is an English idiomatic expression that denotes a substantial change in perspective, especially one that affects a group or society at large, on a particular issue. It is similar in usage and meaning to a paradigm shift, and may be viewed as a change to a society or community's zeitgeist, with regard to a specific issue. The phrase evolved from an older and more literal usage when the term referred to an actual "change wrought by the sea", a definition now remaining in very limited usage.
History
The term appears in William Shakespeare's The Tempest in the song Full fathom five sung by a supernatural spirit, Ariel, to Ferdinand, a prince of Naples, after Ferdinand's father's apparent death by drowning. The term sea change is used to mean a metamorphosis or alteration.
Usage
A literary character may transform over time into a better person after undergoing various trials or tragedies (e.g. "There is a sea change in Scrooge's personality towards the end of Charles Dickens' A Christmas Carol.") As with the term Potemkin village, sea change has also been used in business culture. In the United States, it is often used as a corporate or institutional buzzword. In this context, it need not refer to a substantial or significant transformation.
References
Further reading
Rich and Strange: Gender, History, Modernism. pp. 3–.
The Absent Shakespeare. pp. 131–132.
Data Protection: Governance, Risk Management, and Compliance. p. xx.
Complexity, Management and the Dynamics of Change: Challenges for Practice. p. 78.
The Shakespeare Wars: Clashing Scholars, Public Fiascoes, Palace Coups. p. 509.
Shakespeare Survey, Volume 24. p. 106.
English-language idioms
The Tempest
Cultural nationalism
Cultural nationalism is a term used by scholars of nationalism to describe efforts among intellectuals to promote the formation of national communities through emphasis on a common culture. It is contrasted with "political" nationalism, which refers to specific movements for national self-determination through the establishment of a nation-state.
Definition
John Hutchinson's 1987 work The Dynamics of Cultural Nationalism argued against earlier scholarship that tended to conflate nationalism with state-seeking movements. Hutchinson developed a typology distinguishing cultural from political nationalists, describing how the former act as moral innovators who emerge at times of crisis to engender movements offering new maps of identity based on historical myths, which in turn may inspire programmes of socio-political regeneration from the latter. He emphasises the dynamic role of historians and artists, showing how they interact with religious reformists and a discontented modernising intelligentsia to form national identities.
In his later work, Hutchinson concedes that this earlier distinction may be too simplistic.
What distinguishes these cultural "revivals" from earlier ones is their political dynamism, arising from the "coming together of neo-classical and pre-romantic European intellectual currents". These cultural nationalist movements aimed at cultural homogenisation and utilised the study of history as a resource for social innovation. Intellectuals aim to "present populations with new maps of identity and political prescriptions that claim to combine the virtues of historical tradition and modern progress at times of crisis".
History
Anthony D. Smith describes how intellectuals played a primary role in generating cultural perceptions of nationalism.
Smith posits the challenges posed to traditional religion and society in the Age of Revolution propelled many intellectuals to "discover alternative principles and concepts, and a new mythology and symbolism, to legitimate and ground human thought and action". The simultaneous concept of 'historicism' was characterised by an emerging belief in the birth, growth, and decay of specific peoples and cultures, which became "increasingly attractive as a framework for inquiry into the past and present and [...] an explanatory principle in elucidating the meaning of events, past and present".
Johann Gottfried Herder and Johann Gottlieb Fichte are considered key figures who argued for such a cultural definition of nationhood. They emphasised the distinctness of national cultures based predominantly around language, stressing its character as "the epitome of people’s unique historical memories and traditions and the central source of the national spirit".
Miroslav Hroch argues that cultural nationalism laid the foundation for the emergence of political nationalism.
For Yael Tamir, the right to national self-determination represents the embodiment of the "unique cultural essence of cultural groups" and their right to develop cultural distinctiveness, irrespective of whether these groups seek an independent nation-state.
Criticism
Some scholars, such as Craig Calhoun and Eric Hobsbawm, among others, criticize cultural definitions of nationhood for neglecting the role of the state in the formation of national identities and the role played by socio-political elites in constructing cultural identities. Similarly, Paul Brass argues national identities are not given but rather the product of the politics of socio-political elites.
Umut Ozkirimli rejects a sharp distinction between cultural and political nationalism, emphasising that nationalism is about both. He states it simultaneously involves "the ‘culturalization’ of politics and the ‘politicisation’ of culture".
Examples
Moderate manifestations of Flemish or Hindu nationalisms might be "cultural nationalism", while these same movements also include forms of ethnic nationalism and national mysticism.
See also
Minzu (anthropology)
References
Further reading
David Aberbach, 2008, Jewish Cultural Nationalism: Origins and Influences.
Kosaku Yoshino, 1992, Cultural Nationalism in Contemporary Japan: A Sociological Enquiry.
J. Ellen Gainor, 2001, Performing America: Cultural Nationalism in American Theater.
G. Gordon Betts, 2002, The Twilight of Britain: Cultural Nationalism, Multiculturalism, and the Politics of Toleration.
Yingjie Guo, 2004, Cultural Nationalism in Contemporary China: The Search for National Identity under Reform.
Mike Featherstone, 1990, Global Culture: Nationalism, Globalization and Modernity.
Starrs, Roy, 2004.
Vincent Martigny, 2016, Dire la France. Culture(s) et identités nationales.
Nationalism
Culture
Parochialism
Parochialism is the state of mind whereby one focuses on small sections of an issue rather than considering its wider context. More generally, it consists of being narrow in scope. In that respect, it is a synonym of "provincialism". It may, particularly when used pejoratively, be contrasted to cosmopolitanism. The term insularity (related to an island) may be similarly used to connote limited exposure.
Parish order
The term originates from the idea of a parish (Late Latin: parochia), one of the smaller divisions within many Christian churches such as the Catholic, Eastern Orthodox, and Anglican churches.
Events, groups, and decisions within a parish are based locally, sometimes taking little heed of what is going on in the wider Church. A parish can become excessively focused on the local scale (and thus on a particular point of view), having too little contact with the broader outside and showing meager interest in, and possibly knowledge of, the universal scale.
Subsidiarity is an organizing principle that matters ought to be handled by the smallest, lowest or least centralized competent authority. The Oxford English Dictionary defines subsidiarity as the idea that a central authority should have a subsidiary function, performing only those tasks which cannot be performed effectively at a more immediate or local level.
Terminology
The term "parochial" can be applied in both culture and economics if a local culture or geographic area's government makes decisions based on solely local interests that do not take into account the effect of the decision on the broader community. The term may also be applied to decisions and events that are considered to be trivial in the grand scheme of things but that may be overemphasized in a smaller community, such as disputes between neighbors.
Parochialism in politics
Parochialism can be found around the world and has sometimes been acknowledged by local institutions. For example, in a change of curriculum on February 7, 2007, Harvard University said that one of the main purposes of the major curriculum overhaul (the first in three decades) was to overcome American "parochialisms", referring in this case to a national point of view rather than one concerned with any particular small community.
The political principle of localism is that which supports local production and consumption of goods, local control of government, and local culture and identity. Localist politics have been approached from many directions by different groups. Nevertheless, localism can generally be described as related to regionalism, and in opposition to centralism.
As a pejorative, the term parish pump politics is used to describe political activity that is more evidently concerned with addressing the immediate needs of the local electorate than with strategy that might affect their long-term well-being. It is often applied alongside the term Gombeenism, which refers to an underhanded, shady individual interested in making a profit for himself or herself.
Cosmopolitanism versus parochialism
In 1969 Everett Carll Ladd published Ideology in America, his study of political attitudes in the Greater Hartford, Connecticut area. For context, he introduced the "conventional dichotomy" of liberal versus conservative in political thought and contrasted it with an alternative dimension of cosmopolitanism versus parochialism. Ladd acknowledges Robert Merton's anticipation of this localist-versus-cosmopolitan dichotomy.
Ladd describes a parochial leader in terms of their largely local attachments:
They are, typically, small businessmen and locally oriented professionals who have spent all or most of their lives in the community and whose horizons and connections are narrow and limited to it. Their orthodoxies – partly due to less formal training and partly because of their associations and contacts – are the older "prescientific" ones. They have influence not because of expertise or controlling positions in major corporate structures, but because of personal characteristics – their friendships and associations with common men (typically as voters) in the community. They reflect the hostility of their marginally "have" constituents to demands for change which threaten their economic position or social status.
He makes clear that he does not demonize the adherents of parochialism:
There will be a strong temptation to draw from my construction a picture of Parochials as the bad guys of the new ideological struggle. This is not intended. The response of Parochials probably is as "reasonable", given their sociopolitical position, as is that of Cosmopolitans in light of theirs. What I have tried to suggest is that however humanely inclined they may be as individuals, Hartford Parochials are fundamentally "reactionary", reacting against a new orthodoxy, a new expertise, a new complexity, and for them a new and diminished status. Parochialism is a "reactionary" ideology in a civilisation technicienne, one that has muffled traditional economic tensions, accumulated scientific knowledge about agonizing social problems, and acquired a staggering body of technical expertise.
The dichotomy between parochialism and cosmopolitanism, as well as provincialism and cosmopolitanism, has been challenged in recent debates aimed at highlighting the empowering value of the parochial and the local.
See also
All politics is local
Country (identity)
Groupthink
Intellectual inbreeding
Localism
Nationalism
NIMBY
Parochial school
Pork barrel
Xiaonong Yishi
Parochial altruism
References
Political ideologies
Political science terminology
Classicism
Classicism, in the arts, refers generally to a high regard for a classical period, classical antiquity in the Western tradition, as setting standards for taste which the classicists seek to emulate. In its purest form, classicism is an aesthetic attitude dependent on principles based in the culture, art and literature of ancient Greece and Rome, with the emphasis on form, simplicity, proportion, clarity of structure, perfection and restrained emotion, as well as explicit appeal to the intellect. The art of classicism typically seeks to be formal and restrained: of the Discobolus Sir Kenneth Clark observed, "if we object to his restraint and compression we are simply objecting to the classicism of classic art. A violent emphasis or a sudden acceleration of rhythmic movement would have destroyed those qualities of balance and completeness through which it retained until the present century its position of authority in the restricted repertoire of visual images." Classicism, as Clark noted, implies a canon of widely accepted ideal forms, such as the Western canon that he examined in The Nude (1956).
Classicism is a force which is often present in post-medieval European and European influenced traditions; however, some periods felt themselves more connected to the classical ideals than others, particularly the Age of Enlightenment, when Neoclassicism was an important movement in the visual arts.
General term
Classicism is a specific genre of philosophy, expressing itself in literature, architecture, art, and music, which has Ancient Greek and Roman sources and an emphasis on society. It was particularly expressed in the Neoclassicism of the Age of Enlightenment.
Classicism is a recurrent tendency in the Late Antique period, and had a major revival in Carolingian and Ottonian art. There was another, more durable revival in the Italian Renaissance when the fall of Byzantium and rising trade with the Islamic cultures brought a flood of knowledge about, and from, the antiquity of Europe. Until that time, the identification with antiquity had been seen as a continuous history of Christendom from the conversion of Roman Emperor Constantine I. Renaissance classicism introduced a host of elements into European culture, including the application of mathematics and empiricism to art, humanism, literary and depictive realism, and formalism. Importantly, it also introduced polytheism, or "paganism", and the juxtaposition of ancient and modern.
The classicism of the Renaissance led to, and gave way to, a different sense of what was "classical" in the 16th and 17th centuries. In this period, classicism took on more overtly structural overtones of orderliness, predictability, the use of geometry and grids, the importance of rigorous discipline and pedagogy, as well as the formation of schools of art and music. The court of Louis XIV was seen as the center of this form of classicism, with its references to the gods of Olympus as a symbolic prop for absolutism, its adherence to axiomatic and deductive reasoning, and its love of order and predictability.
This period sought the revival of classical art forms, including Greek drama and music. Opera, in its modern European form, had its roots in attempts to recreate the combination of singing and dancing with theatre thought to be the Greek norm. Examples of this appeal to classicism included Dante, Petrarch, and Shakespeare in poetry and theatre. Tudor drama, in particular, modeled itself after classical ideals and divided works into Tragedy and Comedy. Studying Ancient Greek became regarded as essential for a well-rounded education in the liberal arts.
The Renaissance also explicitly returned to architectural models and techniques associated with Greek and Roman antiquity, including the golden rectangle as a key proportion for buildings, the classical orders of columns, as well as a host of ornament and detail associated with Greek and Roman architecture. They also began reviving plastic arts such as bronze casting for sculpture, and used the classical naturalism as the foundation of drawing, painting and sculpture.
The Age of Enlightenment identified itself with a vision of antiquity which, while continuous with the classicism of the previous century, was shaken by the physics of Sir Isaac Newton, the improvements in machinery and measurement, and a sense of liberation which they saw as being present in the Greek civilization, particularly in its struggles against the Persian Empire. The ornate, organic, and complexly integrated forms of the baroque were to give way to a series of movements that regarded themselves expressly as "classical" or "neo-classical", or would rapidly be labelled as such. For example, the painting of Jacques-Louis David was seen as an attempt to return to formal balance, clarity, manliness, and vigor in art.
The 19th century saw the classical age as being the precursor of academicism, including such movements as uniformitarianism in the sciences, and the creation of rigorous categories in artistic fields. Various movements of the Romantic period saw themselves as classical revolts against a prevailing trend of emotionalism and irregularity, for example the Pre-Raphaelites. By this point, classicism was old enough that previous classical movements received revivals; for example, the Renaissance was seen as a means to combine the organic medieval with the orderly classical. The 19th century continued or extended many classical programs in the sciences, most notably the Newtonian program to account for the movement of energy between bodies by means of exchange of mechanical and thermal energy.
The 20th century saw a number of changes in the arts and sciences. Classicism was used both by those who rejected, or saw as temporary, transfigurations in the political, scientific, and social world and by those who embraced the changes as a means to overthrow the perceived weight of the 19th century. Thus the label "classical" was applied both to pre-20th-century disciplines and to modern movements in art that saw themselves as aligned with light, space, sparseness of texture, and formal coherence.
In present-day philosophy, classicism is used as a term particularly in relation to Apollonian over Dionysian impulses in society and art: that is, a preference for rationality, or at least rationally guided catharsis, over emotionalism.
In the theatre
Classicism in the theatre was developed by 17th century French playwrights from what they judged to be the rules of Greek classical theatre, including the "Classical unities" of time, place and action, found in the Poetics of Aristotle.
Unity of time referred to the need for the entire action of the play to take place in a fictional 24-hour period
Unity of place meant that the action should unfold in a single location
Unity of action meant that the play should be constructed around a single 'plot-line', such as a tragic love affair or a conflict between honour and duty.
Examples of classicist playwrights are Pierre Corneille, Jean Racine and Molière. In the period of Romanticism, Shakespeare, who conformed to none of the classical rules, became the focus of French argument over them, in which the Romantics eventually triumphed; Victor Hugo was among the first French playwrights to break these conventions.
The influence of these French rules on playwrights in other nations is debatable. In the English theatre, Restoration playwrights such as William Wycherley and William Congreve would have been familiar with them. William Shakespeare and his contemporaries did not follow this Classicist philosophy, in particular since they were not French and also because they wrote several decades prior to their establishment. Those of Shakespeare's plays that seem to display the unities, such as The Tempest, probably indicate a familiarity with actual models from classical antiquity.
The most famous 18th-century Italian playwright and librettist, Carlo Goldoni, created a hybrid style of playwriting, combining the model of Molière with the strengths of Commedia dell'arte and his own wit and sincerity.
In literature
Literary classicism drew inspiration from the qualities of proportion found in the major works of ancient Greek and Latin literature.
Significant classical writers of the 17th and 18th centuries (principally playwrights and poets) include Pierre Corneille, Jean Racine, John Dryden, William Wycherley, William Congreve, Jonathan Swift, Joseph Addison, Alexander Pope, Voltaire, Carlo Goldoni, and Friedrich Gottlieb Klopstock.
In architecture
Classicism in architecture developed during the Italian Renaissance, notably in the writings and designs of Leon Battista Alberti and the work of Filippo Brunelleschi. It places emphasis on symmetry, proportion, geometry and the regularity of parts as they are demonstrated in the architecture of Classical antiquity and, in particular, the architecture of Ancient Rome, of which many examples remained.
Orderly arrangements of columns, pilasters and lintels, as well as the use of semicircular arches, hemispherical domes, niches and aedicules replaced the more complex proportional systems and irregular profiles of medieval buildings. This style quickly spread to other Italian cities and then to France, Germany, England, Russia and elsewhere.
In the 16th century, Sebastiano Serlio helped codify the classical orders and Andrea Palladio's legacy evolved into the long tradition of Palladian architecture. Building off of these influences, the 17th-century architects Inigo Jones and Christopher Wren firmly established classicism in England.
For the development of classicism from the mid-18th-century onwards, see Neoclassical architecture.
In the fine arts
For Greek art of the 5th century B.C.E., see Classical art in ancient Greece and the Severe style
Italian Renaissance painting and sculpture are marked by their renewal of classical forms, motifs and subjects. In the 15th century Leon Battista Alberti was important in theorizing many of the ideas for painting that came to a fully realized product with Raphael's School of Athens during the High Renaissance. The themes continued largely unbroken into the 17th century, when artists such as Nicolas Poussin and Charles Le Brun represented the more rigid classicism of the age. Like Italian classicizing ideas of the 15th and 16th centuries, this classicism spread through Europe in the mid-to-late 17th century.
Later classicism in painting and sculpture from the mid-18th and 19th centuries is generally referred to as Neoclassicism.
Political philosophy
Classicism in political philosophy dates back to the ancient Greeks. Western political philosophy is often attributed to the great Greek philosopher Plato. Although political theory of this time starts with Plato, it quickly becomes complex when Plato's pupil, Aristotle, formulates his own ideas. "The political theories of both philosophers are closely tied to their ethical theories, and their interest is in questions concerning constitutions or forms of government."
However, Plato and Aristotle were not the seedbed but rather seeds that grew from a seedbed of political predecessors who had debated these questions for centuries before their time. Herodotus, for example, sketched out a debate among proponents of democracy, monarchy, and oligarchy, showing how each regarded those forms of government. Such sketches were among the seedbeds from which Plato and Aristotle grew their own political theories.
Another Greek philosopher pivotal in the development of classical political philosophy was Socrates. Although he was not a theory-builder, he often stimulated fellow citizens with paradoxes that challenged them to reflect on their own beliefs. Socrates thought "the values that ought to determine how individuals live their lives should also shape the political life of the community." He believed the people of Athens let wealth and money play too large a part in the politics of their city, and he criticized the citizens for the way they amassed wealth and power at the expense of simple things like projects for their community.
Like Plato and Aristotle, Socrates did not come up with these ideas alone; his ideals stemmed from Protagoras and other 'sophists'. These 'teachers of political arts' were the first to think and act as Socrates did. Where the two diverged was in how they practiced their ideals: Protagoras' ideals were loved by Athens, whereas Socrates challenged and pushed the citizens and was not as loved.
In the end, ancient Greece is to be credited with the foundation of Classical political philosophy.
See also
Classical tradition
Quarrel of the Ancients and the Moderns
Weimar Classicism
References
Further reading
Essays by various authors on topics related to historical periods, places, and themes.
External links
Renaissance & Classicism from encyclopedia
Art movements
Movements in aesthetics
Oriental studies
Oriental studies is the academic field that studies Near Eastern and Far Eastern societies and cultures, languages, peoples, history and archaeology. In recent years, the subject has often been recast under the newer terms of Middle Eastern studies and Asian studies. Traditional Oriental studies in Europe is today generally focused on the discipline of Islamic studies; the study of China, especially traditional China, is often called Sinology. The study of East Asia in general, especially in the United States, is often called East Asian studies.
The European study of the region formerly known as "the Orient" had primarily religious origins, which have remained an important motivation until recent times. That is partly because the Abrahamic religions in Europe (Christianity, Judaism, and Islam) originated in the Middle East and because of the rise of Islam in the 7th century; consequently, there was much interest in the origins of those faiths and of Western culture in general. Learning from medieval Arabic medicine and philosophy, and from Greek works translated into Arabic, was an important factor in the Middle Ages. Linguistic knowledge preceded a wider study of cultures and history, and as Europe began to expand its influence in the region, political and economic factors encouraged growth in its academic study. In the late 18th century, archaeology became a link from the discipline to a wide European public, as artefacts brought back through a variety of means went on display in museums throughout Europe.
Modern study was influenced by imperialist attitudes and interests, and by the fascination of Mediterranean and European writers and thinkers with the "exotic" East, captured in images by artists: a repeatedly surfacing theme in the history of ideas in the West called "Orientalism". Since the 20th century, scholars from the region itself have participated on equal terms in the discipline.
History
Before Islam
The original distinction between the "West" and the "East" was crystallized by the Greco-Persian Wars of the 5th century BC, when Athenian historians drew a distinction between their "Athenian democracy" and the Persian monarchy. An institutional distinction between East and West did not exist as a defined polarity before Roman Emperor Diocletian divided the empire's administration into Oriens and Occidens in the late 3rd century AD, and before the division of the Roman Empire into Latin-speaking and Greek-speaking portions. The classical world had an intimate knowledge of its Ancient Persian neighbours (and usual enemies) but very imprecise knowledge of most of the world farther east, including the "Seres" (Chinese). However, there was substantial direct Roman trade with India, unlike that with China, during the Roman Empire.
Middle Ages
The spread of Islam and the Muslim conquests in the 7th century established a sharp opposition or even a sense of polarity in the Middle Ages between European Christendom and the Islamic world, which stretched from the Middle East and Central Asia to North Africa and Andalusia. Popular medieval European knowledge of cultures farther east was poor and depended on the widely-fictionalized travels of Sir John Mandeville and the legends of Prester John, but the equally-famous account by Marco Polo was much longer and was more accurate.
Scholarly work was initially largely linguistic in nature, with primarily a religious focus on understanding both Biblical Hebrew and languages like Syriac with early Christian literature, but there was also a wish to understand Arabic works on medicine, philosophy, and science. That effort, also called the Studia Linguarum, existed sporadically throughout the Middle Ages, and the Renaissance of the 12th century witnessed a particular growth in translations of Arabic and Greek texts into Latin, with figures like Constantine the African, who translated 37 books, mostly medical texts, from Arabic to Latin, and Herman of Carinthia, one of the translators of the Qur'an. The earliest translation of the Qur'an into Latin was completed in 1143, but little use was made of it until it was printed in 1543. It was later translated into other European languages. Gerard of Cremona and others based themselves in Andalusia to take advantage of its Arabic libraries and scholars. However, as the Christian Reconquista in the Iberian Peninsula began to accelerate in the 11th century, such contacts became rarer in Spain. Chairs of Hebrew, Arabic, and Aramaic were briefly established at Oxford and in four other universities after the Council of Vienne (1312).
There was a vague but increasing knowledge of the complex civilisations of China and of India from which luxury goods (notably cotton and silk textiles as well as ceramics) were imported. Although the Crusades produced relatively little in the way of scholarly interchange, the eruption of the Mongol Empire had strategic implications for the Crusader kingdoms and for Europe itself, which led to extended diplomatic contacts. During the Age of Exploration, European interest in mapping Asia, especially the sea routes, became intense, but most was pursued outside the universities.
Renaissance to 1800
University Oriental studies became systematic during the Renaissance, with the linguistic and religious aspects initially continuing to dominate. There was also a political dimension, as translations for diplomatic purposes were needed even before the West engaged actively with the East beyond the Ottoman Empire. A landmark was the publication in Spain in 1514 of the first Polyglot Bible, containing the complete existing texts in Hebrew and Aramaic, in addition to Greek and Latin. At Cambridge University, there has been a Regius Professor of Hebrew since 1540 (the fifth-oldest regular chair there), and the university's chair in Arabic was founded in about 1643. Oxford followed for Hebrew in 1546 (both chairs were established by Henry VIII). One distinguished scholar was Edmund Castell, who published his Lexicon Heptaglotton Hebraicum, Chaldaicum, Syriacum, Samaritanum, Aethiopicum, Arabicum, et Persicum in 1669, and scholars like Edward Pococke had traveled to the East and wrote on the modern history and society of the Eastern peoples. The University of Salamanca had Professors of Oriental Languages at least in the 1570s. In France, Jean-Baptiste Colbert initiated a training programme for Les jeunes de langues (The Youth of Languages), young linguists in the diplomatic service, like François Pétis de la Croix, who, like his father and his son, served as an Arabic interpreter to the King. The study of the Far East was pioneered by missionaries, especially Matteo Ricci and others during the Jesuit China missions, and missionary motives were to remain important, at least in linguistic studies.
During the 18th century, Western scholars reached a reasonable basic level of understanding of the geography and most of the history of the region, but knowledge of the areas least accessible to Western travelers, such as Japan and Tibet, and of their languages remained limited. Enlightenment thinkers characterized aspects of the pagan East as superior to the Christian West, as in Montesquieu's Lettres Persanes and Voltaire's ironic promotion of Zoroastrianism. Others, like Edward Gibbon, praised the relative religious tolerance of the Middle East over what they considered the intolerant Christian West. Many, including Diderot and Voltaire, praised the high social status of scholarship in Mandarin China.
The Università degli Studi di Napoli "L'Orientale" (English: University of Naples "L'Orientale"), founded in Naples in 1732, is the oldest school of Sinology and Oriental Studies of Continental Europe.
The late 18th century saw the start of a great increase in the study of the region's archaeology, which was to be an ever more important aspect of the field in the next century. Egyptology led the way and, as with many other ancient cultures, provided linguists with new material for decipherment and study.
19th century
With a great increase in knowledge of Asia among Western specialists, the increasing political and economic involvement in the region, and particularly the realization of the existence of close relations between Indian and European languages by William Jones, there emerged more complex intellectual connections between the early history of Eastern and Western cultures. Some of the developments occurred in the context of Franco–British rivalry for the control of India. Liberal economists, such as James Mill, denigrated Eastern civilizations as static and corrupt. Karl Marx, himself of Jewish origin, characterized the Asiatic mode of production as unchanging because of the economic narrowness of village economies and the state's role in production. Oriental despotism was generally regarded in Europe as a major factor in the relative failure of progress of Eastern societies. The study of Islam was particularly central to the field since most people living in the geographical area that was termed as the Orient were Muslims. The interest in understanding Islam was fueled partly by economic considerations of the growing trade in the Mediterranean region and by the changing cultural and intellectual climate of the time.
During the course of the century, Western archaeology spread across the Middle East and Asia, with spectacular results. In the 1850s, for example, the French government was determined to mount large-scale operations in Assyria and Mesopotamia to showcase its dominance in the region. An archaeological team led by Victor Place excavated the palace of the Assyrian king Sargon II at Khorsabad (ancient Dur-Sharrukin), the first systematic excavation of the site. The expedition resulted in a pioneering publication, Nineveh and Assyria, jointly authored by Victor Place and Félix Thomas and published around 1867. New national museums provided a setting for important archaeological finds, most of which were then brought back to Europe, and they put Orientalists in the public spotlight as never before.
The first serious European studies of Buddhism and Hinduism were by the scholars Eugène Burnouf and Max Müller. The academic study of Islam also developed, and by the mid-19th century Oriental studies had become a well-established academic discipline in most European countries, especially those with imperial interests in the region. As scholarly study expanded, however, so did racist attitudes and stereotypes of Asian peoples and cultures, which frequently extended to local Jewish and Romani communities, since they were also of Oriental origin and widely recognized as such. Scholarship was often intertwined with prejudicial racist and religious presumptions (J. Go, "'Racism' and Colonialism: Meanings of Difference and Ruling Practice in America's Pacific Empire", Qualitative Sociology 27.1, March 2004), to which the new biological sciences tended to contribute until the end of the Second World War.
20th century
The participation in academic studies by scholars from the newly independent nations of the region inevitably changed the nature of the field considerably, with the emergence of post-colonial studies and Subaltern Studies. The influence of Orientalism, in the sense used by Edward Said in his book of that name, was seen to have re-emerged and risen in prevalence in scholarship on the Middle East after the end of the Cold War. It is contended that this was partly a response to "a lacuna" in identity politics in international relations generally, and within the 'West' particularly, brought about by the absence of Soviet communism as a global adversary. The end of the Cold War ushered in an era marked by discussions of Islamist terrorism framing views on the extent to which the culture of the Arab world and of Islam is a threat to that of the West. The essence of the debate reflects a presupposition for which Orientalism has been criticized: that the Orient is defined exclusively by Islam. Such considerations occurred in the wider context of the way many Western scholars responded to international politics after the Cold War, and they were arguably heightened by the terrorist attacks of September 11, 2001.
Symbolic of that type of response to the end of the Cold War was the popularization of the clash of civilizations thesis. That particular idea of a fundamental conflict between East and West was first advanced by Bernard Lewis in his article "The Roots of Muslim Rage", which was written in 1990. Again, that was seen as a way of accounting for new forms and lines of division in international society after the Cold War. The clash of civilizations approach involved another characteristic of Orientalist thought: the tendency to see the region as being one homogenous civilization, rather than as comprising various different and diverse cultures and strands. It was an idea that was taken on more famously by Samuel P. Huntington in his 1993 article in Foreign Affairs, "The Clash of Civilizations?"
Orientalism
The term Orientalism has come to acquire negative connotations in some quarters and is interpreted to refer to the study of the East by Westerners who are shaped by the attitudes of the era of European imperialism in the 18th and the 19th centuries. When used in that sense, the term often implies prejudiced outsider-caricatured interpretations of Eastern cultures and peoples. That viewpoint was most famously articulated and propagated by Edward Said in his Orientalism (1978), a critical history of that scholarly tradition. In contrast, the term has also been used by some modern scholars to refer to writers of the colonial era who had pro-Eastern attitudes, as opposed to those who saw nothing of value in non-Western cultures.
From "Oriental studies" to "Asian studies"
Like the term Orient, Orientalism derives from the Latin word oriens (rising) and, equally likely, from the Greek word he'oros (the direction of the rising sun). "Orient" is the opposite of Occident, a term for the Western world. In terms of the Old World, Europe was considered the Occident (the West) and its farthest-known extreme the Orient (the East). From the Roman Empire to the Middle Ages, what is now in the West considered the Middle East was then considered the Orient. However, the use of the various terms and senses derived from "Orient" has greatly declined since the 20th century, especially since trans-Pacific links between Asia and America have grown and travelers from Asia usually arrive in the United States from the west.
In most North American and Australian universities, the field of Oriental studies has now been replaced by that of Asian studies. In many cases the field has been localised to specific regions, such as Middle Eastern or Near Eastern studies, South Asian studies, and East Asian studies. This reflects the fact that the Orient is not a single monolithic region but a broad area encompassing multiple civilizations. To its opponents, the generic concept of Oriental studies has lost any use it may once have had and is perceived as obstructing changes in departmental structures to reflect actual patterns of modern scholarship. In many universities, such as the University of Chicago, the faculties and institutions have been divided: the Biblical languages may be linked with theological institutes, and the study of ancient civilizations in the region may come under a different faculty from that of studies of modern periods.
In 1970, the Faculty of Oriental Studies at the Australian National University was renamed the Faculty of Asian Studies. In 2007, the Faculty of Oriental Studies at Cambridge University was renamed the Faculty of Asian and Middle Eastern Studies, and the University of Oxford followed suit in 2022, likewise renaming its Faculty of Oriental Studies the Faculty of Asian and Middle Eastern Studies. Elsewhere, names have remained the same, as at Chicago, Rome, and London (the last now referred to only by the acronym "SOAS").
Various explanations are offered for the change to "Asian studies". A growing number of professional scholars and students of Asian studies are themselves Asian or of Asian origin (such as Asian Americans). The relabeling may in some cases reflect heightened sensitivity to the term "Oriental" in a more politically correct atmosphere, although the trend began earlier: Bernard Lewis's own department at Princeton University was renamed a decade before Said wrote his book, a detail that Said gets wrong. To some, the term "Oriental" has come to be thought offensive to non-Westerners. Area studies that incorporate not only philological pursuits but identity politics may also account for the hesitation to use the term "Oriental".
Supporters of "Oriental studies" counter that the term "Asian" is just as encompassing as "Oriental", and may well originally have had the same meaning, if it derives from an Akkadian word for "East" (a more common derivation is from one or both of two Anatolian proper names). To replace one word with another is to confuse historically objectionable opinions about the East with the concept of "the East" itself. The terms Oriental/Eastern and Occidental/Western are both inclusive concepts that usefully identify large-scale cultural differences; such general concepts do not preclude or deny more specific ones.
See also
Arabist
Biblical studies
Buddhist studies
Hebraism
Hebraist
Hindu studies
History of Christianity (mentions the beginnings and spread of Christianity in the Middle East and Asia)
Iranistics
Japonism
Jewish studies
List of Islamic studies scholars
Orientalism in early modern France
Philology
Silk Road
Institutions
Americas
American Oriental Society
Oriental Club of Philadelphia
Smithsonian Institution, Freer Gallery of Art
Asia
Tōyō Bunko in Tokyo
Khuda Bakhsh Oriental Public Library in India
Europe
Institute of Oriental Studies of the Russian Academy of Sciences
Institute of Oriental Manuscripts of the Russian Academy of Sciences
International Institute for Asian Studies, Leiden University
References
Further reading
Crawley, William. "Sir William Jones: A vision of Orientalism", Asian Affairs, Vol. 27, Issue 2. (Jun. 1996), pp. 163–176.
Fleming, K.E. "Orientalism, the Balkans, and Balkan Historiography", The American Historical Review, Vol. 105, No. 4. (Oct., 2000), pp. 1218–1233.
Halliday, Fred. "'Orientalism' and Its Critics", British Journal of Middle Eastern Studies, Vol. 20, No. 2. (1993), pp. 145–163.
Irwin, Robert. For Lust of Knowing: The Orientalists and Their Enemies. London: Penguin/Allen Lane, 2006 (hardcover). Also published as Dangerous Knowledge: Orientalism and Its Discontents. New York: Overlook Press, 2006 (hardcover).
Reviewed by Philip Hensher in The Spectator, January 28, 2006.
Reviewed by Allan Massie in the Telegraph, February 6, 2006.
Reviewed by Terry Eagleton in the New Statesman, February 13, 2006.
Reviewed by Bill Saunders in The Independent, February 26, 2006.
Reviewed by Noel Malcolm in The Telegraph, February 26, 2006.
Reviewed by Maya Jasanoff in the London Review of Books, June 8, 2006.
Reviewed by Wolfgang G. Schwanitz in Frankfurter Rundschau, June 26, 2006.
Reviewed by William Grimes in the New York Times, November 1, 2006.
Reviewed by Michael Dirda in The Washington Post, November 12, 2006.
Reviewed by Lawrence Rosen in the Boston Review, January/February 2007.
Klein, Christina. Cold War Orientalism: Asia in the Middlebrow Imagination, 1945–1961. Berkeley: University of California Press, 2003 (hardcover; paperback).
Knight, Nathaniel. "Grigor'ev in Orenburg, 1851–1862: Russian Orientalism in the Service of Empire?", Slavic Review, Vol. 59, No. 1. (Spring, 2000), pp. 74–100.
Vasiliev, Leonid. "Stages of the World Historical Process: an Orientalist's View." Electronic Science and Education Journal: "Istoriya" 3:2, 10 (2012). http://history.jes.su/ Accessed: March 19, 2014.
Kontje, Todd. German Orientalisms. Ann Arbor, MI: University of Michigan Press, 2004.
Little, Douglas. American Orientalism: The United States and the Middle East Since 1945. Chapel Hill: The University of North Carolina Press, 2001 (hardcover); 2002 (paperback); London: I.B. Tauris, 2002 (new ed., hardcover).
Murti, Kamakshi P. India: The Seductive and Seduced "Other" of German Orientalism. Westport, CT: Greenwood Press, 2001 (hardcover).
Marchand, Suzanne L. German Orientalism in the Age of Empire: Religion, Race and Scholarship. Washington, D.C.: German Historical Institute; New York: Cambridge University Press, 2009 (hardback).
Edwards, Holly (ed.). Noble Dreams, Wicked Pleasures: Orientalism in America, 1870–1930. Princeton: Princeton University Press, 2000 (hardcover; paperback).
Katz, Elizabeth. "Democracy in the Middle East". Virginia Law, September 9, 2006.
Gusterin, Pavel. Первый российский востоковед Дмитрий Кантемир / First Russian Orientalist Dmitry Kantemir. Moscow, 2008.
Wokoeck, Ursula. German Orientalism: The Study of the Middle East and Islam from 1800 to 1945. London: Routledge, 2009.
Reviewed by Wolfgang G. Schwanitz in Insight Turkey, 12(2010)4, 225-7.
Lockman, Zachary. Contending Visions of the Middle East: The History and Politics of Orientalism. New York: Cambridge University Press, 2004.
Reviewed by Wolfgang G. Schwanitz in DAVO-Nachrichten, Mainz, Germany, 23(2006)8, 77–78.
External links
Institutions
Americas
School of Oriental Studies at Universidad del Salvador, Argentina
Oriental Institute of the University of Chicago
American Center for Oriental Research
The Department of Near Eastern Languages and Civilizations at Harvard University
Asia
The Institute of Oriental Culture at Tokyo University
Institute for Research in Humanities at the Kyoto University
Europe
Wydział Orientalistyczny UW – Strona Wydziału Orientalistycznego Uniwersytetu Warszawskiego The Faculty of Oriental Studies at the University of Warsaw
Asiatica Association, Italy
Faculty of Oriental Studies at the University of Oxford
Oriental Collections at Bulgarian National Library
Uppsala University in Sweden
Ancient Indian & Iran Trust, London UK
Articles
Dictionary of the History of Ideas: China in Western Thought and Culture
John E. Hill, translation in his e-edition of Hou Hanshu
Edward Said's Splash The impact of Edward Said's book on Middle Eastern studies, by Martin Kramer.
Frontier Orientalism — an article by Austrian anthropologist Andre Gingrich
Edward Said and the Production of Knowledge
Orientalism as a tool of Colonialism
Area studies
Asian studies
Revolution

In political science, a revolution (from Latin revolutio, 'a turn around') is a rapid, fundamental transformation of a society's class, state, ethnic or religious structures. According to sociologist Jack Goldstone, all revolutions contain "a common set of elements at their core: (a) efforts to change the political regime that draw on a competing vision (or visions) of a just order, (b) a notable degree of informal or formal mass mobilization, and (c) efforts to force change through noninstitutionalized actions such as mass demonstrations, protests, strikes, or violence."
Revolutions have occurred throughout human history and varied in their methods, durations and outcomes. Some revolutions started with peasant uprisings or guerrilla warfare on the periphery of a country; others started with urban insurrection aimed at seizing the country's capital city. Revolutions can be inspired by the rising popularity of certain political ideologies, moral principles, or models of governance such as nationalism, republicanism, egalitarianism, self-determination, human rights, democracy, liberalism, fascism, or socialism.
A regime may become vulnerable to revolution due to a recent military defeat, or economic chaos, or an affront to national pride and identity, or pervasive repression and corruption. Revolutions typically trigger counter-revolutions which seek to halt revolutionary momentum, or to reverse the course of an ongoing revolutionary transformation.
Notable revolutions in the recent centuries include the American Revolution (1775–1783), French Revolution (1789–1799), Haitian Revolution (1791–1804), Spanish American wars of independence (1808–1826), Revolutions of 1848 in Europe, Mexican Revolution (1910–1920), Revolutions of 1917–1923 in Russia and worldwide, Xinhai Revolution in 1911, decolonization of Africa from the mid-1950s to 1975, Cuban Revolution in 1959, Iranian Revolution and Nicaraguan Revolution in 1979, worldwide Revolutions of 1989, and Arab Spring in the early 2010s.
Etymology
The French noun revolucion traces back to the 13th century, and the English equivalent "revolution" to the late 14th century. The word was limited then to mean the revolving motion of celestial bodies. "Revolution" in the sense of abrupt change in a social order was first recorded in the mid-15th century. By 1688, the political meaning of the word was familiar enough that the replacement of James II with William III was termed the "Glorious Revolution".
Definition
"Revolution" is now employed most often to denote a change in social and political institutions. Jeff Goodwin offers two definitions. First, a broad one, including "any and all instances in which a state or a political regime is overthrown and thereby transformed by a popular movement in an irregular, extraconstitutional or violent fashion". Second, a narrow one, in which "revolutions entail not only mass mobilization and regime change, but also more or less rapid and fundamental social, economic or cultural change, during or soon after the struggle for state power".
Jack Goldstone defines a revolution as follows:
"[Revolution is] an effort to transform the political institutions and the justifications for political authority in society, accompanied by formal or informal mass mobilization and noninstitutionalized actions that undermine authorities. This definition is broad enough to encompass events ranging from the relatively peaceful revolutions that toppled communist regimes to the violent Islamic revolution in Afghanistan. At the same time, this definition is strong enough to exclude coups, revolts, civil wars, and rebellions that make no effort to transform institutions or the justification for authority." Goldstone's definition excludes peaceful transitions to democracy through plebiscite or free elections, as occurred in Spain after the death of Francisco Franco, or in Argentina and Chile after the demise of their military juntas. Early scholars often debated the distinction between revolution and civil war. They also questioned whether a revolution is purely political (i.e., concerned with the restructuring of government) or whether "it is an extensive and inclusive social change affecting all the various aspects of the life of a society, including the economic, religious, industrial, and familial as well as the political".
Types
There are numerous typologies of revolution in the social science literature. Alexis de Tocqueville differentiated between:
political revolutions, sudden and violent revolutions that seek not only to establish a new political system but to overhaul an entire society, and;
slow but sweeping transformations of the entire society that take several generations to bring about (such as changes in religion).
One of the Marxist typologies divides revolutions into:
pre-capitalist
early bourgeois
bourgeois
bourgeois-democratic
early proletarian
socialist
Charles Tilly, a modern scholar of revolutions, differentiated between:
coup d'état (a top-down seizure of power), e.g., Poland, 1926
civil war
revolt, and
"great revolution" (a revolution that transforms economic and social structures as well as political institutions, such as the French Revolution of 1789, Russian Revolution of 1917, or Islamic Revolution of Iran in 1979).
Mark Katz identified six forms of revolution:
rural revolution
urban revolution
coup d'état, e.g., Egypt, 1952
revolution from above, e.g., Mao Zedong's Great Leap Forward of 1958
revolution from without, e.g., the Allied invasions of Italy in 1943 and of Germany in 1945.
revolution by osmosis, e.g., the gradual Islamization of several countries.
These categories are not mutually exclusive; the Russian Revolution of 1917 began with an urban revolution to depose the Czar, followed by a rural revolution, followed by the Bolshevik coup in November. Katz also cross-classified revolutions as follows:
Central: countries, usually Great Powers, which play a leading role in a revolutionary wave; e.g., the USSR, Nazi Germany, Iran since 1979.
Aspiring revolutions, which follow the Central revolution
Subordinate or puppet revolutions
Rival revolutions, e.g., Yugoslavia after 1948, and China after 1960
A further dimension to Katz's typology is that revolutions are either against (anti-monarchy, anti-dictatorial, anti-communist, anti-democratic) or for (pro-fascism, pro-communism, pro-nationalism, etc.). In the latter cases, a transition period is generally necessary to decide which direction to take to achieve the desired form of government. Other types of revolution, created for other typologies, include proletarian or communist revolutions (inspired by the ideas of Marxism that aim to replace capitalism with communism); failed or abortive revolutions (that are not able to secure power after winning temporary victories or amassing large-scale mobilizations); or violent vs. nonviolent revolutions.

The term revolution has also been used to denote great changes outside the political sphere. Such revolutions, often labeled social revolutions, are recognized as major transformations in a society's culture, philosophy, or technology, rather than in its political system. Some social revolutions are global in scope, while others are limited to single countries. Commonly cited examples of social revolution are the Industrial Revolution, Scientific Revolution, Commercial Revolution, and Digital Revolution. These revolutions also fit the "slow revolution" type identified by Tocqueville.
Studies of revolution
Political and socioeconomic revolutions have been studied in many social sciences, particularly sociology, political science and history. Scholars of revolution differentiate four generations of theoretical research on the subject of revolution. Theorists of the first generation, including Gustave Le Bon, Charles A. Ellwood, and Pitirim Sorokin, were mainly descriptive in their approach, and their explanations of the phenomena of revolutions were usually related to social psychology, such as Le Bon's crowd psychology theory. The second generation sought to develop detailed frameworks, grounded in social behavior theory, to explain why and when revolutions arise. Their work can be divided into three categories: psychological, sociological and political.
The writings of Ted Robert Gurr, Ivo K. Feierbrand, Rosalind L. Feierbrand, James A. Geschwender, David C. Schwartz, and Denton E. Morrison fall into the first category. They utilized theories of cognitive psychology and frustration-aggression theory to link the cause of revolution to the state of mind of the masses. While these theorists varied in their approach as to what exactly incited the people to revolt (e.g., modernization, recession, or discrimination), they agreed that the primary cause for revolution was a widespread frustration with the socio-political situation.
The second group, composed of academics such as Chalmers Johnson, Neil Smelser, Bob Jessop, Mark Hart, Edward A. Tiryakian, and Mark Hagopian, drew on the work of Talcott Parsons and the structural-functionalist theory in sociology. They saw society as a system in equilibrium between various resources, demands, and subsystems (political, cultural, etc.). As in the psychological school, they differed in their definitions of what causes disequilibrium, but agreed that it is a state of severe disequilibrium that is responsible for revolutions.
The third group, including writers such as Charles Tilly, Samuel P. Huntington, Peter Ammann, and Arthur L. Stinchcombe, followed a political science path and looked at pluralist theory and interest group conflict theory. Those theories view events as outcomes of a power struggle between competing interest groups. In such a model, revolutions happen when two or more groups cannot come to terms within the current political system's normal decision-making process, and when they possess the required resources to employ force in pursuit of their goals.
The second-generation theorists regarded the development of revolutionary situations as a two-step process: "First, a pattern of events arises that somehow marks a break or change from previous patterns. This change then affects some critical variable—the cognitive state of the masses, the equilibrium of the system, or the magnitude of conflict and resource control of competing interest groups. If the effect on the critical variable is of sufficient magnitude, a potentially revolutionary situation occurs." Once this point is reached, a negative incident (a war, a riot, a bad harvest) that in the past might not have been enough to trigger a revolt, will now be enough. However, if authorities are cognizant of the danger, they can still prevent revolution through reform or repression.
In his influential 1938 book The Anatomy of Revolution, historian Crane Brinton established a convention by choosing four major political revolutions—England (1642), Thirteen Colonies of America (1775), France (1789), and Russia (1917)—for comparative study. He outlined what he called their "uniformities", although the American Revolution deviated somewhat from the pattern. As a result, most later comparative studies of revolution substituted China (1949) in their lists, but they continued Brinton's practice of focusing on four.
In subsequent decades, scholars began to classify hundreds of other events as revolutions (see List of revolutions and rebellions). Their expanded notion of revolution engendered new approaches and explanations. The theories of the second generation came under criticism for being too limited in geographical scope, and for lacking a means of empirical verification. Also, while second-generation theories may have been capable of explaining a specific revolution, they could not adequately explain why revolutions failed to occur in other societies experiencing very similar circumstances.
The criticism of the second generation led to the rise of a third generation of theories, put forth by writers such as Theda Skocpol, Barrington Moore, Jeffrey Paige, and others expanding on the old Marxist class-conflict approach. They turned their attention to "rural agrarian-state conflicts, state conflicts with autonomous elites, and the impact of interstate economic and military competition on domestic political change." In particular, Skocpol's States and Social Revolutions (1979) was a landmark book of the third generation. Skocpol defined revolution as "rapid, basic transformations of society's state and class structures ... accompanied and in part carried through by class-based revolts from below", and she attributed revolutions to "a conjunction of multiple conflicts involving state, elites and the lower classes".
In the late 1980s, a new body of academic work started questioning the dominance of the third generation's theories. The old theories were also dealt a significant blow by a series of revolutionary events that they could not readily explain. The Iranian and Nicaraguan Revolutions of 1979, the 1986 People Power Revolution in the Philippines, and the 1989 Autumn of Nations in Europe, Asia and Africa saw diverse opposition movements topple seemingly powerful regimes amidst popular demonstrations and mass strikes in nonviolent revolutions.
For some historians, the traditional paradigm of revolutions as class struggle-driven conflicts centered in Europe, and involving a violent state versus its discontented people, was no longer sufficient to account for the multi-class coalitions toppling dictators around the world. Consequently, the study of revolutions began to evolve in three directions. As Goldstone describes it, scholars of revolution:
Extended the third generation's structural theories to a more heterogeneous set of cases, "well beyond the small number of 'great' social revolutions".
Called for greater attention to conscious agency and contingency in understanding the course and outcome of revolutions.
Observed how studies of social movements—for women's rights, labor rights, and U.S. civil rights—had much in common with studies of revolution and could enrich the latter. Thus, "a new literature on 'contentious politics' has developed that attempts to combine insights from the literature on social movements and revolutions to better understand both phenomena."
The fourth generation increasingly turned to quantitative techniques when formulating its theories. Political science research moved beyond individual or comparative case studies towards large-N statistical analysis assessing the causes and implications of revolution. The initial fourth-generation books and journal articles generally relied on the Polity data series on democratization. Such analyses, like those by A. J. Enterline, Zeev Maoz, and Edward D. Mansfield and Jack Snyder, identified a revolution by a significant change in the country's score on Polity's autocracy-to-democracy scale.
Since the 2010s, scholars like Jeff Colgan have argued that the Polity data series—which evaluates the degree of democratic or autocratic authority in a state's governing institutions based on the openness of executive recruitment, constraints on executive authority, and political competition—is inadequate because it measures democratization, not revolution, and doesn't account for regimes which come to power by revolution but fail to change the structure of the state and society sufficiently to yield a notable difference in the Polity score. Instead, Colgan offered a new data set to single out governments that "transform the existing social, political, and economic relationships of the state by overthrowing or rejecting the principal existing institutions of society." This data set has been employed to make empirically based contributions to the literature on revolution by finding links between revolution and the likelihood of international disputes.
Revolutions have been further examined from an anthropological perspective. Drawing on Victor Turner's writings on ritual and performance, Bjorn Thomassen suggested that revolutions can be understood as "liminal" moments: modern political revolutions very much resemble rituals and can therefore be studied within a process approach. This would imply not only a focus on political behavior "from below", but also a recognition of moments where "high and low" are relativized, subverted, or made irrelevant, and where the micro and macro levels fuse together in critical conjunctions. Economist Douglass North raised a note of caution about revolutionary change, observing that it "is never as revolutionary as its rhetoric would have us believe". While the "formal rules" of laws and constitutions can be changed virtually overnight, the "informal constraints" such as institutional inertia and cultural inheritance do not change quickly and thereby slow down the societal transformation. According to North, the tension between formal rules and informal constraints is "typically resolved by some restructuring of the overall constraints—in both directions—to produce a new equilibrium that is far less revolutionary than the rhetoric."
See also
Age of Revolution
Armed Insurrection
Classless society
Counterrevolution
List of revolutions and rebellions
Passive revolution
Political warfare
Preference falsification
Psychological warfare
Rebellion
Reformism
Revolutionary wave
Right of revolution
Social movement
Subversion
User revolt
References
Bibliography
Peter Kropotkin (1906), Memoirs of a revolutionist. London: Swan Sonnenschein & Co., Ltd.
Further reading
Beissinger, Mark R. (2022). The Revolutionary City: Urbanization and the Global Transformation of Rebellion. Princeton University Press.
Beissinger, Mark R. (2024). "The Evolving Study of Revolution". World Politics.
Goldstone, Jack A. (1982). "The Comparative and Historical Study of Revolutions". Annual Review of Sociology. 8: 187–207.
External links
Arendt, Hannah (1963). On Revolution. Penguin Classics, new ed., February 8, 1991. IEP.UTM.edu.
Comparative politics
Social concepts
Social conflict
Earth science

Earth science or geoscience includes all fields of natural science related to the planet Earth. This is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science but with a much older history.
Geology
Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time.
Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks.
Earth's interior
Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle which is heated by the radioactive decay of heavy elements. The mantle is not quite solid and consists of magma which is in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction.
Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere are created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere return to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes.
Atmospheric science
Atmospheric science initially developed in the late-19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change.
The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up Earth's atmosphere. 75% of the mass in the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, and small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to catch and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field—created by the internal motions of the core—produces the magnetosphere which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere.
Hydrology
Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. Study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology.

Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century.

Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans.

Glaciology is the study of the cryosphere, including glaciers and coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere.
Ecology
Ecology is the study of the biosphere. This includes the study of nature and of how living things interact with the Earth and one another and the consequences of that. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature.
Physical geography
Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system. It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment.
Methodology
Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit interesting locations (e.g. Antarctica or hot spot island chains) to study earth phenomena.

A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout its long history.
Earth's spheres
In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere, corresponding to rocks, water, air, and life. This concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere.
The following fields of science are generally categorized within the Earth sciences:
Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology.
Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology.
Geophysics and geodesy investigate the shape of the Earth, its reaction to forces and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity.
Geochemistry is defined as the study of the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the composition, structure, processes, and other physical aspects of the Earth. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry.
Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere). Major subdivisions in this field of study include edaphology and pedology.
Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from the study of other planets in the Solar System, Earth being the only one teeming with life.
Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involves all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry."
Glaciology covers the icy parts of the Earth (or cryosphere).
Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics.
Earth science breakup
Atmosphere
Atmospheric chemistry
Geography
Climatology
Meteorology
Hydrometeorology
Paleoclimatology
Biosphere
Biogeochemistry
Biogeography
Ecology
Landscape ecology
Geoarchaeology
Geomicrobiology
Paleontology
Palynology
Micropaleontology
Hydrosphere
Hydrology
Hydrogeology
Limnology (freshwater science)
Oceanography (marine science)
Chemical oceanography
Physical oceanography
Biological oceanography (marine biology)
Geological oceanography (marine geology)
Paleoceanography
Lithosphere (geosphere)
Geology
Economic geology
Engineering geology
Environmental geology
Forensic geology
Historical geology
Quaternary geology
Planetary geology and planetary geography
Sedimentology
Stratigraphy
Structural geology
Geography
Human geography
Physical geography
Geochemistry
Geomorphology
Geophysics
Geochronology
Geodynamics (see also Tectonics)
Geomagnetism
Gravimetry (also part of Geodesy)
Seismology
Glaciology
Hydrogeology
Mineralogy
Crystallography
Gemology
Petrology
Petrophysics
Speleology
Volcanology
Pedosphere
Geography
Soil science
Edaphology
Pedology
Systems
Earth system science
Environmental science
Geography
Human geography
Physical geography
Gaia hypothesis
Systems ecology
Systems geology
Others
Geography
Cartography
Geoinformatics (GIScience)
Geostatistics
Geodesy and Surveying
Remote Sensing
Hydrography
Nanogeoscience
See also
American Geosciences Institute
Earth sciences graphics software
Four traditions of geography
Glossary of geology terms
List of Earth scientists
List of geoscience organizations
List of unsolved problems in geoscience
Making North America
National Association of Geoscience Teachers
Solid-earth science
Science tourism
Structure of the Earth
References
Sources
Further reading
Allaby M., 2008. Dictionary of Earth Sciences. Oxford University Press.
Korvin G., 1998. Fractal Models in the Earth Sciences. Elsevier.
Tarbuck E. J., Lutgens F. K., and Tasa D., 2002. Earth Science. Prentice Hall.
External links
Earth Science Picture of the Day, a service of Universities Space Research Association, sponsored by NASA Goddard Space Flight Center.
Geoethics in Planetary and Space Exploration.
Geology Buzz: Earth Science
Planetary science
Science-related lists
Linguistic reconstruction

Linguistic reconstruction is the practice of establishing the features of an unattested ancestor language of one or more given languages. There are two kinds of reconstruction:
Internal reconstruction uses irregularities in a single language to make inferences about an earlier stage of that language – that is, it is based on evidence from that language alone.
Comparative reconstruction, usually referred to just as reconstruction, establishes features of the ancestor of two or more related languages, belonging to the same language family, by means of the comparative method. A language reconstructed in this way is often referred to as a proto-language (the common ancestor of all the languages in a given family); examples include Proto-Indo-European and Proto-Dravidian.
Texts discussing linguistic reconstruction commonly preface reconstructed forms with an asterisk (*) to distinguish them from attested forms.
An attested word from which a root in the proto-language is reconstructed is a reflex. More generally, a reflex is the known derivative of an earlier form, which may be either attested or reconstructed. A reflex that is predictable from the reconstructed history of the language is a 'regular' reflex. Reflexes of the same source are cognates.
Methods
First, languages that are thought to have arisen from a common proto-language must meet certain criteria in order to be grouped together, a process called subgrouping. Since this grouping is based purely on linguistics, manuscripts and other historical documentation should be analyzed to accomplish this step. However, one must not assume that the delineations of linguistics always align with those of culture and ethnicity. One of the criteria is that the grouped languages usually exemplify shared innovation, meaning that the languages must show common changes made throughout history. In addition, most grouped languages show shared retention: features that, rather than changing, have stayed the same in both languages.
Because linguistics, as in other scientific areas, seeks to reflect simplicity, an important principle in the linguistic reconstruction process is to generate the least possible number of phonemes that correspond to available data. This principle is again reflected when choosing the sound quality of phonemes, as the one which results in the fewest changes (with respect to the data) is preferred.
Comparative Reconstruction makes use of two rather general principles: The Majority Principle and the Most Natural Development Principle. The Majority Principle is the observation that if a cognate set displays a certain pattern (such as a repeating letter in specific positions within a word), it is likely that this pattern was retained from its mother language. The Most Natural Development Principle states that some alterations in languages, diachronically speaking, are more common than others. There are four key tendencies:
The final vowel in a word may be omitted.
Voiceless sounds, often between vowels, become voiced.
Phonetic stops become fricatives.
Consonants become voiceless at the end of words.
Sound construction
The Majority Principle is applied in identifying the most likely pronunciation of the predicted etymon, the original word from which the cognates originated. The Most Natural Development Principle describes the general directions in which languages appear to change and so one can search for those indicators. For example, from the words cantar (Spanish) and chanter (French), one may argue that because phonetic stops generally become fricatives, the cognate with the stop [k] is older than the cognate with the fricative [ʃ] and so the former is most likely to more closely resemble the original pronunciation.
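The Majority Principle lends itself to a simple mechanical illustration. The following Python sketch is a toy, not a real reconstruction tool: it assumes the cognates have already been aligned symbol by symbol (itself a hard problem in practice), the cognate set in the example is hypothetical, and the code ignores the Most Natural Development Principle and the regular sound correspondences a linguist would also weigh.

```python
from collections import Counter

def majority_protoform(cognates: list[str]) -> str:
    """Propose a naive proto-form by majority vote at each aligned position."""
    length = max(len(word) for word in cognates)
    # Pad shorter words with '-' so every word contributes to every position;
    # '-' marks a gap (e.g. a dropped final vowel).
    padded = [word.ljust(length, "-") for word in cognates]
    proto = []
    for column in zip(*padded):
        symbol, _count = Counter(column).most_common(1)[0]
        if symbol != "-":  # a majority gap means the proto-form lacks that segment
            proto.append(symbol)
    return "*" + "".join(proto)  # the asterisk marks a reconstructed, unattested form

if __name__ == "__main__":
    # Hypothetical aligned cognates: three of four retain the initial stop [k],
    # so the vote reconstructs *k-, treating the fricative as the innovation.
    print(majority_protoform(["kantar", "kantare", "ʃante-", "kantar"]))  # *kantar
```

On this toy data the vote keeps the stop [k], the same conclusion the Most Natural Development Principle reaches for the cantar/chanter pair above, since stops becoming fricatives is the expected direction of change.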
See also
References
Sources
Anthony Fox, Linguistic Reconstruction: An Introduction to Theory and Method (Oxford University Press, 1995).
George Yule, The Study of Language, 7th ed. (Cambridge University Press, 2019).
Henry M. Hoenigswald, Language Change and Linguistic Reconstruction (University of Chicago Press, 1960).
Historical linguistics
Reconstructed languages
Germanic paganism

Germanic paganism or Germanic religion refers to the traditional, culturally significant religion of the Germanic peoples. With a chronological range of at least one thousand years in an area covering Scandinavia, the British Isles, modern Germany, the Netherlands, and at times other parts of Europe, the beliefs and practices of Germanic paganism varied. Scholars typically assume some degree of continuity between Roman-era beliefs and those found in Norse paganism, as well as between Germanic religion and reconstructed Indo-European religion and post-conversion folklore, though the precise degree and details of this continuity are subjects of debate. Germanic religion was influenced by neighboring cultures, including that of the Celts, the Romans, and, later, by the Christian religion. Very few sources exist that were written by pagan adherents themselves; instead, most were written by outsiders and can thus present problems for reconstructing authentic Germanic beliefs and practices.
Some basic aspects of Germanic belief can be reconstructed, including the existence of one or more origin myths, the existence of a myth of the end of the world, a general belief in the inhabited world being a "middle-earth", as well as some aspects of belief in fate and the afterlife. The Germanic peoples believed in a multitude of gods, and in other supernatural beings such as jötnar (often glossed as giants), dwarfs, elves, and dragons. Roman-era sources, using Roman names, mention several important male gods, as well as several goddesses such as Nerthus and the matronae. Early medieval sources identify a pantheon consisting of the gods *Wodanaz (Odin), *Thunraz (Thor), *Tiwaz (Tyr), and *Frijjō (Frigg), as well as numerous other gods, many of whom are only attested from Norse sources (see Proto-Germanic folklore).
Textual and archaeological sources allow the reconstruction of aspects of Germanic ritual and practice. These include well-attested burial practices, which likely had religious significance, such as rich grave goods and the burial in ships or wagons. Wooden carved figures that may represent gods have been discovered in bogs throughout northern Europe, and rich sacrificial deposits, including objects, animals, and human remains, have been discovered in springs, bogs, and under the foundations of new structures. Evidence for sacred places includes not only natural locations such as sacred groves but also early evidence for the construction of structures such as temples and the worship of standing poles in some places. Other known Germanic religious practices include divination and magic, and there is some evidence for festivals and the existence of priests.
Subject and terminology
Definition
Germanic religion is principally defined as the religious traditions of speakers of Germanic languages (the Germanic peoples). The term "religion" in this context is itself controversial, Bernhard Maier noting that it "implies a specifically modern point of view, which reflects the modern conceptual isolation of 'religion' from other aspects of culture". Never a unified or codified set of beliefs or practices, Germanic religion showed strong regional variations, and Rudolf Simek writes that it is better to refer to "Germanic religions". In many contact areas (e.g. Rhineland and eastern and northern Scandinavia), Germanic paganism was similar to neighboring religions such as those of the Slavs, Celts, or Finnic peoples. The use of the qualifier "Germanic" (e.g. "Germanic religion" and its variants) remains common in German-language scholarship, but is less commonly used in English and other scholarly languages, where scholars usually specify which branch of paganism is meant (e.g. Norse paganism or Anglo-Saxon paganism). The term "Germanic religion" is sometimes applied to practices dating to as early as the Stone Age or Bronze Age, but its use is more generally restricted to the time period after the Germanic languages had become distinct from other Indo-European languages (early Iron Age). Germanic paganism covers a period of around one thousand years in terms of written sources, from the first reports in Roman sources to the final conversion to Christianity.
Continuity
Because of the amount of time and space covered by the term "Germanic religion", controversy exists as to the degree of continuity of beliefs and practices between the earliest attestations in Tacitus and the later attestations of Norse paganism from the high Middle Ages. Many scholars argue for continuity, seeing evidence of commonalities between the Roman, early medieval, and Norse attestations, while many other scholars are skeptical. The majority of Germanic gods attested by name during the Roman period cannot be related to a later Norse god; many names attested in the Nordic sources are similarly without any known non-Nordic equivalents. The much higher number of sources on Scandinavian religion has led to a methodologically problematic tendency to use Scandinavian material to complete and interpret the much more sparsely attested information on continental Germanic religion.
Most scholars accept some form of continuity between Indo-European and Germanic religion, but the degree of continuity is a subject of controversy. Jens Peter Schjødt writes that while many scholars view comparisons of Germanic religion with other attested Indo-European religions positively, "just as many, or perhaps even more, have been sceptical". While supportive of Indo-European comparison, Schjødt notes that the "dangers" of comparison are taking disparate elements out of context and arguing that myths and mythical structures found around the world must be Indo-European just because they appear in multiple Indo-European cultures. Bernhard Maier argues that similarities with other Indo-European religions do not necessarily result from a common origin, but can also be the result of convergence.
Continuity also concerns the question of whether popular, post-conversion beliefs and practices (folklore) found among Germanic speakers up to the modern day reflect a continuity with earlier Germanic religion. Earlier scholars, beginning with Jacob Grimm, believed that modern folklore was of ancient origin and had changed little over the centuries, which allowed the use of folklore and fairy tales as sources of Germanic religion. These ideas later came under the influence of völkisch ideology, which stressed the organic unity of a Germanic "national spirit", as expressed in Otto Höfler's "Germanic continuity theory". As a result, the use of folklore as a source went out of fashion after World War II, especially in Germany, but has experienced a revival since the 1990s in Nordic scholarship. Today, scholars are cautious in their use of folkloric material, keeping in mind that most was collected long after the conversion and the advent of writing. Areas where continuity can be noted include agrarian rites and magical ideas, as well as the root elements of some folktales.
Sources
Sources on Germanic religion can be divided between primary sources and secondary sources. Primary sources include texts, structures, place names, personal names, and objects that were created by devotees of the religion; secondary sources are normally texts that were written by outsiders.
Primary sources
Examples of primary sources include some Latin alphabet and Runic inscriptions, as well as poetic texts such as the Merseburg Charms and heroic texts that may date from pagan times, but were written down by Christians. The poems of the Edda, while pagan in origin, continued to circulate orally in a Christian context before being written down, which makes an application to pre-Christian times difficult. In contrast, pre-Christian images such as on bracteates, gold foil figures, and rune and picture stones are direct attestations of Germanic religion. The interpretation of these images is not always immediately obvious. Archaeological evidence is also extensive, including evidence from burials and sacrificial sites. Ancient votive altars from the Rhineland often contain inscriptions naming gods with Germanic or partially Germanic names.
Secondary sources
Most textual sources on Germanic religion were written by outsiders. The chief textual source for Germanic religion in the Roman period is Tacitus's Germania. There are problems with Tacitus's work, however, as it is unclear how much he really knew about the Germanic peoples he described and because he employed numerous topoi dating back to Herodotus that were used when describing a barbarian people. Tacitus's reliability as a source is limited by his rhetorical tendencies, since one of the purposes of Germania was to present his Roman compatriots with an example of the virtues he believed they lacked. Julius Caesar, Procopius, and other ancient authors also offer some information on Germanic religion.
Textual sources for post-Roman continental Germanic religion were written by Christian authors: some of the gods of the Lombards are described in the 7th-century Origo gentis Langobardorum ("Origin of the Lombard People"), while a small amount of information on the religion of the pagan Franks can be found in Gregory of Tours's late 6th-century Historia Francorum ("History of the Franks"). An important source for the pre-Christian religion of the Anglo-Saxons is Bede's Ecclesiastical History of the English People (c. 731). Other sources include historians such as Jordanes (6th century CE) and Paul the Deacon (8th century), as well as saint lives and Christian legislation against various practices.
Textual sources for Scandinavian religion are much more extensive. They include the aforementioned poems of the Poetic Edda, Eddic poetry found in other sources, the Prose Edda, which is usually attributed to the Icelander Snorri Sturluson (13th century CE), Skaldic poetry, poetic kennings with mythological content, Snorri's Heimskringla, the Gesta Danorum of Saxo Grammaticus (12th-13th century CE), Icelandic historical writing and sagas, as well as outsider sources such as the report on the Rus' made by the Arab traveler Ahmad ibn Fadlan (10th century), the Gesta Hammaburgensis ecclesiae pontificum by bishop Adam of Bremen (11th century CE), and various saints' lives.
Outside influences and syncretism
Germanic religion has been influenced by the beliefs of other cultures. Celtic and Germanic peoples were in close contact in the first millennium BCE, and evidence for Celtic influence on Germanic religion is found in religious vocabulary. This includes, for instance, the name of the deity *Þun(a)raz (Thor), which is identical to Celtic *Toranos (Taranis), the Germanic name of the runes (Celtic *rūna 'secret, magic'), and the Germanic name for the sacred groves, *nemeđaz (Celtic nemeton). Evidence for further close religious contacts is found in the Roman-era Rhineland goddesses known as matronae, which display both Celtic and Germanic names. During the Viking Age, there is evidence for continued Irish mythological and Insular Celtic influence on Norse religion.
During the Roman period, Germanic gods were equated with Roman gods and worshipped with Roman names in contact zones, a process known as interpretatio Romana; later, Germanic names were also applied to Roman gods.
This was done to better understand one another's religions as well as to syncretize elements of each religion. This resulted in various aspects of Roman worship and iconography being adopted among the Germanic peoples, including those living at some distance from the Roman frontier.
In later centuries, Germanic religion was also influenced by Christianity. There is evidence for the appropriation of Christian symbolism on gold bracteates and possibly in the understanding of the roles of particular gods. The Christianization of the Germanic peoples was a long process during which there are many textual and archaeological examples of the co-existence and sometimes mixture of pagan and Christian worship and ideas. Christian sources frequently equate Germanic gods with demons and forms of the devil.
Cosmology
Creation myth
It is likely that multiple creation myths existed among Germanic peoples. Creation myths are not attested for the continental Germanic peoples or Anglo-Saxons; Tacitus includes the story of Germanic tribes' descent from the gods Tuisto (or Tuisco), who is born from the earth, and Mannus (Germania chapter 2), resulting in a division into three or five Germanic subgroups. Tuisto appears to mean "twin" or "double-being", suggesting that he was a hermaphroditic being capable of impregnating himself. These gods are only attested in Germania. It is not possible to decide based on Tacitus's report whether the myth was meant to describe an origin of the gods or of humans. Tacitus also includes a second myth: the Semnones believed that they originated in a sacred grove of fetters where a particular god dwelled (Germania chapter 39, for more on this see "Sacred trees, groves, and poles" below).
The only Nordic comprehensive origin myth is provided by the Prose Edda book Gylfaginning. According to Gylfaginning, the first being was the giant Ymir, who was followed by the cow Auðumbla, eventually leading to the birth of Odin and his two brothers. The brothers kill Ymir and make the world out of his body, before finally making the first man and woman out of trees (Ask and Embla). Some scholars suspect that Gylfaginning had been compiled from various contradictory sources, with some details from those sources having been left out. Besides Gylfaginning, the most important sources on Nordic creation myths are the Eddic poems Vǫluspá, Vafþrúðnismál, and Grímnismál. The 9th-century Old High German Wessobrunn Prayer begins with a series of negative pairs to describe the time before creation that show similarity to a number of Nordic descriptions of the time before the world, suggesting an orally transmitted formula.
There may be a continuity between Tacitus's account of Tuisto and Mannus and the Gylfaginning account of the creation of the world. The name Tuisto, if it means 'twin' or 'double-being', could connect him to the name of the primordial being Ymir, whose name probably has a similar meaning. On the other hand, the form "Tuisco" may suggest a connection to Tyr. Similarly, both myths have a genealogy consisting of a grandfather, a father, and then three sons. Ymir's name is etymologically connected to the Sanskrit Yama and Iranian Yima, while the creation of the world from Ymir's body is paralleled by the creation of the world from the primordial being Purusha in Indic mythology, suggesting not only a Proto-Germanic origin for Ymir but an even older Indo-European origin (see Indo-European cosmogony).
Myth of the end of the world
There is evidence of a myth of the end of the world in Germanic mythology, which can be reconstructed in very general terms from the surviving sources. The best known is the myth of Ragnarök, attested from Old Norse sources, which involves a war between the gods and the beings of chaos, leading to the destruction of almost all gods, giants, and living things in a cataclysm of fire. It is followed by a rebirth of the world. The notion of the world's destruction by fire in the southern Germanic area seems confirmed by the existence of the Old High German word Muspilli (probably "world conflagration") referring to the end of the world; however, it is possible that this aspect derives from Christian influence. Scholarship on Ragnarök tends to either argue that it is a myth with composite, partially non-Scandinavian origins, that it has Indo-European parallels and thus origins, or that it derives from Christian influence.
Physical cosmos
Information on Germanic cosmology is only provided in Nordic sources, but there is evidence for considerable continuity of beliefs despite variation over time and space. Scholarship is marked by disagreement about whether Snorri Sturluson's Edda is a reliable source for pre-Christian Norse cosmology, as Snorri undoubtedly imposed an ordered, Christian worldview on his material.
Midgard ("dwelling place in the middle") is used to refer to the inhabited world or a barrier surrounding the inhabited world in Norse mythology. The term is first attested as in Gothic with Wulfila's translation of the bible (c. 370 CE), and has cognates in Saxon, Old English, and Old High German. It is thus probably an old Germanic designation. In the Prose Edda, Midgard also seems to be the part of the world inhabited by the gods. The dwelling place of the gods themselves is known as Asgard, while the giants dwell in lands sometimes referred to as Jötunheimar, outside of Midgard. The ash tree Yggdrasill is at the center of the world, and propped up the heavens in the same way as the Saxon pillar Irminsul was said to. The world of the dead (Hel) seems to have been underground, and it is possible that the realm of the gods was originally subterranean as well. The Norse imagined the inhabited world to be surrounded by a sort of dragon or serpent, Jörmungandr; although only explicitly attested in Scandinavian sources, allusions to a world-surrounding monster from southern Germany and England suggest that this concept may have been common Germanic.
Fate
Some Christian authors of the Middle Ages, such as Bede (c. 700) and Thietmar of Merseburg (c. 1000), attribute a strong belief in fate and chance to the followers of Germanic religion. Similarly, Old English, Old High German, and Old Saxon attest a word for fate, wyrd, referring to an inescapable, impersonal fate or death. While scholarship of the early 20th century took this to mean that Germanic religion was essentially fatalistic, scholars since 1969 have noted that this concept appears to have been heavily influenced by the Christianized Greco-Roman notion of fortuna fatalis ("fatal fortune") rather than reflecting Germanic belief. Nevertheless, Norse myth attests the belief that even the gods were subject to fate. While it is thus clear that older scholarship exaggerated the importance of fate in Germanic religion, the religion still had its own concept of fate. Most Norse texts dealing with fate are heroic, which probably colors their portrayal of it.
In Norse myth, fate was created by supernatural female beings called Norns, who appear either individually or as a collective and who give people their fate at birth and are somehow involved in their deaths. Other female beings, the disir and valkyries, were also associated with fate.
Afterlife
Early Germanic beliefs about the afterlife are not well known; however, the sources indicate a variety of beliefs, including belief in an underworld, continued life in the grave, a world of the dead in the sky, and reincarnation. Beliefs varied by time and place and may have been contradictory even in the same time and place. The two most important afterlives in the attested corpus were Hel and Valhalla, while additional destinations for the dead are also mentioned. A number of sources refer to Hel as the general abode of the dead.
The Old Norse proper noun Hel and its cognates in other Germanic languages are used for the Christian hell, but they originally referred to a Germanic underworld and/or afterlife location that predates Christianization. Its relation to the West Germanic verb *helan ("to hide") suggests that it may originally have referred to the grave itself; it could also suggest the idea that the realm of the dead is hidden from human view. Hel was not conceived of as a place of punishment until the high Middle Ages, when it took on some characteristics of the Christian hell. It is described as cold, dark, and in the north. Valhalla ("hall of the slain"), on the other hand, is a hall in Asgard where the illustrious dead dwell with Odin, feasting and fighting.
Old Norse material often includes the notion that the dead lived on in their graves and could sometimes come back as revenants. Several inscriptions in the Elder Futhark found on stones marking graves seem intended to prevent this. The concept of the Wild Hunt of the dead, first attested in the 11th century, is found throughout the Germanic-speaking regions.
Religiously significant numbers
In Germanic mythology, the numbers three, nine, and twelve play an important role. The symbolic importance of the number three is attested widely among many cultures, and the number twelve is also attested as significant in other cultures, meaning that foreign influence is possible. The number three often occurs as a symbol of completeness, which is probably how the frequent use in Germanic religion of triads of gods or giants should be understood. Groups of three gods are mentioned in a number of sources, including Adam of Bremen, the Nordendorf Fibula, the Old Saxon Baptismal Formula, Gylfaginning, and Þorsteins þáttr uxafóts. The number nine can be understood as three threes. Its importance is attested in both mythology and worship.
Supernatural and divine beings
Gods
The Germanic gods were a category of supernatural beings who interacted with humans, as well as with other supernatural beings such as giants (jötnar), elves, and dwarfs. The distinction between gods and other supernaturally powerful beings might not always have been clear. Unlike the Christian god, the Germanic gods were born, could die, and were unable to change the fate of the world. The gods had mostly human features, with human forms, male or female gender, and familial relationships, and lived in a society organized like human society; however, their sight, hearing, and strength were superhuman, and they possessed a superhuman ability to influence the world. Within the religion, they functioned as helpers of humans, granting *hailą ("good luck, good fortune") for correct religious observance. The adjectival form *hailagaz (English holy) is attested in all Germanic languages, including Gothic on the Ring of Pietroassa.
Based on Old Norse evidence, Germanic paganism probably had a variety of words to refer to gods. Words descended from Proto-Germanic *ansuz, the origin of the Old Norse family of gods known as the Aesir (singular Áss), are attested as names for divine beings from around the Germanic world. The earliest attestations are the name of a war goddess interpreted as "battle goddess" on a Roman inscription from Tongeren, with a second early attestation on a runic belt-buckle found at Vimose, Denmark, from around 200 CE. The historian Jordanes mentions the Latinized form anses in the Getica, the Old English rune poem attests the Old English form ōs, and personal names containing the word also exist from the area where Old High German was spoken. The Indo-European word for god, *deiuos, is only found in Old Norse, where it occurs as tívar; it mostly appears in the plural or in compound bynames.
In Norse mythology, the Aesir are one of two families of gods, the other being the Vanir; the most important gods of Norse mythology belong to the Aesir, and the term can also be used for the gods in general. The Vanir appear to have been mostly fertility gods. There is no evidence for the existence of a separate Vanir family of gods outside of Icelandic mythological texts, namely the Eddic poem Vǫluspá and Snorri Sturluson's Prose Edda and Ynglinga saga. These sources detail a mythical Æsir–Vanir War, which, however, is portrayed quite differently in the different accounts.
Giants (Jötnar)
Giants (jötnar) play a significant role in Germanic myth as preserved in Iceland, being just as important as the gods in myths of the cosmology, the creation of the world, and its end. They appear to have been various types of powerful, non-divine supernatural beings who lived in a kind of wilderness and were mostly hostile to humans and gods. They have human form and live in families, but can sometimes take on animal form. In addition to jötnar, the beings are also commonly referred to as þursar, both terms having cognates in West Germanic; jötunn is probably derived from the verb *etan ("to eat"), referring either to their strength or possibly to cannibalism as a characteristic trait of giants. Giants often have a special association with particular phenomena of nature, such as frost, mountains, water, and fire. Scholars are divided as to whether there were any religious offerings or rituals offered to giants in Germanic religion. Scholars such as Gro Steinsland and Nanna Løkka have suggested that the division between gods and giants is not actually very clear.
Elves, dwarfs and other beings
Germanic religion also included various other mythological beings, such as the monstrous wolf Fenrir, as well as elves, dwarfs, and other non-divine supernatural beings.
Elves are beings of Germanic lower mythology that are mostly male and appear as a collective. Snorri Sturluson divides the elves into two groups, the dark elves and the light elves; however, this division is not attested elsewhere. People's understanding of elves varied by time and place: in some instances they were godlike beings, in others dead ancestors, nature spirits, or demons. In Norse pagan belief, elves seem to have been worshipped to some extent. The concept of elves begins to differ between Scandinavia and the West Germanic peoples in the Middle Ages, possibly under Celtic influence. In Anglo-Saxon England, elves seem to have been potentially dangerous, powerful supernatural beings associated with woods, fields, hills, and bodies of water.
Like elves, dwarfs are beings of Germanic lower mythology. They are mostly male and imagined as a collective; however, individual named dwarfs also play an important role in Norse mythology. In Norse and German texts, dwarfs live in mountains and are known as great smiths and craftsmen. They may have originally been nature spirits or demons of death. Snorri Sturluson equates the dwarfs with a subgroup of the elves, and many high medieval German epics and some Old Norse myths give dwarfs names containing the word alb or alfr ("elf"), suggesting some confusion between the two groups. However, there is no evidence that the dwarfs were worshipped. In Anglo-Saxon England, dwarfs were potentially dangerous supernatural beings associated with madness, fever, and dementia, and have no known association with mountains.
Dragons occur in Germanic mythology, with Norse examples including Níðhöggr and the world-serpent Jörmungandr. In the (late) sources for Scandinavian religion, dragons play an important role in the mythic cosmology. It is difficult to tell how much the existing sources have been influenced by Greco-Roman and Christian ideas about dragons. Based on the native term (Old Norse ormr, related to English worm, "snake"), the early description in Beowulf, and early pictorial depictions, they were probably imagined as snake-like, of large size, able to spit poison or fire, and dwelling under the earth. Scholarly consensus is that Germanic dragons were originally more snake- or worm-like and could not fly, but that the idea of flying dragons entered from Greco-Roman culture. The medieval Germanic languages did not distinguish linguistically or conceptually between snakes and dragons in their mythology.
Pantheon
Due to the scarcity of sources and the origin of the Germanic gods over a broad period of time and in different locations, it is not possible to reconstruct a full pantheon of Germanic deities that is valid for Germanic religion everywhere; this is only possible for the last stage of Germanic religion, Norse paganism. People in different times and places would have worshiped different individual gods and groups of gods. Placename evidence containing divine names gives some indication of which gods were important in particular regions; however, such names are not well attested or researched outside Scandinavia.
The following section first includes some information on the gods attested during the Roman period, then the four main Germanic gods *Tiwaz (Tyr), *Thunraz (Thor), *Wodanaz (Odin), and *Frijjō (Frigg), who are securely attested since the early Middle Ages but were probably worshiped during Roman times, and finally some information on other gods, many of whom are only attested in Norse paganism.
Roman-era
Germanic gods with Roman names
The Roman authors Julius Caesar and Tacitus both use Roman names to describe foreign gods, but whereas Caesar claims the Germani worshiped no individual gods but only natural phenomena such as the sun, moon, and fire, Tacitus mentions a number of deities, saying that the most worshiped god is Mercury, followed by Hercules and Mars; he also mentions Isis, Odysseus, and Laertes. Scholars generally interpret Mercury as meaning Odin, Hercules as meaning Thor, and Mars as meaning Tyr. As these names are only attested much later, however, there is some doubt about these identifications, and it has been suggested that the gods Tacitus names were not worshiped by all Germanic peoples or that he has transferred information about the Gauls to the Germani.
The Germani themselves also worshiped gods with Roman names at votive altars constructed according to Roman tradition; while isolated instances of Germanic bynames (such as "Mars Thingsus") indicate that a Germanic god was intended, it is often not possible to know whether the Roman god or a Germanic equivalent is meant. Most surviving dedications are to Mercury. Female deities, on the other hand, were not given Roman names. Additionally, Germanic speakers translated the names of Roman gods into their own languages, most prominently in the Germanic days of the week. The translation of the days of the week is usually dated to the 3rd or 4th century CE; however, the Germanic names are not attested until the early Middle Ages. This late attestation causes some scholars to question the usefulness of the days of the week for reconstructing early Germanic religion.
Alcis
Tacitus mentions a divine pair of twins called the Alcis worshipped by the Nahanarvali, whom he compares to the Roman twin horsemen Castor and Pollux. These twins can be associated with the Indo-European myth of the divine twin horsemen (Dioscuri) attested in various Indo-European cultures. Among later Germanic peoples, twin founding figures such as Hengist and Horsa allude to the motif of the divine twins; Hengist and Horsa's names both mean "horse", strengthening the connection. In Scandinavia, images of divine twins are attested from the 15th century BCE until the 8th century CE, after which they disappear, apparently as a result of religious change. Norse texts contain no identifiable divine twins, though scholars have looked for parallels among gods and heroes.
Nerthus
In Germania, Tacitus mentions that the Lombards and Suebi venerated a goddess, Nerthus, and describes the rites of the goddess in some detail. At their center is a ceremonial wagon procession. Nerthus's cart is found on an unspecified island in the "ocean", where it is kept in a sacred grove and draped in white cloth. Only a priest may touch it. When the priest detects Nerthus's presence by the cart, the cart is drawn by heifers. Nerthus's cart is met with celebration and peacetime everywhere it goes, and during her procession no one goes to war and all iron objects are locked away. In time, after the goddess has had her fill of human company, the priest returns the cart to her "temple" and slaves ritually wash the goddess, her cart, and the cloth in a "secluded lake". According to Tacitus, the slaves are then immediately drowned in the lake.
The majority of modern scholars identify Nerthus as a direct etymological precursor to the Old Norse deity Njörðr, attested over a thousand years later. However, Njörðr is attested as male, leading to many proposals regarding this apparent change, such as incest motifs described among the Vanir, a group of gods to which Njörðr belongs, in Old Norse sources.
Matronae
Collectives of three goddesses known as matronae appear on numerous votive altars from the Roman province of Germania inferior, especially from Cologne, dating to the third and fourth centuries CE. The altars depict three women in non-Roman dress. About half of the surviving matronae altars can be identified as Germanic by their bynames; others have Latin or Celtic bynames. The bynames are often connected to a place or ethnic group, but a number are associated with water, and many of them seem to indicate a giving and protecting nature. Despite their frequency in the archaeological record, the matronae receive no mention in any written source.
The matronae may be connected to female deities attested in collectives from later times, such as the Norns, the disir, and the valkyries; Rudolf Simek suggests that a connection to the disir is most likely. The disir may be etymologically connected to minor Hindu deities known as dhisanās, who likewise appear in a group; this would give them an Indo-European origin. Since Jacob Grimm, scholars have sought to connect the disir with the idisi found in the Old High German First Merseburg Charm and with a conjecturally corrected place name from Tacitus; however, these connections are contested. The disir share some functions with the Norns and valkyries, and the Nordic sources suggest a close association between the three groups of Norse minor female deities. Further connections of the matronae have been proposed: the Anglo-Saxon pagan festival of Mōdraniht ("night of the mothers") mentioned by Bede has been associated with the matronae. Likewise, the poorly attested Anglo-Saxon goddesses Eostre and Rheda may be connected with the matronae.
Other female deities
Besides Nerthus, Tacitus elsewhere mentions other important female deities worshiped by the Germanic peoples, such as Tamfana by the Marsi (Annals, 1:50) and the "mother of the gods" by the Aestii (Germania, chapter 45).
In addition to the collective matronae, votive altars from Roman Germania attest a number of individual goddesses. A goddess Nehalennia is attested on numerous votive altars from the 3rd century CE on the Rhine islands of Walcheren and Noord-Beveland, as well as at Cologne. Dedicatory inscriptions to Nehalennia make up 15% of all extant dedications to gods from the Roman province Germania inferior and 50% of dedications to female deities. She appears to have been associated with trade and commerce, and was possibly a chthonic deity: she is usually depicted with baskets of fruit, a dog, or the prow of a ship or an oar. Her attributes are shared with the Hellenistic-Egyptian goddess Isis, suggesting a connection to the Isis of the Suebi mentioned by Tacitus. Despite her obvious importance, she is not attested in later periods.
Another goddess, Hludana, is also attested from five votive inscriptions along the Rhine; her name is cognate with Old Norse Hlóðyn, one of the names of Jörð (earth), the mother of Thor. It has thus been suggested she may have been a chthonic deity, possibly also connected to later attested figures such as Hel, Huld and Frau Holle.
Post-Roman era
*Tiwaz/Tyr
The god *Tiwaz (Tyr) may be attested as early as 450-350 BCE on the Negau helmet. Etymologically, his name is related to the Vedic Dyaus and Greek Zeus, indicating an origin in the reconstructed Indo-European sky deity *Dyēus. He is thus the only attested Germanic god who was already important in Indo-European times. When the days of the week were translated into Germanic, Tyr was associated with the Roman god Mars, so that dies Martis ("day of Mars") became "Tuesday" ("day of *Tiwaz/Tyr"). A votive inscription to "Mars Thingsus" (Mars of the thing) suggests he also had a connection to the legal sphere.
Scholars generally believe that Tyr became less and less important in the Scandinavian branch of Germanic paganism over time and had largely ceased to be worshiped by the Viking Age. He plays a major role in only one myth, the binding of the monstrous wolf Fenrir, during which Tyr loses his hand.
*Thunraz/Thor
Thor was the most widely known and perhaps the most widely worshiped god in Viking Age Scandinavia. When the days of the week were translated into Germanic, he was associated with Jupiter, so that dies Iovis ("day of Jupiter") became "Thursday" ("day of *Thunraz/Thor"). This contradicts the earlier interpretatio Romana, in which Thor is generally thought to correspond to Hercules. Textual sources such as Adam of Bremen, as well as the association with Jupiter in the weekday names, suggest he may have been the head of the pantheon, at least in some times and places. Alternatively, Thor's hammer may have been equated with Jupiter's lightning bolt. Outside of Scandinavia, he appears on the Nordendorf fibulae (6th or 7th century CE) and in the Old Saxon Baptismal Vow (9th century CE). The Oak of Jupiter, destroyed by Saint Boniface among the Chatti in 723 CE, is also usually presumed to have been dedicated to Thor.
Viking Age runestones as well as the Nordendorf fibulae appear to call upon Thor to bless objects. The most important archaeological evidence for the worship of Thor in Viking Age Scandinavia is found in the form of Thor's hammer pendants. Myths about Thor are only attested from Scandinavia, and it is unclear how representative the Nordic corpus is for the entire Germanic region. As Thor's name means "thunder", scholars since Jacob Grimm have interpreted him as a sky and weather god. In Norse mythology, he shares features with other Indo-European thunder gods, including his slaying of monsters; these features likely derive from a common Indo-European source. In the extant mythology of Thor, however, he has very little association with thunder.
*Wodanaz/Odin
Odin (*Wodanaz) plays the main role in a number of myths as well as in well-attested Norse rituals; he appears to have been venerated by many Germanic peoples in the early Middle Ages, though his exact characteristics probably varied in different times and places. In the Germanic days of the week, Odin is equated with Mercury (dies Mercurii, "day of Mercury", which became "Wednesday", "day of *Wodanaz/Odin"), an association that accords with the usual scholarly interpretation of the interpretatio Romana and is also found in early medieval authors. It may have been inspired by both gods' connections to arcane knowledge and the dead.
The age of the cult of Odin is disputed. The earliest clear reference to Odin by name is found on a C-bracteate discovered in Denmark in 2020; dated to as early as the 400s, the bracteate features a Proto-Norse Elder Futhark inscription reading "He is Odin's man". Further archaeological evidence for Odin is found in the form of his later bynames on runic inscriptions from Danish bogs of the 4th or 5th century CE; other possible archaeological attestations may date to the 3rd century CE. Images of Odin dating to the late migration period are known from Frisia, but appear to have come there from Scandinavia.
In Norse myths, Odin plays one of the most important roles of all the gods. He is also attested in myths outside of the Norse area. In the mid-7th century CE, the Franco-Burgundian chronicler Fredegar narrates that "Wodan" gave the Lombards their name; this story also appears in the roughly contemporary Origo gentis Langobardorum and later in the Historia Langobardorum of Paul the Deacon (790 CE). In Germany, Odin is attested as part of a divine triad on the Nordendorf fibulae and the second Merseburg charm, in which he heals Balder's horse. In England, he appears as a healing magician in the Nine Herbs Charm and in Anglo-Saxon genealogies. It is disputed whether he was worshiped among the Goths.
*Frijjō/Frigg
The only major Norse goddess also found in the pre-Viking period is Frigg, Odin's wife. When the Germanic days of the week were translated, Frigg was equated with Venus, so that dies Veneris ("day of Venus") became "Friday" ("day of *Frijjō/Frigg"). This translation suggests a connection to fertility and sexuality, and her name is etymologically derived from an Indo-European root meaning "love". In the stories of how the Lombards got their name, Frea (Frigg) plays an important role in tricking her husband Vodan (Odin) into giving the Lombards victory. She is also mentioned in the Merseburg Charms, where she displays magical abilities. The only Norse myth in which Frigg plays a major role is the death of Baldr, and there is little evidence for a cult of Frigg in Scandinavia.
Other gods
The god Baldr is attested from Scandinavia, England, and Germany; except for the Old High German Second Merseburg Charm (9th century CE), all literary references to the god are from Scandinavia and nothing is known of his worship.
The god Freyr was the most important fertility god of the Viking Age. He is sometimes known as Yngvi-Freyr, which would associate him with the god or hero *Ingwaz, the presumed progenitor of the Inguaeones found in Tacitus's Germania, whose name is attested in the Old English rune poem (8th or 9th century CE) as Ing. A minor god named Forseti is attested in a few Old Norse sources; he is generally associated with the Frisian god Fosite who was worshiped on Helgoland, but this connection is uncertain. The Old Saxon Baptismal Formula and some Old English genealogies mention a god Saxnot, who appears to be the founder of the Saxons; some scholars identify him as a form of Tyr, while others propose that he may be a form of Freyr.
The most important goddess in the recorded Old Norse pantheon was Freyr's sister, Freyja, who features in more myths and appears to have been worshiped more than Frigg, Odin's wife. She was associated with sexuality and fertility, as well as war, death, and magic. It is unclear how old the worship of Freyja is, and there is no indisputable evidence for her or any of the Vanir gods in the southern Germanic area. There is considerable debate about whether Frigg and Freyja were originally the same goddess or aspects of a single goddess.
Besides Freyja, many gods and goddesses are only known from Scandinavia, including Ægir, Höðr, Hönir, Heimdall, Idunn, Loki, Njörðr, Sif, and Ullr. There are a number of minor or regional gods mentioned in various medieval Norse sources: in some cases, it is unclear whether or not they are post-conversion literary creations. Many regional or highly local gods and spirits are probably not mentioned in the sources at all. It is also likely that many Roman-era and continental Germanic gods do not appear in Norse mythology.
Places and objects of worship
Divine images
Julius Caesar and Tacitus claimed that the Germani did not venerate their gods in human form; however, this is a topos of ancient ethnography when describing supposedly primitive people. Archaeologists have found Germanic statues that appear to depict gods, and Tacitus appears to contradict himself when discussing the cult of Nerthus (Germania chapter 40); the Eddic poem Hávamál also mentions wooden statues of gods, while Gregory of Tours (Historia Francorum II: 29) mentions wooden statues and ones made of stone and metal. Archaeologists have not found any divine statues dating from after the end of the migration period; it is likely that they were destroyed during Christianization, as is repeatedly depicted in the Norse sagas.
Roughly carved wooden male and female figures that may depict gods are frequent finds in bogs; these figures generally follow the natural form of a branch. It is unclear whether the figures themselves were sacrifices or if they were the beings to whom the sacrifice was given. Most date from the first several centuries CE. For the pre-Roman Iron Age, board-like statues that were set up in dangerous places encountered in everyday life are also attested. Most statues were made out of oak wood. Small animal figurines of cattle and horses are also found in bogs; some may have been worn as amulets while others seem to have been placed by hearths before they were sacrificed.
Holy sites from the migration period frequently contain gold bracteates and gold foil figures that depict obviously divine figures. The bracteates are originally based on motifs found on Roman gold medallions and coins of the era of Constantine the Great, but became highly stylized. A few of them have runic inscriptions that may be names of Odin. Others, such as Trollhättan-A, may display scenes known from later mythological texts.
The stone altars of the matronae and Nehalennia show women in Germanic dress, but otherwise follow Roman models, while images of Mercury, Hercules, or Mars do not show any difference from Roman models. Many bronze and silver statues of Roman gods have been found throughout Germania, some made by the Germani themselves, suggesting an appropriation of these figures by the Germani. Heiko Steuer suggests that these statues likely were reinterpreted as local, Germanic gods and used on home altars: a find from Odense dating c. 100-300 CE includes statues of Mercury, Mars, Jupiter, and Apollo. Imported Roman swords, found from Scandinavia to the Black Sea, frequently depicted the Roman god Mars Ultor ("Mars the Avenger").
Sacred places
Caesar and Tacitus claimed that the ancient Germans had no temples and only worshipped in sacred groves. However, while groves, trees, bogs, springs, and lakes undoubtedly were seen as holy places by the Germani, there is archaeological evidence for temples. Archaeology also indicates that neolithic structures and Bronze Age tumuli were used as places of worship. Steuer argues that finds of sacrificial places enclosed with a palisade in England indicate that similarly enclosed areas in northern Germany and Jutland may have been holy sites. Large fire pits near settlements, found in many sites including those from the Bronze Age, the pre-Roman Iron Age, and the migration period, probably served as ritual, political, and social locations. Large halls in settlements probably also fulfilled ceremonial religious functions.
Tacitus mentions a temple of the goddess Tamfana in Annales 1.51, and also uses the word templum in reference to Nerthus in Germania, though this could simply mean a consecrated place rather than a building. Later Christian sources refer to temples used by the Franks, Lombards, continental Saxons, and Anglo-Saxons, while the post-conversion Lex Frisionum (Frisian Law) continued to include punishments for those who broke into or desecrated temples. A temple dedicated to Hercules from the territory of the Batavi at Empel in the Netherlands shows a typical Romano-Celtic building style. Other Roman-style temples dedicated to the matronae are known from the Lower Rhine region.
An early Scandinavian temple has been identified at Uppåkra in modern Sweden. The building, a very large hall with two entrances, was rebuilt on exactly the same site seven times from 200 to 950 CE. Architecturally, the temple resembles later Scandinavian stave churches in construction. The building was surrounded by animal bones and a few human bones. A similar building has been found at Møllebækvej on Zealand dating from the 3rd century CE, while the later stages of a ritual house at Tissø in Zealand (850-950 CE) likewise resemble a stave church.
The most important description of a Scandinavian temple is of the Temple of Uppsala by Adam of Bremen (11th century): he describes the temple as containing idols of Thor, Wodan (Odin), and Fricco (Freyr). Glosses mention the existence of a large tree and a well nearby where sacrifices were made. Some aspects of Adam's description appear to be inaccurate, possibly influenced by Norse mythology. Archaeology has shown that Uppsala became an important cult center around 500 CE, with a main royal hall dating from 600 to 800 CE and having large doors with iron spirals flat against the wood. Four large grave mounds were constructed southwest of the main hall, and there were ritual roads with rows of large wooden posts and lines of fireplaces. The arrangements indicate that there were different processions and rituals both inside and around Gamla Uppsala. The only material remains from the rituals once performed there are those of animals; the age of the animals indicates that they were deposited in March, which agrees with the written sources on the Dísablót.
Sacred trees, groves, and poles
Sacred trees occur as important symbols in many pre-modern cultures, particularly those of Indo-European origin. Modern scholars, on the basis of Greco-Roman religious understanding, usually distinguish between sacred groves and trees, where a god is worshiped, and the worship of trees as divine (tree cult); it is unclear whether this distinction is valid for Germanic religion. Tacitus describes the ancient Germani as worshiping in sacred groves, including the grove of fetters of the Semnones and the grove where the Alcis were worshipped by the Nahanarvali. Tacitus mentions the following functions for Germanic sacred groves: the display of captured enemy standards and weapons, the keeping of the animal-shaped standards of the Batavi (Tac. hist. 4.22), and human sacrifice. Reconstructed Germanic words for sacred groves include *nimið-, *alh-, and *haruh-, which may have originally described different functions of the groves.
Physical trees or poles could represent either a world tree (Yggdrasil in Norse mythology) or a world pillar. Modern scholars describe such a sacred tree as an axis mundi ("hub of the world"), a center that runs along and connects multiple levels of the universe while also representing the world itself. In Roman Germania, columns depicting the god Jupiter as a rider are commonly found; they probably have a Celtic background and some connection to the notion of the world tree or column. One example of a sacred tree during the Middle Ages is the Oak of Jupiter purportedly felled by Saint Boniface in 723 CE in Hesse. Adam of Bremen mentions a sacred tree at the Temple of Uppsala, but the existence of this tree is controversial among scholars. It is also mentioned in Hervarar saga, and it may have been the central focus of the site, representing the world tree Yggdrasil. Further support for the existence of votive trees is provided by a birch root surrounded by animal skulls that was excavated at Frösö. Pagan Anglo-Saxon settlements often contained large standing poles, which were condemned as focuses of pagan worship by the 7th-century English bishop Aldhelm. The Irminsul ("great pillar" in Old Saxon) among the continental Saxons may also have been part of such a pole cult.
Personnel and devotees
Animal symbolism and warrior bands
Post-conversion Norse texts mention dedicated groups of warriors, some of whom, the berserkir (berserkers) and úlfheðnar ("wolf-skins"), were associated with bears and wolves respectively. In Ynglinga saga, Snorri Sturluson associates these warriors with Odin. Many scholars argue that warrior bands, with their initiation rites and forms of organization, can be traced to the time of Tacitus, who discusses several warrior bands and societies among the Germani. These scholars further argue that these bands can be traced further back to Proto-Indo-European precursors to some extent. Other scholars, such as Hans Kuhn, dispute continuity between Norse and earlier warrior bands. Inhumation and cremation graves containing bear claws, teeth, and hides are found throughout the Germanic-speaking area, being especially common on the Elbe from 100 BCE to 100 CE and in Scandinavia from the 2nd to 5th centuries CE; these may be connected to the warrior societies.
Archaeologists have found metal objects, especially on weapons and brooches, decorated with animal art and dating from the 4th to the 12th centuries CE in Scandinavia. Animals depicted include snakes, birds of prey, wolves, and boars. Some scholars have discussed these images as related to shamanism, while others view animal art as similar to Skaldic kennings, capable of expressing both Christian and pagan meanings.
Ritual specialists
Scholars are divided as to the nature and function of Germanic ritual specialists: many religious studies scholars believe that there was originally no class of priests and that cultic functions were mostly carried out by kings and chieftains; many philologists, however, argue on the basis of reconstructed words for "priest" that a specialized class of priests existed. Caesar says the Germani had no druids, while Tacitus mentions several priests. Roman sources do not otherwise mention Germanic cultic functionaries. Later descriptions of rituals similar to those mentioned in Tacitus do not mention any ritual specialists; however, it is reasonable to assume that such specialists continued to exist. While ritual specialists in Viking Age Scandinavia may have had defining insignia such as staffs and oath rings, it is unclear if they formed a hierarchy, and they seem to have fulfilled non-cultic roles in society as well.
Caesar and Tacitus both mention women engaged in casting lots and prophecy, and there are some other indications of female ritual specialists. Tacitus and the Roman writer Cassius Dio (163-c. 229 CE) both mention several seeresses by name, while an ostracon from Egypt attests one living in the second century CE. A female ritual specialist named Gambara appears in Paul the Deacon (8th century). A gap in the historical record then occurs until the North Germanic record begins over a millennium later, when the Old Norse sagas frequently mention female ritual specialists among the North Germanic peoples, both in the form of priestesses and diviners. Both Tacitus and Eiríks saga rauða mention a seeress prophesying from a raised platform, while Eiríks saga rauða also mentions the use of a wand.
Practices
Burial practices
Some insight into Germanic religion can be provided by burial customs, which varied widely in time and space but nonetheless show a few consistent practices. The Germanic peoples generally practiced cremation until the first century BCE, when limited inhumation burials began to appear. The ashes were usually placed in an urn, but the use of pits and mounds, and cases in which the ashes were left on the pyre after cremation, are also known. In Viking Age Scandinavia, as much as half the population may not have received any grave, with their ashes scattered or their bodies left unburied. Grave goods, which might be broken and placed in the grave or burnt on the pyre with the body, included clothing, jewelry, food, drink, dishes, and utensils. Beginning in the early 1st century CE, a minority of graves also included weapons. On the continent, inhumation becomes the most common form of burial among the southern Germanic peoples by the end of the migration period, while cremation remains more common in Scandinavia. In the migration and Merovingian periods, graves were often reopened and grave gifts removed, either as grave robbery or as part of an authorized removal. By the Merovingian period, most male burials include weapons.
Often, urns were covered with stones and then surrounded by circles of stones. The urns of the dead were often placed in a mortuary house, which may have served as a cultic structure. Cemeteries might be placed around or reuse old Bronze Age barrows, and later placed near Roman ruins and roads, possibly to ease the passing of the dead into the afterlife. Some graves included burials of horses and dogs; horses may have been meant as conveyances to the afterlife. Burials with dogs are found over a wide area through the migration period; it is possible that they were meant either to protect the deceased in the afterlife or to prevent the return of the dead as a revenant.
From the 1st century CE, inhumation burials in large burial mounds with wooden or stone grave chambers, which contained expensive grave goods and were separate from the normal cemeteries, begin to appear across the entire Germanic area. By the 3rd century, elite burials are attested from Norway to Slovakia, with a large number appearing in Jutland. These graves usually include dishes and tableware: this may have been meant for the deceased to use in the afterlife or may have been used in a funerary meal. In the 400s CE, the practice of erecting elite Reihengräber ("row graves") appears among the continental Germanic peoples: these graves were arranged in rows and contain large amounts of gold, jewelry, ornaments, and other luxury items. Unlike cremation cemeteries, only a few hundred individuals are found buried in Reihengräber cemeteries. Elite chamber graves become especially common in Scandinavia in the 9th and 10th centuries; in these, the body of the deceased was sometimes buried seated with objects in the hands or on the lap.
Stones set up in the shape of a ship are known from Scandinavia, where they are sometimes surrounded by graves or occasionally contain one or more cremations. The earliest ship burial is found in Jutland and dates from the late Roman Imperial period. Another early ship burial comes from outside Scandinavia, near Wremen on the Weser river in northern Germany, dating from the 4th or 5th century CE. Ship burials are attested in England from around 600 CE and from across Scandinavia and areas where Scandinavians traveled beginning around the same time and for centuries afterward. In some cases, the deceased was evidently cremated in the ship before a mound was thrown up over it, as is described by Ahmad ibn Fadlan for the Rus'. Scholars debate the meaning of these burials: the ship may have been a means of transport to the next life or may have represented a feasting hall. Parts of the ships were often left uncovered for extended periods of time.
Divination
Various practices for divining the future are attested for Germanic paganism, some of which were likely only practiced in a particular time or place. The main sources on Germanic divination are Tacitus, Christian early medieval texts of the missionary period (such as penitentials and Frankish capitularies), and various texts describing Scandinavian practices; however, the value of all of these sources for genuine Germanic practices is debated.
The casting and drawing of lots to determine the future is well attested among the Germanic peoples in medieval and ancient texts; linguistic analysis confirms that it was an old practice. As of 2002, about 160 lots made of various materials had been found in Roman-era and migration-period archaeological sites. The most detailed description of Germanic lots is found in Tacitus, Germania, chapter 10. According to Tacitus, the Germani cast lots, made from the wood of fruit-bearing trees and marked with signs, onto a white sheet, after which three lots were drawn by either the head of the family or a priest. While the signs Tacitus mentions have been interpreted as runes, most scholars believe they were simple symbols. Thirteenth-century Icelandic sources also attest the drawing of lots carved with signs; however, there is debate about whether these late sources represent a form of ordeal introduced with Christianity or a continuation of Germanic practice.
Another important form of divination involved animals. The interpretation of the actions of birds is a common practice across the world and is well attested for the Germani and the Norse. More uniquely, Tacitus says the Germani used the whinnying of horses to divine the future. Although there is no later or corroborating evidence for Tacitus's horse-divination, the importance of horses in Germanic religion is well-attested. Both forms of divination might be connected to the portrayal of birds and horses on gold bracteates.
A few other methods of divination are also attested. Tacitus mentions duels as a method of learning the future; while Norse sources attest many duels, none are obviously used for divination. Roman and Christian sources sometimes claimed that the Germanic peoples used the blood or entrails of human sacrifices to divine the future; this may derive from ancient topoi rather than reality, although blood played an important role in pagan ritual. Norse sources include additional forms of divination, such as a form of necromancy known as útiseta ("sitting out"), as well as seiðr rituals.
Feasts and festivals
The evidence suggests that the Germanic peoples had recurrent sacrifices and festivals at certain times of year. Often these feasts involved sacrificial communal meals, ritual drinking, as well as processions and divination. Almost all information on Germanic religious festivals concerns Western Scandinavia, but Tacitus mentions that a sacrifice to the goddess Tamfana took place in the autumn, Bede mentions a festival called Mōdraniht that occurred at the beginning of the year, and Jonas of Bobbio's Life of Saint Columbanus (640s) mentions a festival to Vodan (Odin) held by the Suebi that involved the drinking of beer. On the basis of several informants and possibly textual sources, Adam of Bremen describes a Swedish sacrificial festival held every nine years at the Temple of Uppsala, while Thietmar of Merseburg mentions a similar festival taking place each January at Lejre in Zealand. The Swedish feast known as the Dísablót took place in February, the same time of year as the Old English Solmōnaþ offerings; the only other widely attested festival is Yule, around Christmas. Snorri Sturluson mentions three additional festivals in Ynglinga saga: a festival at the beginning of winter for a good harvest, one at midwinter for fertility, and one at the beginning of summer for victory. The summer festival is not attested elsewhere, but Rudolf Simek argues that the winter festival was probably in honor of the ancestors while another festival in spring was for fertility.
Magic
Magic is an element of religion that seeks to influence the world with the help of the otherworldly through particular rituals, means, or words. Sources on pre-Christian magic among the Germanic peoples are either textual descriptions or archaeological finds of objects. The Germanic languages lack a common word that can be translated as "magic", and there is no indication that the Germanic peoples distinguished between "white" and "black" magic. In Norse texts, the god Odin is especially associated with magic, a connection also found, for instance, in the Old High German Second Merseburg Charm. Although runes are often associated with magic, most scholars no longer believe that runes were in and of themselves regarded as magical.
Migration-age inscriptions on bracteates and later rune stones contain a number of early magical words and formulas, the best attested of which, alu, is found on multiple objects from 200 to 700 CE. Post-conversion Christian sources from continental Europe mention forms of magic including amulets, charms, "witchcraft", divination, and especially weather magic. Old Norse mythology and post-conversion literature also attest various forms of magic, including divination, magic affecting nature (weather or otherwise), spells to make warriors impervious to weapons, spells to strengthen weapons, and spells to harm and distress others.
The term "charm" is used to mean magical poetry, which could be blessings or curses; most attested charms are blessings and seek protection, defense against magic or sickness, and healing; the only form of curses attested outside of literature are calls for death. In Old Norse, a specific meter of alliterative verse was used and some pre-Christian charms have survived inscribed on metal or bone. Otherwise, few charms are attested in Old Norse outside of literature. Later post-conversion Icelandic charms sometimes mention Odin or Thor, but they may reflect Christian conceptions of magic. Numerous charms are attested in Old High German, but only the Merseburg Charms exist in a non-Christianized form. A similar situation exists in Old English, where over 100 charms are attested, including the Nine Herbs Charm, which mentions Wodan (Odin).
Ritual procession
Ritual processions of the idol of a god in some form of vehicle, usually a wagon, are attested in many religions of Europe and Asia. Various archaeological finds indicate the existence of such rituals in Scandinavia as early as the Bronze Age. Ships may also have been used for processions, such as the ship found at the Oberdorla moor in Thuringia from the migration period. The processions are usually interpreted as fertility rites. An image of a Viking-age procession of some sort, including men, women, and carriages, is provided by the Oseberg tapestry fragments.
The earliest written source for a ritual procession in Germanic religion is Tacitus's Germania, chapter 40, where he describes the worship of Nerthus. According to Tacitus, Nerthus's idol is drawn around the land for several days on a cart pulled by cows, before being brought to a lake and cleaned by slaves, who are then drowned in the lake. Tacitus's description is reminiscent of archaeological finds of highly decorated wagons in water and in burials from southern Scandinavia roughly contemporary with Tacitus. A similar ritual is attested for the Goths, who forced Christians to participate during the Gothic persecution of Christians (369-372 CE), as well as among the Franks by Gregory of Tours, although the latter sets his ritual in pre-Germanic Gaul for an eastern goddess. The Frankish Merovingian kings are also attested as having been carried to assemblies in an oxcart, something reminiscent of Tacitus's description. An extensive description of a ritual procession for the god Freyr is found in the Flateyjarbók (1394); it describes Freyr being driven around in a wagon to ensure a good harvest. This and several other post-conversion Scandinavian sources on such processions may derive from oral tradition of the worship of Freyr.
Sacrifices
Archaeology provides evidence of sacrificial offerings of various types. Deposits of valuable objects, including gold and silver, buried in the earth are frequently attested for the period 1-100 CE. While these objects may have been buried with the intention of their being recovered at a later date, it is also possible that they were intended as sacrifices for the gods or for use in the afterlife. Metal objects deposited in springs are attested from Bad Pyrmont and Duchcov, and such objects were also deposited in bogs. There are also examples of hair, clothing, and textiles from c. 500 BCE-200 CE found in Scandinavian wetlands. Gregory of Tours, describing a Frankish shrine near Cologne, depicts worshipers leaving wooden carvings of parts of the human body whenever they felt pain.
Animal sacrifices are attested by bones in various holy places associated with the Przeworsk culture as well as in Denmark, with the sacrificed animals including cattle, horses, pigs, and sheep or goats; there is also evidence for human sacrifice. In Scandinavia, animal bones are often found in bogs and lakes, where a higher proportion of horse bones and young animal bones are found than at settlements. A detailed description of Norse animal sacrifice at Lade is provided by Snorri Sturluson in Hákonar saga góða, although its accuracy is questionable. Evidence of the sacrifice of objects, humans, and animals is also found in settlements throughout Germania, perhaps to mark the beginning of the construction of a building. Dogs buried under the thresholds of houses probably served as protectors.
Human sacrifices are mentioned periodically by Roman authors, usually to stress elements that they found shocking or abnormal. Individual finds of human bodies in bogs, representing all ages and both sexes, show signs of violent death and may have been human sacrifices or victims of capital punishment. There are over 100 bog bodies from Denmark alone, attested from 800 BCE to 200 CE. Human body parts such as skulls were deposited in the same period and as late as 1100 CE. Regularly occurring human sacrifices among the Norse are mentioned by authors such as Thietmar of Merseburg and Adam of Bremen as well as by the Gutasaga. An image on the picture stone Stora Hammars I is usually interpreted as depicting a human sacrifice.
Sacrifices of the weapons of defeated enemies have been uncovered in bogs in Jutland as well as in rivers throughout Germania: such sacrifices probably occurred in other parts of Germania on dry land. Tacitus reports a similar sacrifice and destruction of weapons performed in the forest after Arminius's victory over the Romans at the Battle of the Teutoburg Forest. Large deposits of weapons are attested from 350 BCE to 400 CE, with smaller deposits continuing to be made until 600 CE. Deposits of various sizes were common and often included objects besides weapons, even warships that had been burned and destroyed. They appear to be from a ritual performed over a defeated enemy to commit the weapons to the gods. There is no archaeological evidence for what happened to the warriors who bore the weapons, but Roman sources describe them as being sacrificed as well. A possible exception is the site of Alken Enge bog in Jutland: it contains the crushed and dismembered bodies of about 200 men, aged 13–45 years, who seem to have died on a battlefield. No later Scandinavian sources mention rituals associated with the destruction of weapons, implying that these rites had died out and been forgotten at an early date.
Variations of Germanic paganism
Anglo-Saxon paganism
Continental Germanic mythology
Frankish mythology
Gothic paganism
Old Norse religion
See also
Ancient Celtic religion
Ancient Greek religion
Ancient Iranian religion
Germanic mythology
Hittite mythology and religion
Historical Vedic religion
Religion in ancient Rome
Scythian religion
Slavic paganism
Socratic method
The Socratic method (also known as the method of elenchus or Socratic debate) is a form of argumentative dialogue between individuals, based on asking and answering questions.
In Plato's dialogue "Theaetetus", Socrates describes his method as a form of "midwifery" because it is employed to help his interlocutors develop their understanding in a way analogous to a child developing in the womb. The Socratic method begins with commonly held beliefs and scrutinizes them by way of questioning to determine their internal consistency and their coherence with other beliefs and so to bring everyone closer to the truth.
In modified forms, it is employed today in a variety of pedagogical contexts.
Development
In the second half of the 5th century BC, sophists were teachers who specialized in using the tools of philosophy and rhetoric to entertain, impress, or persuade an audience to accept the speaker's point of view. Socrates promoted an alternative method of teaching, which came to be called the Socratic method.
Socrates began to engage in such discussions with his fellow Athenians after his friend from youth, Chaerephon, visited the Oracle of Delphi, which asserted that no man in Greece was wiser than Socrates. Socrates saw this as a paradox and began using the Socratic method to resolve the conundrum. Diogenes Laërtius, however, wrote that Protagoras invented the "Socratic" method.
Plato famously formalized the Socratic elenctic style in prose—presenting Socrates as the curious questioner of some prominent Athenian interlocutor—in some of his early dialogues, such as Euthyphro and Ion, and the method is most commonly found within the so-called "Socratic dialogues", which generally portray Socrates engaging in the method and questioning his fellow citizens about moral and epistemological issues. But in his later dialogues, such as Theaetetus or Sophist, Plato used a different method in philosophical discussions, namely dialectic.
Method
Elenchus is the central technique of the Socratic method. The Latin form elenchus (plural elenchi) is used in English as the technical philosophical term. The most common adjectival form in English is elenctic; elenchic and elenchtic are also current. The elenchus was also very important in Plato's early dialogues.
Socrates (as depicted by Plato) generally applied his method of examination to concepts such as the virtues of piety, wisdom, temperance, courage, and justice. Such an examination challenged the implicit moral beliefs of the interlocutors, bringing out inadequacies and inconsistencies in their beliefs, and usually resulting in aporia. In view of such inadequacies, Socrates himself professed ignorance. Socrates said that his awareness of his ignorance made him wiser than those who, though ignorant, still claimed knowledge. This claim was based on a reported Delphic oracular pronouncement that no man was wiser than Socrates. While this belief seems paradoxical at first glance, in fact it allowed Socrates to discover his own errors.
Socrates used this claim of wisdom as the basis of moral exhortation. He claimed that the chief goodness consists in caring for the soul, as concerned with moral truth and moral understanding, that "wealth does not bring goodness, but goodness brings wealth and every other blessing, both to the individual and to the state", and that "life without examination [dialogue] is not worth living".
Socrates rarely used the method to actually develop consistent theories, and he even made frequent use of creative myths and allegories. The Parmenides dialogue shows Parmenides using the Socratic method to point out the flaws in the Platonic theory of forms, as presented by Socrates; it is not the only dialogue in which theories normally expounded by Plato's Socrates are broken down through dialectic. Instead of arriving at answers, the method breaks down the theories we hold, to go "beyond" the axioms and postulates we take for granted. Therefore, myth and the Socratic method are not meant by Plato to be incompatible; they have different purposes, and are often described as the "left hand" and "right hand" paths to good and wisdom.
Scholarly debate
In Plato's early dialogues, the elenchus is the technique Socrates uses to investigate, for example, the nature or definition of ethical concepts such as justice or virtue. According to Gregory Vlastos, it has the following steps (a schematic rendering is given after the list):
Socrates' interlocutor asserts a thesis, for example "Courage is endurance of the soul".
Socrates decides whether the thesis is false and targets it for refutation.
Socrates secures his interlocutor's agreement to further premises, for example "Courage is a fine thing" and "Ignorant endurance is not a fine thing".
Socrates then argues, and the interlocutor agrees, that these further premises imply the contrary of the original thesis; in this case, it leads to: "courage is not endurance of the soul".
Socrates then claims he has shown his interlocutor's thesis is false and its negation is true.
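Purely as an illustrative gloss (the propositional notation below is ours, not Vlastos's), the exchange can be written as a simple refutation. Let $p$ be the thesis ("Courage is endurance of the soul") and $q$, $r$ the further premises the interlocutor grants ("Courage is a fine thing", "Ignorant endurance is not a fine thing"). The elenchus then has the form
$$q,\quad r,\quad (q \wedge r) \rightarrow \neg p \;\;\vdash\;\; \neg p,$$
so the interlocutor's own admissions jointly entail the denial of the original thesis. Note that, as Frede's objection below emphasizes, nothing in this pattern establishes any positive alternative to $p$.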
One elenctic examination can lead to a new, more refined, examination of the concept being considered; in this case it invites an examination of the claim: "Courage is wise endurance of the soul". Most Socratic inquiries consist of a series of elenchi and typically end in the puzzlement known as aporia.
Michael Frede points out that Vlastos's conclusion in step No. 5 above makes nonsense of the aporetic nature of the early dialogues. Having shown that a proposed thesis is false is insufficient to conclude that some other competing thesis must be true. Rather, the interlocutors have reached aporia, an improved state of still not knowing what to say about the subject under discussion.
The exact nature of the elenchus is subject to a great deal of debate, in particular concerning whether it is a positive method, leading to knowledge, or a negative method used solely to refute false claims to knowledge. Some qualitative research shows that the use of the Socratic method within a traditional Yeshiva education setting helps students succeed in law school, although it remains an open question as to whether that relationship is causal or merely correlative.
Yet, W. K. C. Guthrie in The Greek Philosophers sees it as an error to regard the Socratic method as a means by which one seeks the answer to a problem, or knowledge. Guthrie claims that the Socratic method actually aims to demonstrate one's ignorance. Socrates, unlike the Sophists, did believe that knowledge was possible, but believed that the first step to knowledge was recognition of one's ignorance. Guthrie writes, "[Socrates] was accustomed to say that he did not himself know anything, and that the only way in which he was wiser than other men was that he was conscious of his own ignorance, while they were not. The essence of the Socratic method is to convince the interlocutor that whereas he thought he knew something, in fact he does not."
Modern applications
Socratic seminar
A Socratic seminar (also known as a Socratic circle) is a pedagogical approach based on the Socratic method, in which dialogue is used to understand the information in a text. Its systematic procedure is used to examine a text through questions and answers founded on the beliefs that all new knowledge is connected to prior knowledge, that all thinking comes from asking questions, and that asking one question should lead to asking further questions. A Socratic seminar is not a debate. The goal of this activity is to have participants work together to construct meaning and arrive at an answer, not for one student or one group to "win the argument".
This approach is based on the belief that participants seek and gain deeper understanding of concepts in the text through thoughtful dialogue rather than memorizing information that has been provided for them. While Socratic seminars can differ in structure, and even in name, they typically involve a passage of text that students must read beforehand, which serves as the basis for the dialogue. Sometimes, a facilitator will structure two concentric circles of students: an outer circle and an inner circle. The inner circle focuses on exploring and analysing the text through the act of questioning and answering. During this phase, the outer circle remains silent. Students in the outer circle are much like scientific observers watching and listening to the conversation of the inner circle. When the text has been fully discussed and the inner circle is finished talking, the outer circle provides feedback on the dialogue that took place. This process alternates with the inner circle students going to the outer circle for the next meeting and vice versa. The length of this process varies depending on the text used for the discussion. The teacher may decide to alternate groups within one meeting, or they may alternate at each separate meeting.
The most significant difference between this activity and most typical classroom activities involves the role of the teacher. In Socratic seminar, the students lead the discussion and questioning. The teacher's role is to ensure the discussion advances regardless of the particular direction the discussion takes.
Various approaches to Socratic seminar
Teachers use Socratic seminar in different ways. The structure it takes may look different in each classroom. While this is not an exhaustive list, teachers may use one of the following structures to administer Socratic seminar:
Inner/outer circle or fishbowl: Students need to be arranged in inner and outer circles. The inner circle engages in discussion about the text. The outer circle observes the inner circle, while taking notes. The outer circle shares their observations and questions the inner circle with guidance from the teacher/facilitator. Students use constructive criticism as opposed to making judgements. The students on the outside keep track of topics they would like to discuss as part of the debrief. Participants in the outer circle can use an observation checklist or notes form to monitor the participants in the inner circle. These tools will provide structure for listening and give the outside members specific details to discuss later in the seminar. The teacher may also sit in the circle but at the same height as the students.
Triad: Students are arranged so that each participant (called a "pilot") in the inner circle has two "co-pilots" sitting behind them on either side. Pilots are the speakers because they are in the inner circle; co-pilots are in the outer circle and only speak during consultation. The seminar proceeds as any other seminar. At a point in the seminar, the facilitator pauses the discussion and instructs the triad to talk to each other. Conversation will be about topics that need more in-depth discussion or a question posed by the leader. Sometimes triads will be asked by the facilitator to come up with a new question. Any time during a triad conversation, group members can switch seats and one of the co-pilots can sit in the pilot's seat. Only during that time is the switching of seats allowed. This structure gives students who may not yet have the confidence to speak in the large group a chance to contribute. This type of seminar involves all students instead of just the students in the inner and outer circles.
Simultaneous seminars: Students are arranged in multiple small groups and placed as far as possible from each other. Following the guidelines of the Socratic seminar, students engage in small group discussions. Simultaneous seminars are typically done with experienced students who need little guidance and can engage in a discussion without assistance from a teacher/facilitator. According to the literature, this type of seminar is beneficial for teachers who want students to explore a variety of texts around a main issue or topic. Each small group may have a different text to read/view and discuss. A larger Socratic seminar can then occur as a discussion about how each text corresponds with one another. Simultaneous seminars can also be used for a particularly difficult text. Students can work through different issues and key passages from the text.
No matter what structure the teacher employs, the basic premise of the seminar/circles is to turn partial control and direction of the classroom over to the students. The seminars encourage students to work together, creating meaning from the text, and to avoid competing for a single correct interpretation. The emphasis is on critical and creative thinking.
Text selection
Socratic seminar texts
A Socratic seminar text is a tangible document that creates a thought-provoking discussion.
The text ought to be appropriate for the participants' current level of intellectual and social development. It provides the anchor for dialogue whereby the facilitator can bring the participants back to the text if they begin to digress. Furthermore, the seminar text enables the participants to create a level playing field – ensuring that the dialogical tone within the classroom remains consistent and pure to the subject or topic at hand. Some practitioners argue that "texts" do not have to be confined to printed texts, but can include artifacts such as objects, physical spaces, and the like.
Pertinent elements of an effective Socratic text
Socratic seminar texts are able to challenge participants' thinking skills by having these characteristics:
Ideas and values: The text must introduce ideas and values that are complex and difficult to summarize. Powerful discussions arise from personal connections to abstract ideas and from implications to personal values.
Complexity and challenge: The text must be rich in ideas and complexity and open to interpretation. Ideally it should require multiple readings, but should be neither far above the participants' intellectual level nor very long.
Relevance to participants' curriculum: An effective text has identifiable themes that are recognizable and pertinent to the lives of the participants. Themes in the text should relate to the curriculum.
Ambiguity: The text must be approachable from a variety of different perspectives, including perspectives that seem mutually exclusive, thus provoking critical thinking and raising important questions. The absence of right and wrong answers promotes a variety of discussion and encourages individual contributions.
Two different ways to select a text
Socratic texts can be divided into two main categories:
Print texts (e.g., short stories, poems, and essays) and non-print texts (e.g. photographs, sculptures, and maps); and
Subject area, which can draw from print or non-print artifacts. As examples, language arts can be approached through poems, history through written or oral historical speeches, science through policies on environmental issues, math through mathematical proofs, health through nutrition labels, and physical education through fitness guidelines.
Questioning methods
Socratic seminars are based upon the interaction of peers. The focus is to explore multiple perspectives on a given issue or topic. Socratic questioning is used to help students apply the activity to their learning. The pedagogy of Socratic questions is open-ended, focusing on broad, general ideas rather than specific, factual information. The questioning technique emphasizes a level of questioning and thinking where there is no single right answer.
Socratic seminars generally start with an open-ended question proposed either by the leader or by another participant. There is no designated first speaker; as individuals participate in Socratic dialogue, they gain experience that enables them to be effective in this role of initial questioner.
The leader keeps the topic focused by asking a variety of questions about the text itself, as well as questions to help clarify positions when arguments become confused. The leader also seeks to coax reluctant participants into the discussion, and to limit contributions from those who tend to dominate. She or he prompts participants to elaborate on their responses and to build on what others have said. The leader guides participants to deepen, clarify, and paraphrase, and to synthesize a variety of different views.
The participants share the responsibility with the leader to maintain the quality of the Socratic circle. They listen actively to respond effectively to what others have contributed. This teaches the participants to think and speak persuasively using the discussion to support their position. Participants must demonstrate respect for different ideas, thoughts and values, and must not interrupt each other.
Questions can be created individually or in small groups. All participants are given the opportunity to take part in the discussion. Socratic circles specify three types of questions to prepare:
Opening questions generate discussion at the beginning of the seminar in order to elicit dominant themes.
Guiding questions help deepen and elaborate the discussion, keeping contributions on topic and encouraging a positive atmosphere and consideration for others.
Closing questions lead participants to summarize their thoughts and learning and personalize what they've discussed.
Challenges and disadvantages
Scholars such as Peter Boghossian suggest that although the method improves creative and critical thinking, there is a flip side to it. He states that teachers who use this method wait for students to make mistakes, creating negative feelings in the class and exposing the student to possible ridicule and humiliation.
Some have countered this thought by stating that the humiliation and ridicule are not caused by the method, but rather result from the student's lack of knowledge. Boghossian notes that even though the questions may be perplexing, they are not originally intended to perplex; rather, such questions provoke the students, who can respond by employing counterexamples.
Psychotherapy
The Socratic method, in the form of Socratic questioning, has been adapted for psychotherapy, most prominently in classical Adlerian psychotherapy, logotherapy, rational emotive behavior therapy, cognitive therapy and reality therapy. It can be used to clarify meaning, feeling, and consequences, as well as to gradually unfold insight, or explore alternative actions.
The Socratic method has also recently inspired a new form of applied philosophy: Socratic dialogue, also called philosophical counseling. In Europe, Gerd B. Achenbach is probably the best-known practitioner, and Michel Weber has proposed another variant of the practice.
See also
Devil's advocate
Harkness table – a teaching method based on the Socratic method
Marva Collins
Pedagogy
The Paper Chase – 1973 film based on a 1971 novel of the same name, dramatizing the use of the Socratic method in law school classes
Socrates Cafe
Socratic questioning
Socratic irony
References
External links
Robinson, Richard, Plato's Earlier Dialectic, 2nd edition (Clarendon Press, Oxford, 1953).
Ch. 2: Elenchus;
Ch. 3: Elenchus: Direct and Indirect
Philosopher.org – 'Tips on Starting your own Socrates Cafe', Christopher Phillips, Cecilia Phillips
Socraticmethod.net Socratic Method Research Portal
How to Use the Socratic Method
UChicago.edu – 'The Socratic Method' by Elizabeth Garrett (1998)
Teaching by Asking Instead of by Telling, an example from Rick Garlikov
Project Gutenberg: Works by Plato
Project Gutenberg: Works by Xenophon (includes some Socratic works)
Project Gutenberg: Works by Cicero (includes some works in the "Socratic dialogue" format)
The Socratic Club
Socratic and Scientific Method
Classics
Classics or classical studies is the study of classical antiquity. In the Western world, classics traditionally refers to the study of Classical Greek and Roman literature and their related original languages, Ancient Greek and Latin. Classics also includes Greco-Roman philosophy, history, archaeology, anthropology, art, mythology and society as secondary subjects.
In Western civilization, the study of the Greek and Roman classics was traditionally considered to be the foundation of the humanities and has traditionally been the cornerstone of a typical elite European education.
Etymology
The word classics is derived from the Latin adjective classicus, meaning "belonging to the highest class of citizens." The word was originally used to describe members of the patrician class, the highest class in ancient Rome. By the 2nd century AD the word was used in literary criticism to describe writers of the highest quality. For example, Aulus Gellius, in his Attic Nights, contrasts "classicus" and "proletarius" writers. By the 6th century AD, the word had acquired a second meaning, referring to pupils at a school. Thus, the two modern meanings of the word, referring both to literature considered to be of the highest quality and to the standard texts used as part of a curriculum, were both derived from Roman use.
History
Middle Ages
In the Middle Ages, classics and education were tightly intertwined; according to Jan Ziolkowski, there is no era in history in which the link was tighter. Medieval education taught students to imitate earlier classical models, and Latin continued to be the language of scholarship and culture, despite the increasing difference between literary Latin and the vernacular languages of Europe during the period.
While Latin was hugely influential, according to the thirteenth-century English philosopher Roger Bacon, "there are not four men in Latin Christendom who are acquainted with the Greek, Hebrew, and Arabic grammars." Greek was rarely studied in the West, and Greek literature was known almost solely in Latin translation. Even the works of major Greek authors whose names remained familiar to educated Europeans, such as Hesiod, were unavailable in Christian Europe, as was most of Plato. Some were rediscovered through Arabic translations; a School of Translators was set up in the border city of Toledo, Spain, to translate from Arabic into Latin.
Along with the unavailability of Greek authors, there were other differences between the classical canon known today and the works valued in the Middle Ages. Catullus, for instance, was almost entirely unknown in the medieval period. The popularity of different authors also waxed and waned throughout the period: Lucretius, popular during the Carolingian Renaissance, was barely read in the twelfth century, while for Quintilian the reverse is true.
Renaissance
The Renaissance led to the increasing study of both ancient literature and ancient history, as well as a revival of classical styles of Latin. From the 14th century, first in Italy and then increasingly across Europe, Renaissance Humanism, an intellectual movement that "advocated the study and imitation of classical antiquity", developed. Humanism saw a reform in education in Europe, introducing a wider range of Latin authors as well as bringing back the study of Greek language and literature to Western Europe. This reintroduction was initiated by Petrarch (1304–1374) and Boccaccio (1313–1375), who commissioned a Calabrian scholar to translate the Homeric poems. The humanist educational reform spread from Italy: in Catholic countries it was adopted by the Jesuits, and in countries that became Protestant, such as England, Germany, and the Low Countries, it was adopted to ensure that future clerics were able to study the New Testament in the original language.
Neoclassicism
The late 17th and 18th centuries are the period in Western European literary history which is most associated with the classical tradition, as writers consciously adapted classical models. Classical models were so highly prized that the plays of William Shakespeare were rewritten along neoclassical lines, and these "improved" versions were performed throughout the 18th century. In the United States, the nation's Founders were strongly influenced by the classics, and they looked in particular to the Roman Republic for their form of government.
From the beginning of the 18th century, the study of Greek became increasingly important relative to that of Latin.
In this period Johann Winckelmann's claims for the superiority of the Greek visual arts influenced a shift in aesthetic judgements, while in the literary sphere, G.E. Lessing "returned Homer to the centre of artistic achievement".
In the United Kingdom, the study of Greek in schools began in the late 18th century. The poet Walter Savage Landor claimed to have been one of the first English schoolboys to write in Greek during his time at Rugby School. In the United States, philhellenism began to emerge in the 1830s, with a turn "from a love of Rome and a focus on classical grammar to a new focus on Greece and the totality of its society, art, and culture".
19th century
The 19th century saw the influence of the classical world, and the value of a classical education, decline, especially in the United States, where the subject was often criticised for its elitism. By the 19th century, little new literature was still being written in Latin – a practice which had continued as late as the 18th century – and a command of Latin declined in importance. Correspondingly, classical education from the 19th century onwards began to increasingly de-emphasise the importance of the ability to write and speak Latin. In the United Kingdom this process took longer than elsewhere. Composition continued to be the dominant classical skill in England until the 1870s, when new areas within the discipline began to increase in popularity.
In the same decade came the first challenges to the requirement of Greek at the universities of Oxford and Cambridge, though it would not be finally abolished for another 50 years.
Though the influence of classics as the dominant mode of education in Europe and North America was in decline in the 19th century, the discipline was rapidly evolving in the same period. Classical scholarship was becoming more systematic and scientific, especially with the "new philology" created at the end of the 18th and beginning of the 19th century. Its scope was also broadening: it was during the 19th century that ancient history and classical archaeology began to be seen as part of classics, rather than separate disciplines.
20th century to present
During the 20th century, the study of classics became less common. In England, for instance, Oxford and Cambridge universities stopped requiring students to have qualifications in Greek in 1920, and in Latin at the end of the 1950s. When the National Curriculum was introduced in England, Wales, and Northern Ireland in 1988, it did not mention the classics. By 2003, only about 10% of state schools in Britain offered any classical subjects to their students at all. In 2016, AQA, the largest exam board for A-Levels and GCSEs in England, Wales and Northern Ireland, announced that it would be scrapping A-Level subjects in Classical Civilisation, Archaeology, and Art History. This left just one out of five exam boards in England which still offered Classical Civilisation as a subject. The decision was immediately denounced by archaeologists and historians, with Natalie Haynes of the Guardian stating that the loss of the A-Level would deprive state school students (93% of all students) of the opportunity to study classics, making it once again the exclusive purview of wealthy private-school students.
However, the study of classics has not declined as fast elsewhere in Europe. In 2009, a review of Meeting the Challenge, a collection of conference papers about the teaching of Latin in Europe, noted that though there is opposition to the teaching of Latin in Italy, it is nonetheless still compulsory in most secondary schools. The same may also be said of France and Greece. Indeed, Ancient Greek is one of the compulsory subjects in Greek secondary education, whereas in France, Latin is one of the optional subjects that can be chosen in a majority of middle schools and high schools. Ancient Greek is also still taught, but not as much as Latin.
Subdisciplines
One of the most notable characteristics of the modern study of classics is the diversity of the field. Although traditionally focused on ancient Greece and Rome, the study now encompasses the entire ancient Mediterranean world, thus expanding the studies to Northern Africa and parts of the Middle East.
Philology
Philology is the study of language preserved in written sources; classical philology is thus concerned with understanding any texts from the classical period written in the classical languages of Latin and Greek.
The roots of classical philology lie in the Renaissance, as humanist intellectuals attempted to return to the Latin of the classical period, especially of Cicero, and as scholars attempted to produce more accurate editions of ancient texts. Some of the principles of philology still used today were developed during this period; for instance, the observation that if a manuscript could be shown to be a copy of an earlier extant manuscript, then it provides no further evidence of the original text, was made as early as 1489 by Angelo Poliziano. Other philological tools took longer to be developed: the first statement, for instance, of the principle that a more difficult reading should be preferred over a simpler one, was in 1697 by Jean Le Clerc.
The modern discipline of classical philology began in Germany at the turn of the nineteenth century. It was during this period that scientific principles of philology began to be put together into a coherent whole, in order to provide a set of rules by which scholars could determine which manuscripts were most accurate. This "new philology", as it was known, centered around the construction of a genealogy of manuscripts, with which a hypothetical common ancestor, closer to the original text than any existing manuscript, could be reconstructed.
Archaeology
Classical archaeology is the oldest branch of archaeology, with its roots going back to J.J. Winckelmann's work on Herculaneum in the 1760s. It was not until the last decades of the 19th century, however, that classical archaeology became part of the tradition of Western classical scholarship. It was included as part of Cambridge University's Classical Tripos for the first time after the reforms of the 1880s, though it did not become part of Oxford's Greats until much later.
The second half of the 19th century saw Schliemann's excavations of Troy and Mycenae; the first excavations at Olympia and Delos; and Arthur Evans' work in Crete, particularly on Knossos. This period also saw the foundation of important archaeological associations (e.g. the Archaeological Institute of America in 1879), including many foreign archaeological institutes in Athens and Rome (the American School of Classical Studies at Athens in 1881, British School at Athens in 1886, American Academy in Rome in 1895, and British School at Rome in 1900).
More recently, classical archaeology has taken little part in the theoretical changes in the rest of the discipline, largely ignoring the popularity of "New Archaeology", which emphasized the development of general laws derived from studying material culture, in the 1960s. New Archaeology is still criticized by traditionally minded scholars of classical archaeology despite a wide acceptance of its basic techniques.
Art history
Some art historians focus their study on the development of art in the classical world. Indeed, the art and architecture of Ancient Rome and Greece is very well regarded and remains at the heart of much Western art today. For example, Ancient Greek architecture gave us the Classical Orders: Doric, Ionic, and Corinthian. The Parthenon is still the architectural symbol of the classical world.
Greek sculpture is well known and we know the names of several Ancient Greek artists: for example, Phidias.
Ancient history
With philology, archaeology, and art history, scholars seek understanding of the history and culture of a civilization, through critical study of the extant literary and physical artefacts, in order to compose and establish a continual historic narrative of the Ancient World and its peoples. The task is difficult due to a dearth of physical evidence: for example, Sparta was a leading Greek city-state, yet little evidence of it survives to study, and what is available comes from Athens, Sparta's principal rival; likewise, the Roman Empire destroyed most evidence (cultural artefacts) of earlier, conquered civilizations, such as that of the Etruscans.
Philosophy
The English word "philosophy" comes from the Greek word φιλοσοφία, meaning "love of wisdom", probably coined by Pythagoras. Along with the word itself, the discipline of philosophy as we know it today has its roots in ancient Greek thought, and according to Martin West "philosophy as we understand it is a Greek creation". Ancient philosophy was traditionally divided into three branches: logic, physics, and ethics. However, not all of the works of ancient philosophers fit neatly into one of these three branches. For instance, Aristotle's Rhetoric and Poetics have been traditionally classified in the West as "ethics", but in the Arabic world were grouped with logic; in reality, they do not fit neatly into either category.
From the last decade of the eighteenth century, scholars of ancient philosophy began to study the discipline historically. Previously, works on ancient philosophy had been unconcerned with chronological sequence and with reconstructing the reasoning of ancient thinkers; with what Wolfgang-Rainer Mann calls "New Philosophy", this changed.
Reception studies
Another discipline within the classics is "reception studies", which developed in the 1960s at the University of Konstanz.
Reception studies is concerned with how students of classical texts have understood and interpreted them.
As such, reception studies is interested in a two-way interaction between reader and text, taking place within a historical context.
Though the idea of an "aesthetics of reception" was first put forward by Hans Robert Jauss in 1967, the principles of reception theory go back much earlier than this.
As early as 1920, T. S. Eliot wrote that "the past [is] altered by the present as much as the present is directed by the past"; Charles Martindale describes this as a "cardinal principle" for many versions of modern reception theory.
Classical Greece
Ancient Greece was the civilization belonging to the period of Greek history lasting from the Archaic period, beginning in the eighth century BC, to the Roman conquest of Greece after the Battle of Corinth in 146 BC. The Classical period, during the fifth and fourth centuries BC, has traditionally been considered the height of Greek civilisation. The Classical period of Greek history is generally considered to have begun with the first and second Persian invasions of Greece at the start of the Greco-Persian wars, and to have ended with the death of Alexander the Great.
Classical Greek culture had a powerful influence on the Roman Empire, which carried a version of it to many parts of the Mediterranean region and Europe; thus Classical Greece is generally considered to be the seminal culture which provided the foundation of Western civilization.
Language
Ancient Greek is the historical stage in the development of the Greek language spanning the Archaic (c. 9th to 6th centuries BC), Classical (5th to 4th centuries BC), and Hellenistic (3rd century BC to 6th century AD) periods of ancient Greece and the ancient world. It is predated in the 2nd millennium BC by Mycenaean Greek. Its Hellenistic phase is known as Koine ("common") or Biblical Greek, and its late period mutates imperceptibly into Medieval Greek. Koine is regarded as a separate historical stage of its own, although in its earlier form it closely resembles Classical Greek. Prior to the Koine period, Greek of the classical and earlier periods included several regional dialects.
Ancient Greek was the language of Homer and of classical Athenian historians, playwrights, and philosophers. It has contributed many words to the vocabulary of English and many other European languages, and has been a standard subject of study in Western educational institutions since the Renaissance. Latinized forms of Ancient Greek roots are used in many of the scientific names of species and in other scientific terminology.
Literature
The earliest surviving works of Greek literature are epic poetry. Homer's Iliad and Odyssey are the earliest to survive to us today, probably composed in the eighth century BC. These early epics were oral compositions, created without the use of writing.
Around the same time that the Homeric epics were composed, the Greek alphabet was introduced; the earliest surviving inscriptions date from around 750 BC.
European drama was invented in ancient Greece. Traditionally this was attributed to Thespis, around the middle of the sixth century BC, though the earliest surviving work of Greek drama is Aeschylus' tragedy The Persians, which dates to 472 BC. Early Greek tragedy was performed by a chorus and two actors, but by the end of Aeschylus' life, a third actor had been introduced, either by him or by Sophocles. The last surviving Greek tragedies are the Bacchae of Euripides and Sophocles' Oedipus at Colonus, both from the end of the fifth century BC.
Surviving Greek comedy begins later than tragedy; the earliest surviving work, Aristophanes' Acharnians, comes from 425 BC. However, comedy dates back as early as 486 BC, when the Dionysia added a competition for comedy to the much earlier competition for tragedy. The comedy of the fifth century is known as Old Comedy, and it comes down to us solely in the eleven surviving plays of Aristophanes, along with a few fragments. Sixty years after the end of Aristophanes' career, the next author of comedies to have any substantial body of work survive is Menander, whose style is known as New Comedy.
Mythology and religion
Greek mythology is the body of myths and legends belonging to the ancient Greeks concerning their gods and heroes, the nature of the world, and the origins and significance of their own cult and ritual practices. They were a part of religion in ancient Greece. Modern scholars refer to the myths and study them in an attempt to throw light on the religious and political institutions of Ancient Greece and its civilization, and to gain understanding of the nature of myth-making itself.
Greek religion encompassed the collection of beliefs and rituals practiced in ancient Greece in the form of both popular public religion and cult practices. These different groups varied enough for it to be possible to speak of Greek religions or "cults" in the plural, though most of them shared similarities. Greek religion also extended beyond mainland Greece to neighbouring islands.
Many Greek people recognized the major gods and goddesses: Zeus, Poseidon, Hades, Apollo, Artemis, Aphrodite, Ares, Dionysus, Hephaestus, Athena, Hermes, Demeter, Hestia and Hera; though philosophies such as Stoicism and some forms of Platonism used language that seems to posit a transcendent single deity. Different cities often worshipped the same deities, sometimes with epithets that distinguished them and specified their local nature.
Philosophy
The earliest surviving philosophy from ancient Greece dates back to the 6th century BC, when according to Aristotle Thales of Miletus was considered to have been the first Greek philosopher. Other influential pre-Socratic philosophers include Pythagoras and Heraclitus. The most famous and significant figures in classical Athenian philosophy, from the 5th to the 3rd centuries BC, are Socrates, his student Plato, and Aristotle, who studied at Plato's Academy before founding his own school, known as the Lyceum. Later Greek schools of philosophy, including the Cynics, Stoics, and Epicureans, continued to be influential after the Roman annexation of Greece, and into the post-Classical world.
Greek philosophy dealt with a wide variety of subjects, including political philosophy, ethics, metaphysics, ontology, and logic, as well as disciplines which are not today thought of as part of philosophy, such as biology and rhetoric.
Classical Rome
Language
The language of ancient Rome was Latin, a member of the Italic family of languages. The earliest surviving inscription in Latin comes from the 7th century BC, on a brooch from Palestrina. Latin from between this point and the early 1st century BC is known as Old Latin. Most surviving Latin literature is Classical Latin, from the 1st century BC to the 2nd century AD. Latin then evolved into Late Latin, in use during the late antique period. Late Latin survived long after the end of classical antiquity, and was finally replaced by written Romance languages around the 9th century AD. Along with literary forms of Latin, there existed various vernacular dialects, generally known as Vulgar Latin, in use throughout antiquity. These are mainly preserved in sources such as graffiti and the Vindolanda tablets.
Literature
Latin literature seems to have started in 240 BC, when a Roman audience saw a play adapted from the Greek by Livius Andronicus. Andronicus also translated Homer's Odyssey into Saturnian verse. The poets Ennius, Accius, and Pacuvius followed. Their work survives only in fragments; the earliest Latin authors whose work survives in full are the playwrights Plautus and Terence. Much of the best known and most highly regarded Latin literature comes from the classical period, with poets such as Virgil, Horace, and Ovid; historians such as Julius Caesar and Tacitus; orators such as Cicero; and philosophers such as Seneca the Younger and Lucretius. Late Latin authors include many Christian writers such as Lactantius, Tertullian and Ambrose; non-Christian authors, such as the historian Ammianus Marcellinus, are also preserved.
History
According to legend, the city of Rome was founded in 753 BC; in reality, there had been a settlement on the site since around 1000 BC, when the Palatine Hill was settled. The city was originally ruled by kings, first Roman, and then Etruscan – according to Roman tradition, the first Etruscan king of Rome, Tarquinius Priscus, ruled from 616 BC. Over the course of the 6th century BC, the city expanded its influence over the entirety of Latium. Around the end of the 6th century – traditionally in 510 BC – the kings of Rome were driven out, and the city became a republic.
Around 387 BC, Rome was sacked by the Gauls following the Battle of the Allia. It soon recovered from this humiliating defeat, however, and in 381 the inhabitants of Tusculum in Latium were made Roman citizens. This was the first time Roman citizenship was extended in this way. Rome went on to expand its area of influence, until by 269 the entirety of the Italian peninsula was under Roman rule. Soon afterwards, in 264, the First Punic War began; it lasted until 241. The Second Punic War began in 218, and by the end of that year, the Carthaginian general Hannibal had invaded Italy. The war saw Rome's worst defeat to that point at Cannae; the largest army Rome had yet put into the field was wiped out, and one of the two consuls leading it was killed. However, Rome continued to fight, annexing much of Spain and eventually defeating Carthage, ending her position as a major power and securing Roman preeminence in the Western Mediterranean.
Legacy of the classical world
The classical languages of the Ancient Mediterranean world influenced every European language, imparting to each a learned vocabulary of international application. Thus, Latin grew from a highly developed cultural product of the Golden and Silver eras of Latin literature to become the international lingua franca in matters diplomatic, scientific, philosophic and religious, until the 17th century. Long before this, Latin had evolved into the Romance languages and Ancient Greek into Modern Greek and its dialects. In the specialised science and technology vocabularies, the influence of Latin and Greek is notable. Ecclesiastical Latin, the Roman Catholic Church's official language, remains a living legacy of the classical world in the contemporary world.
Latin had an impact far beyond the classical world. It continued to be the pre-eminent language for serious writings in Europe long after the fall of the Roman empire. The modern Romance languages (French, Italian, Portuguese, Romanian, Spanish, Galician, Catalan) all derive from Latin. Latin is still seen as a foundational aspect of European culture.
The legacy of the classical world is not confined to the influence of classical languages. The Roman empire was taken as a model by later European empires, such as the Spanish and British empires. Classical art has been taken as a model in later periods – medieval Romanesque architecture and Enlightenment-era neoclassical literature were both influenced by classical models, to take but two examples, while James Joyce's Ulysses is one of the most influential works of twentieth-century literature.
See also
Classical tradition
Great Books of the Western World
Neoclassicism
Outline of classical studies
Outline of ancient Greece
Outline of ancient Rome
References
Citations
Sources
Works cited
Further reading
General
Art and archaeology
History, Greek
History, Roman
Literature
Philology
Philosophy
External links
Electronic Resources for Classicists by the University of California, Irvine.
Perseus Project website at Tufts University
Alpheios Project website
Antipositivism
In social science, antipositivism (also interpretivism, negativism or antinaturalism) is a theoretical stance which proposes that the social realm cannot be studied with the methods of investigation utilized within the natural sciences, and that investigation of the social realm requires a different epistemology. Fundamental to that antipositivist epistemology is the belief that the concepts and language researchers use in their research shape their perceptions of the social world they are investigating and seeking to define.
Interpretivism (anti-positivism) developed among researchers dissatisfied with post-positivism, the theories of which they considered too general and ill-suited to reflect the nuance and variability found in human interaction. Because the values and beliefs of researchers cannot fully be removed from their inquiry, interpretivists believe research on human beings by human beings cannot yield objective results. Thus, rather than seeking an objective perspective, interpretivists look for meaning in the subjective experiences of individuals engaging in social interaction. Many interpretivist researchers immerse themselves in the social context they are studying, seeking to understand and formulate theories about a community or group of individuals by observing them from the inside. Interpretivism is an inductive practice influenced by philosophical frameworks such as hermeneutics, phenomenology, and symbolic interactionism. Interpretive methods are used in many fields of the social sciences, including human geography, sociology, political science, cultural anthropology, among others.
History
Beginning with Giambattista Vico in the early eighteenth century, and later with Montesquieu, the study of natural history and the study of human history became separate fields of intellectual enquiry. Natural history is not under human control, whereas human history is a human creation. As such, antipositivism is informed by an epistemological distinction between the natural world and the social realm. The natural world can only be understood by its external characteristics, whereas the social realm can be understood both externally and internally, and thus can be known.
In the early nineteenth century, intellectuals, led by the Hegelians, questioned the prospect of empirical social analysis. Karl Marx died before the establishment of formal social science, but nonetheless rejected the sociological positivism of Auguste Comte—despite his attempt to establish a historical materialist science of society.
The enhanced positivism of Émile Durkheim served as a foundation of modern academic sociology and social research, yet retained many mechanical elements of its predecessor. Hermeneuticians such as Wilhelm Dilthey theorized in detail on the distinction between natural and social science ('Geisteswissenschaft'), whilst neo-Kantian philosophers such as Heinrich Rickert maintained that the social realm, with its abstract meanings and symbolisms, is inconsistent with scientific methods of analysis. Edmund Husserl, meanwhile, negated positivism through the rubric of phenomenology.
At the turn of the twentieth century, the first wave of German sociologists formally introduced verstehende (interpretive) sociological antipositivism, proposing research should concentrate on human cultural norms, values, symbols, and social processes viewed from a resolutely subjective perspective. As an antipositivist, however, one seeks relationships that are not as "ahistorical, invariant, or generalizable" as those pursued by natural scientists.
The interaction between theory (or constructed concepts) and data is always fundamental in social science, and this distinguishes it from physical science. Durkheim himself noted the importance of constructing concepts in the abstract (e.g. "collective consciousness" and "social anomie") in order to form workable categories for experimentation. Both Max Weber and Georg Simmel pioneered the verstehen (or 'interpretative') approach toward social science; a systematic process in which an outside observer attempts to relate to a particular cultural group, or indigenous people, on their own terms and from their own point of view.
Through the work of Simmel in particular, sociology acquired a possible character beyond positivist data-collection or grand, deterministic systems of structural law. Relatively isolated from the sociological academy throughout his lifetime, Simmel presented idiosyncratic analyses of modernity more reminiscent of the phenomenological and existential writers than of Comte or Durkheim, paying particular concern to the forms of, and possibilities for, social individuality. His sociology engaged in a neo-Kantian critique of the limits of human perception.
Antipositivism thus holds there is no methodological unity of the sciences: the three goals of positivism – description, control, and prediction – are incomplete, since they lack any understanding. Science aims at understanding causality so control can be exerted. If this succeeded in sociology, those with knowledge would be able to control the ignorant and this could lead to social engineering.
This perspective has led to controversy over how one can draw the line between subjective and objective research, much less draw an artificial line between environment and human organization (see environmental sociology), and has influenced the study of hermeneutics. The base concepts of antipositivism have expanded beyond the scope of social science; in fact, phenomenology has the same basic principles at its core. Simply put, positivists see sociology as a science, while anti-positivists do not.
Frankfurt School
The antipositivist tradition continued in the establishment of critical theory, particularly the work associated with the Frankfurt School of social research. Antipositivism would be further facilitated by rejections of 'scientism'; or science as ideology. Jürgen Habermas argues, in his On the Logic of the Social Sciences (1967), that "the positivist thesis of unified science, which assimilates all the sciences to a natural-scientific model, fails because of the intimate relationship between the social sciences and history, and the fact that they are based on a situation-specific understanding of meaning that can be explicated only hermeneutically ... access to a symbolically prestructured reality cannot be gained by observation alone."
The sociologist Zygmunt Bauman argued that "our innate tendency to express moral concern and identify with the Other's wants is stifled in modernity by positivistic science and dogmatic bureaucracy. If the Other does not 'fit in' to modernity's approved classifications, it is liable to be extinguished."
See also
Critical theory
Grounded theory
Holism
Humanistic sociology
Methodological dualism
Philosophy of social science
Poststructuralism
Social action
Symbolic interactionism
References
The End of History and the Last Man
The End of History and the Last Man is a 1992 book of political philosophy by American political scientist Francis Fukuyama which argues that with the ascendancy of Western liberal democracy—which occurred after the Cold War (1945–1991) and the dissolution of the Soviet Union (1991)—humanity has reached "not just... the passing of a particular period of post-war history, but the end of history as such: That is, the end-point of mankind's ideological evolution and the universalization of Western liberal democracy as the final form of human government."
Fukuyama draws upon the philosophies and ideologies of Georg Wilhelm Friedrich Hegel and Karl Marx, who define human history as a linear progression, from one socioeconomic epoch to another.
The book expands on Fukuyama's essay "The End of History?", published in the journal The National Interest in the summer of 1989.
Overview
Fukuyama argues that history should be viewed as an evolutionary process, and that the end of history, in this sense, means that liberal democracy is the final form of government for all nations. According to Fukuyama, since the French Revolution, liberal democracy has repeatedly proven to be a fundamentally better system (ethically, politically, economically) than any of the alternatives, and so there can be no progression from it to an alternative system. Fukuyama claims not that events will stop occurring in the future, but rather that all that will happen in the future (even if totalitarianism returns) is that democracy will become more and more prevalent in the long term.
Some argue that Fukuyama presents "American-style" democracy as the only "correct" political system and argues that all countries must inevitably follow this particular system of government. However, many Fukuyama scholars claim this is a misreading of his work. Fukuyama's argument is only that in the future there will be more and more governments that use the framework of parliamentary democracy and that contain markets of some sort. He has said:
The End of History was never linked to a specifically American model of social or political organization. Following Alexandre Kojève, the Russian-French philosopher who inspired my original argument, I believe that the European Union more accurately reflects what the world will look like at the end of history than the contemporary United States. The EU's attempt to transcend sovereignty and traditional power politics by establishing a transnational rule of law is much more in line with a "post-historical" world than the Americans' continuing belief in God, national sovereignty, and their military.
Arguments in favor
An argument in favor of Fukuyama's thesis is the democratic peace theory, which argues that mature democracies rarely or never go to war with one another. This theory has faced criticism, with arguments largely resting on conflicting definitions of "war" and "mature democracy". Part of the difficulty in assessing the theory is that democracy as a widespread global phenomenon emerged only very recently in human history, which makes generalizing about it difficult. (See also list of wars between democracies.)
Other major empirical evidence includes the elimination of interstate warfare in South America, Southeast Asia, and Eastern Europe among countries that moved from military dictatorship to liberal democracies.
According to several studies, the end of the Cold War and the subsequent increase in the number of liberal democratic states were accompanied by a sudden and dramatic decline in total warfare, interstate wars, ethnic wars, revolutionary wars, and the number of refugees and displaced persons.
Criticisms
Jacques Derrida
In Specters of Marx: The State of the Debt, the Work of Mourning and the New International (1993), Jacques Derrida criticized Fukuyama as a "come-lately reader" of the philosopher-statesman Alexandre Kojève (1902–1968), who, "in the tradition of Leo Strauss" (1899–1973), had already described the society of the U.S. in the 1950s as the "realization of communism". Derrida further argued that the public-intellectual celebrity of Fukuyama and the mainstream popularity of his book, The End of History and the Last Man, were symptoms of right-wing, cultural anxiety about ensuring the "Death of Marx".
Criticising Fukuyama's celebration of the economic and cultural hegemony of Western liberalism, Derrida said: "This end of History is essentially a Christian eschatology. It is consonant with the current discourse of the Pope on the European Community: Destined to become [either] a Christian State or [a] Super-State; [but] this community would still belong, therefore, to some Holy Alliance". Derrida charged that Fukuyama practised an intellectual "sleight-of-hand trick", using empirical data whenever it suited his message and appealing to an abstract ideal whenever the empirical data contradicted his end-of-history thesis. On Derrida's reading, Fukuyama sees the United States and the European Union as imperfect political entities when compared with the ideals of liberal democracy and of the free market, and he understands that such abstractions can never be demonstrated empirically, because they are philosophical and religious abstractions originating in Hegel's philosophy; yet Fukuyama still uses empirical observations, which he himself admits are imperfect and incomplete, to validate an end-of-history thesis that remains an abstraction.
Radical Islam, tribalism, and the "Clash of Civilizations"
Various Western commentators have described the thesis of The End of History as flawed because it does not sufficiently take into account the power of ethnic loyalties and religious fundamentalism as a counter-force to the spread of liberal democracy, with the specific example of Islamic fundamentalism, or radical Islam, as the most powerful of these.
Benjamin Barber wrote a 1992 article and a 1995 book, Jihad vs. McWorld, that addressed this theme. Barber described "McWorld" as a secular, liberal, corporate-friendly transformation of the world and used the word "jihad" to refer to the competing forces of tribalism and religious fundamentalism, with a special emphasis on Islamic fundamentalism.
Samuel P. Huntington wrote a 1993 essay, "The Clash of Civilizations", in direct response to The End of History; he then expanded the essay into the 1996 book The Clash of Civilizations and the Remaking of World Order. In the essay and the book, Huntington argued that the temporary conflict between ideologies was being replaced by the ancient conflict between civilizations; the dominant civilization determines the form of human government, and such forms will not be constant. He especially singled out Islam, which he described as having "bloody borders".
After the September 11 attacks, The End of History was cited by some commentators as a symbol of the supposed naiveté and undue optimism of the Western world during the 1990s, in thinking that the end of the Cold War also represented the end of major global conflict. In the weeks after the attacks, Fareed Zakaria called the events "the end of the end of history", while George Will wrote that history had "returned from vacation".
Fukuyama did discuss radical Islam briefly in The End of History. He argued that Islam is not an imperialist force like Stalinism and fascism; that is, it has little intellectual or emotional appeal outside the Islamic "heartlands". Fukuyama pointed to the economic and political difficulties that Iran and Saudi Arabia face and argued that such states are fundamentally unstable: either they will become democracies with a Muslim society (like Turkey) or they will simply disintegrate. Moreover, when Islamic states have actually been created, they have been easily dominated by the powerful Western states.
In October 2001, Fukuyama, in a Wall Street Journal opinion piece, responded to criticism of his thesis after the September 11 attacks, saying, "I believe that in the end I remain right". He explained that what he meant by "End of History" was the evolution of the human political system toward that of the "liberal-democratic West". He also noted that his original thesis "does not imply a world free from conflict, nor the disappearance of culture as a distinguishing characteristic of societies".
The resurgence of Russia and China
Another challenge to the "end of history" thesis is the growth in the economic and political power of two countries, Russia and China. China has a one-party state government, while Russia, though formally a democracy, is often described as an autocracy; it is categorized as an anocracy in the Polity data series.
Azar Gat, Professor of National Security at Tel Aviv University, argued this point in his 2007 Foreign Affairs article, "The Return of Authoritarian Great Powers", stating that the success of these two countries could "end the end of history". Gat also discussed radical Islam, but stated that the movements associated with it "represent no viable alternative to modernity and pose no significant military threat to the developed world". He considered the challenge of China and Russia to be the major threat, since they could pose a viable rival model which could inspire other states.
This view was echoed by Robert Kagan in his 2008 book, The Return of History and the End of Dreams, whose title was a deliberate rejoinder to The End of History.
In a 2008 Washington Post opinion piece, Fukuyama also addressed this point. He wrote, "Despite recent authoritarian advances, liberal democracy remains the strongest, most broadly appealing idea out there. Most autocrats, including Putin and Chávez, still feel that they have to conform to the outward rituals of democracy even as they gut its substance. Even China's Hu Jintao felt compelled to talk about democracy in the run-up to Beijing's Olympic Games."
His "ultimate nightmare", he said in March 2022, is a world in which China supports Russia's invasion of Ukraine and Russia supports a Chinese invasion of Taiwan. If that were to happen, and be successful, Fukuyama said, "then you would really be living in a world that was being dominated by these non-democratic powers. If the United States and the rest of the West couldn't stop that from happening, then that really is the end of the end of history."
Failure of civil society and political decay
In 2014, on the occasion of the 25th anniversary of the publication of the original essay, "The End of History?", Fukuyama wrote a column in The Wall Street Journal again updating his hypothesis. He wrote that, while liberal democracy still had no real competition from more authoritarian systems of government "in the realm of ideas", nevertheless he was less idealistic than he had been "during the heady days of 1989". Fukuyama noted the Orange Revolution in Ukraine and the Arab Spring, both of which seemed to have failed in their pro-democracy goals, as well as the "backsliding" of democracy in countries including Thailand, Turkey and Nicaragua. He stated that the biggest problem for the democratically elected governments in some countries was not ideological but "their failure to provide the substance of what people want from government: personal security, shared economic growth and the basic public services ... that are needed to achieve individual opportunity." Though he believed that economic growth, improved government and civic institutions all reinforced one another, he wrote that it was not inevitable that "all countries will ... get on that escalator".
Fukuyama also warned of "political decay", which he wrote could also affect established democracies like the United States, in which corruption and crony capitalism erode liberty and economic opportunity. Nevertheless, he expressed his continued belief that "the power of the democratic ideal remains immense".
Following the United Kingdom's decision to leave the European Union and the election of Donald Trump as President of the United States in 2016, Fukuyama feared for the future of liberal democracy in the face of resurgent populism, and the rise of a "post-fact world", saying that "twenty five years ago, I didn't have a sense or a theory about how democracies can go backward. And I think they clearly can." He warned that America's political rot was infecting the world order to the point where it "could be as big as the Soviet collapse". Fukuyama also highlighted Russia's interference in the Brexit referendum and 2016 U.S. elections.
Posthuman future
Fukuyama has also stated that his thesis was incomplete, but for a different reason: "there can be no end of history without an end of modern natural science and technology" (quoted from Our Posthuman Future). Fukuyama predicts that humanity's control of its own evolution will have a great and possibly terrible effect on liberal democracy.
Split between democracy and capitalism
Slovenian philosopher Slavoj Žižek argues that Fukuyama's idea that we have reached the end of history is not wholly true. Žižek points out that liberal democracy is linked to capitalism; the success of capitalism in authoritarian nations like China and Singapore, however, shows that this link has been broken. Problems caused by the success of capitalism and neoliberal policies, such as growing wealth inequality and environmental hazards, have manifested in many countries as unrest directed at elected governments. As a result, liberal democracy has struggled to cope with many of the problems created by the free-market economy, and many nations have seen a decline in the quality of their democracy.
Publication history
Free Press, 1992, hardcover
Perennial, 1993, paperback
See also
Democratic peace theory
End of history
Last Man
Post-truth politics
Sociocultural evolution
Thumos
The Clash of Civilizations
Whig history
Notes
References
Morton Halperin, Joanne J. Myers, Joseph T. Siegle, Michael M. Weinstein. (2005-03-17). The Democracy Advantage: How Democracies Promote Prosperity and Peace. Carnegie Council for Ethics in International Affairs. Retrieved 2008-06-18.
Potter, Robert (2011), 'Recalcitrant Interdependence', Thesis, Flinders University
Mahbubani, Kishore (2008). The New Asian Hemisphere: The Irresistible Shift of Global Power to the East. New York: PublicAffairs
External links
"The End of History?" essay by Francis Fukuyama - published in "The National Interest" journal Summer 1989
Islam and America... Friends or Foes?
Introduction to Text
Booknotes interview with Fukuyama on The End of History and the Last Man, February 9, 1992
The End of the End of History
1992 non-fiction books
20th century
American exceptionalism
Books about civilizations
Books about cultural geography
Books about globalization
Books about the West
Books in political philosophy
Books about capitalism
Democracy
English-language books
Free Press (publisher) books
Postmodernism
Works about the theory of history
Universal history books
Works by Francis Fukuyama
Middle class
The middle class refers to a class of people in the middle of a social hierarchy, often defined by occupation, income, education, or social status. The term has historically been associated with modernity, capitalism and political debate. Common definitions for the middle class range from the middle fifth of individuals on a nation's income ladder to everyone but the poorest and wealthiest 20%. Theories like "Paradox of Interest" use decile groups and wealth distribution data to determine the size and wealth share of the middle class.
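To make such decile-based definitions concrete, the following minimal Python sketch computes the wealth share of the middle class under two common cutoffs; the decile wealth shares are hypothetical figures for illustration, not data from any study cited in this article.

# Hypothetical wealth shares (% of total) for deciles 1 (poorest) to 10 (richest).
decile_wealth_share = [0.5, 1.0, 1.5, 2.5, 4.0, 6.0, 9.0, 13.0, 20.5, 42.0]

# Definition A: the middle fifth of the distribution (deciles 5 and 6).
middle_fifth_share = sum(decile_wealth_share[4:6])

# Definition B: everyone but the poorest and wealthiest 20% (deciles 3 to 8).
broad_middle_share = sum(decile_wealth_share[2:8])

print(f"Middle fifth holds {middle_fifth_share:.1f}% of total wealth")  # 10.0%
print(f"Deciles 3-8 hold {broad_middle_share:.1f}% of total wealth")    # 36.0%

Under definition A the middle class is 20% of individuals by construction; under definition B it is 60%, which is one reason estimates of middle-class size and wealth share vary so widely with the chosen cutoff.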
Terminology differs in the United States, where the term middle class describes people who in other countries would be described as working class.
There has been significant global middle-class growth over time. In February 2009, The Economist asserted that over half of the world's population belonged to the middle class, as a result of rapid growth in emerging countries. It characterized the middle class as having a reasonable amount of discretionary income and defined it as beginning at the point where people have roughly a third of their income left for discretionary spending after paying for basic food and shelter.
History and evolution of the term
The term "middle class" is first attested in James Bradshaw's 1745 pamphlet Scheme to prevent running Irish Wools to France. Another phrase used in early modern Europe was "the middling sort".
The term "middle class" has had several and sometimes contradictory meanings. Friedrich Engels saw the category as an intermediate social class between the nobility and the peasantry in late-feudalist society. While the nobility owned much of the countryside, and the peasantry worked it, a new bourgeoisie (literally "town-dwellers") arose around mercantile functions in the city. In France, the middle classes helped drive the French Revolution. This "middle class" eventually overthrew the ruling monarchists of feudal society, thus becoming the new ruling class or bourgeoisie in the new capitalist-dominated societies.
The modern usage of the term "middle class", however, dates to the 1913 UK Registrar-General's report, in which the statistician T. H. C. Stevenson identified the middle class as those falling between the upper class and the working class. The middle class includes professionals, managers, and senior civil servants. The chief defining characteristic of membership in the middle class is control of significant human capital while still being under the dominion of the elite upper class, who control much of the financial and legal capital in the world.
Within capitalism, "middle class" initially referred to the bourgeoisie; later, with the further differentiation of classes as capitalist societies developed, the term came to be synonymous with the term petite bourgeoisie. The boom-and-bust cycles of capitalist economies result in the periodic (and more or less temporary) impoverishment and proletarianization of much of the petite bourgeois world, resulting in their moving back and forth between working-class and petite-bourgeois status. Typical modern definitions of "middle class" tend to ignore the fact that the classical petite bourgeoisie is, and has always been, the owner of a small- to medium-sized business whose income is derived almost exclusively from the employment of workers; instead, "middle class" came to refer to the combination of the labour aristocracy, professionals, and salaried white-collar workers.
The size of the middle class depends on how it is defined, whether by education, wealth, environment of upbringing, social network, manners or values, etc. These are all related, but are far from deterministically dependent. The following factors are often ascribed to the "middle class" in the literature on this topic:
Achievement of tertiary education.
Holding professional qualifications, including academics, lawyers, chartered engineers, politicians, and doctors, regardless of leisure or wealth.
Belief in bourgeois values, such as high rates of house ownership, delayed gratification, and jobs that are perceived to be secure.
Lifestyle. In the United Kingdom, social status has historically been linked less directly to wealth than in the United States, and has also been judged by such characteristics as accent (Received Pronunciation and U and non-U English), manners, type of school attended (state or private school), occupation, and the class of a person's family, circle of friends and acquaintances.
In the United States, by the end of the twentieth century, more people identified themselves as middle class than as working class (with insignificant numbers identifying themselves as upper class). The Labour Party in the UK, which grew out of the organised labour movement and originally drew almost all of its support from the working class, reinvented itself under Tony Blair in the 1990s as "New Labour", a party competing with the Conservative Party for the votes of the middle class as well as those of the Labour Party's traditional group of voters – the working class. Around 40% of British people consider themselves to be middle class, and this number has remained relatively stable over the last few decades.
According to the OECD, the middle class refers to households with income between 75% and 200% of the median national income.
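The OECD band translates directly into a simple classification rule. Here is a minimal sketch in Python, using made-up household incomes; the statistics module is part of the standard library.

import statistics

# Hypothetical annual household incomes.
incomes = [12_000, 18_000, 25_000, 31_000, 38_000, 45_000, 60_000, 150_000]

median = statistics.median(incomes)
lower, upper = 0.75 * median, 2.0 * median  # OECD band: 75%-200% of the median

middle_class = [x for x in incomes if lower <= x <= upper]
print(f"median = {median}, band = ({lower}, {upper})")
print(f"middle-class share = {len(middle_class) / len(incomes):.0%}")

With these invented figures the median is 34,500, the band runs from 25,875 to 69,000, and four of the eight households (50%) fall inside it. Because the band is tied to the national median, the same rule yields very different absolute income thresholds in different countries.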
Marxism
Marxism defines social classes according to their relationship with the means of production. In the Marxist analysis, the main bases of class division are the possession of the means of production, the role and position a class occupies in the social organization of labor (the production process), and its share in the distribution of wealth and resources. Marxist writers have used the term middle class in various ways. In the first sense, it is used for the bourgeoisie (the urban merchant and professional class) that arose between the aristocracy and the proletariat in the waning years of feudalism in the Marxist model. Vladimir Lenin stated that the "peasantry ... in Russia constitute eight- or nine-tenths of the petty bourgeoisie". In modern developed countries, however, Marxist writers define the "middle class" as the stratum standing between the ruling capitalist "owners of the means of production" and the working class (whose income is derived solely from hourly wages). So defined, it comprises the petite bourgeoisie – primarily, as the name implies, owners of small to medium-sized businesses, who derive their income from the exploitation of wage-laborers and who are in turn exploited by the "big" bourgeoisie (bankers, owners of large corporate trusts, etc.) – as well as the highly educated professional class of doctors, engineers, architects, lawyers, university professors, and the salaried middle management of capitalist enterprises of all sizes.
The pioneering 20th-century American Marxist theoretician Louis C. Fraina (Lewis Corey) defined the middle class as "the class of independent small enterprisers, owners of productive property from which a livelihood is derived". From Fraina's perspective, this social category included "propertied farmers" but not propertyless tenant farmers. The middle class also included salaried managerial and supervisory employees, but not "the masses of propertyless, dependent salaried employees". Fraina speculated that the entire category of salaried employees might be adequately described as a "new middle class" in economic terms, although this remained a social grouping "most of whose members are a new proletariat".
Professional–managerial class
In 1977 Barbara Ehrenreich and John Ehrenreich defined a new class in the United States as "salaried mental workers who do not own the means of production and whose major function in the social division of labor ... [is] ... the reproduction of capitalist culture and capitalist class relations;" the Ehrenreichs named this group the "professional–managerial class". This group of middle-class professionals is distinguished from other social classes by their training and education (typically business qualifications and university degrees), with example occupations including academics and teachers, social workers, engineers, accountants, managers, nurses, and middle-level administrators. The Ehrenreichs developed their definition from studies by André Gorz, Serge Mallet, and others, of a "new working class," which, despite education and a perception of themselves as middle class, were part of the working class because they did not own the means of production, and were wage earners paid to produce a piece of capital. The professional–managerial class seek higher rank status and salaries and tend to have incomes above the average for their country.
Recent global growth
Modern definitions of the term "middle class" are often politically motivated and vary according to the exigencies of the political purpose they were conceived to serve in the first place. They also vary because of the multiplicity of more or less scientific methods used to measure and compare wealth between modern advanced industrial states, where poverty is relatively low and the distribution of wealth more egalitarian in a relative sense, and developing countries, where poverty and a profoundly unequal distribution of wealth crush the vast majority of the population. Many of these methods of comparison have been harshly criticised; for example, economist Thomas Piketty, in his book Capital in the Twenty-First Century, describes one of the most commonly used comparative measures of wealth across the globe, the Gini coefficient, as an example of "synthetic indices ... which mix very different things, such as inequality with respect to labor and capital, so that it is impossible to distinguish clearly among the multiple dimensions of inequality and the various mechanisms at work".
In February 2009, The Economist asserted that over half the world's population now belongs to the middle class, as a result of rapid growth in emerging countries. It characterized the middle class as having a reasonable amount of discretionary income, so that they do not live from hand-to-mouth as the poor do, and defined it as beginning at the point where people have roughly a third of their income left for discretionary spending after paying for basic food and shelter. This allows people to buy consumer goods, improve their health care, and provide for their children's education. Most of the emerging middle class consists of people who are middle class by the standards of the developing world but not the developed one, since their money incomes do not match developed country levels, but the percentage of it which is discretionary does. By this definition, the number of middle-class people in Asia exceeded that in the West sometime around 2007 or 2008.
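The Economist's rule of thumb can be stated as a simple check on the discretionary share of income. The sketch below is one illustrative reading of that definition in Python; the income and cost figures are invented.

def is_middle_class(income, basic_costs):
    # The Economist's rule of thumb: the middle class begins roughly where
    # a third of income is left for discretionary spending after paying
    # for basic food and shelter.
    discretionary_share = (income - basic_costs) / income
    return discretionary_share >= 1 / 3

# Hypothetical household: monthly income of 300, of which 180 goes to food and shelter.
print(is_middle_class(300, 180))  # (300 - 180) / 300 = 0.40 -> True

Note that the rule classifies by the proportion of income that is discretionary rather than by an absolute income level, which is why households can count as middle class by this standard while remaining poor by developed-country income thresholds.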
The Economist article pointed out that in many emerging countries, the middle class has not grown incrementally but explosively. The point at which the poor start entering the middle class by the millions is alleged to be the time when poor countries get the maximum benefit from cheap labour through international trade, before they price themselves out of world markets for cheap goods. It is also a period of rapid urbanization, when subsistence farmers abandon marginal farms to work in factories, resulting in a several-fold increase in their economic productivity before their wages catch up to international levels. That stage was reached in China sometime between 1990 and 2005, when the Chinese "middle class" grew from 15% to 62% of the population, and is only now being reached in India.
The Economist predicted that the surge across the poverty line would continue for a couple of decades and that the global middle class would grow exponentially between then and 2030. Based on this rapid growth, scholars expect the global middle class to be the driving force for sustainable development. This assumption, however, is contested (see below).
As the American middle class is estimated by some researchers to comprise approximately 45% of the population, The Economist article would put the size of the American middle class below the world average. This difference is due to the extreme difference in definitions between The Economist and many other models.
In 2010, a working paper by the OECD asserted that 1.8 billion people were now members of the global "middle class". Credit Suisse's Global Wealth Report 2014, released in October 2014, estimated that one billion adults belonged to the "middle class", with wealth in the range of $10,000 to $100,000.
According to a study carried out by the Pew Research Center, a combined 16% of the world's population in 2011 were "upper-middle income" and "upper income".
An April 2019 OECD report said that the millennial generation is being pushed out of the middle class throughout the Western world.
China
Since the beginning of the 21st century, China's middle class has grown by significant margins. According to the Center for Strategic and International Studies, by 2013 some 420 million people, or 31% of the Chinese population, qualified as middle class. Based on the World Bank definition of the middle class as those spending between $10 and $50 per day, nearly 40% of the Chinese population were considered middle class as of 2017.
China's middle class represents a significant market force with effects extending beyond its borders. The promise of increased prosperity and a consumer society has been a driving force behind this growth, while also suggesting the potential for political changes similar to those seen in Europe and North America. However, the Chinese middle class differs substantially from its Western counterparts. Despite its growth, it remains a relatively small group that is closely bound up with the Chinese state. This relationship challenges assumptions about its role in political change: criticisms from this demographic often center on improving efficiency and social justice within the existing political framework, rather than advocating for regime change.
India
Estimates vary widely on the number of middle-class people in India. A 1983 article put the Indian middle class at somewhere between 70 and 100 million. According to one study, the middle class in India stood at between 60 and 80 million by 1990. According to The Economist, 78 million of India's population were considered middle class as of 2017, if defined using the cutoff of those making more than $10 per day, a standard used by India's National Council of Applied Economic Research. If those with incomes between $2 and $10 per day are included, the number increases to 604 million; researchers have termed this group the "new middle class". Measures considered include geography, lifestyle, income, and education. The 2018 World Inequality Report further concluded that elites (i.e., the top 10%) are accumulating wealth at a greater rate than the middle class, and that rather than growing, India's middle class may be shrinking in size.
Africa
According to a 2014 study by Standard Bank economist Simon Freemantle, a total of 15.3 million households in 11 surveyed nations of Africa are middle class: Angola, Ethiopia, Ghana, Kenya, Mozambique, Nigeria, South Sudan, Sudan, Tanzania, Uganda, and Zambia. In South Africa, a report conducted by the Institute for Race Relations in 2015 estimated that between 10% and 20% of South Africans are middle class, based on various criteria. An earlier study estimated that in 2008 21.3% of South Africans were members of the middle class.
A study by EIU Canback indicates 90% of Africans fall below an income of $10 a day. The proportion of Africans in the $10–$20 middle class (excluding South Africa), rose from 4.4% to only 6.2% between 2004 and 2014. Over the same period, the proportion of "upper middle class" income ($20–$50 a day) went from 1.4% to 2.3%.
According to a 2014 study by the German Development Institute, the middle class of Sub-Saharan Africa rose from 14 million to 31 million people between 1990 and 2010.
Latin America
Over the years estimates on the size of the Latin America's middle class have varied. A 1960 study stated that the middle strata in Latin America as a whole, exclusive of Indians, constituted just under 20% of the national society. A 1964 study estimated that 45 million Latin Americans belonged to the urban middle class while 15 million to the urban well-to-do and 8 million to the rural middle class and well-to-do. In Brazil, according to one estimate, in 1970 the lower middle class comprised 12% of the population, while the upper middle class comprised 3%. In the mid-1970s it was estimated by one authority that the Brazilian middle class comprised between 15 and 25% of the population. A 1969 economic survey estimated that 15% of Brazilians belonged to the middle class. By 1970, according to one study, the middle class of Argentina comprised 38% of the economically active population, compared with 19% in Brazil and 24% in Mexico. A 1975 study on Mexico estimated that the middle class in 1968 (defined as families earning between 2,000 and 5,000 pesos annually) comprised 36.4% of the population, while the upper class (defined as families earning over 5,000 pesos annually) comprised 9.4% of the population and the lower class (defined as families earning less than 2,000 pesos annually) comprised 53.9% of the population.
According to a study by the World Bank, the number of Latin Americans who are middle class rose from 103 million to 152 million between 2003 and 2009.
Russia
In 2012, the "middle class" in Russia was estimated at 15% of the whole population; owing to sustained growth, this exceeded the pre-crisis level. In 2015, research from the Russian Academy of Sciences estimated that around 15% of the Russian population were "firmly middle class", while around another 25% were "on the periphery".
See also
References
Further reading
Balzer, Harley D., ed. Russia's Missing Middle Class: The Professions in Russian History (M.E. Sharpe, 1996).
Blackbourn, David, and Richard J. Evans, eds. The German Bourgeoisie: Essays on the Social History of the German Middle Class from the Late Eighteenth to the Early Twentieth Century (1991).
Cashell, Brian W. Who Are the "Middle Class"?, CRS Report for the Congress, 20 March 2007
Dejung, Christof, David Motadel, and Jürgen Osterhammel, eds. The Global Bourgeoisie: The Rise of the Middle Classes in the Age of Empire (2019); scholarly essays covering major countries and regions in the 19th century.
Jones, Larry Eugene. "'The Dying Middle': Weimar Germany and the Fragmentation of Bourgeois Politics." Central European History 5.1 (1972): 23–54.
Kocka, Jürgen. "The Middle Classes in Europe," Journal of Modern History 67#4 (1995): 783–806. doi.org/10.1086/245228.
Kocka, Jürgen, and J. Allan Mitchell, eds. Bourgeois Society in 19th Century Europe (1992)
Lebovics, Herman. Social Conservatism and the Middle Class in Germany, 1914–1933 (Princeton UP, 2015).
López, A. Ricardo, and Barbara Weinstein, eds. The Making of the Middle Class: Toward a Transnational History (Duke University Press, 2012) 446 pp. scholarly essays
McKibbin, Ross. Classes and Cultures: England 1918–1951 (2000), pp. 44–105.
Mills, C. Wright, White Collar: The American Middle Classes (1951).
Pilbeam, Pamela. The Middle Classes in Europe, 1789–1914: France, Germany, Italy, and Russia (1990)
Wells, Jonathan Daniel. "The Southern Middle Class," Journal of Southern History 75#3 (2009), pp. 651+.
Williams, E. N. "'Our Merchants Are Princes': The English Middle Classes in the Eighteenth Century". History Today (Aug 1962), Vol. 12, Issue 8, pp. 548–557.
External links
Beazley reaches out to 'middle Australia'
NOW on PBS: "Middle Class Insecurity" Are politicians listening to middle-class families on the edge of economic collapse?
Contains estimates on the size of the middle class in various countries
Contains estimates on the size of the middle class in Latin America and other countries
Contains estimates on the size of the middle class in Africa, based on various definitions
1740s neologisms
Social classes
Early Dynastic Period (Mesopotamia)
The Early Dynastic period (abbreviated ED period or ED) is an archaeological culture in Mesopotamia (modern-day Iraq) that is generally dated to c. 2900–2350 BC and was preceded by the Uruk and Jemdet Nasr periods. It saw the development of writing and the formation of the first cities and states. The ED itself was characterized by the existence of multiple city-states: small states with a relatively simple structure that developed and solidified over time. This development ultimately led to the unification of much of Mesopotamia under the rule of Sargon, the first monarch of the Akkadian Empire. Despite this political fragmentation, the ED city-states shared a relatively homogeneous material culture. Sumerian cities such as Uruk, Ur, Lagash, Umma, and Nippur located in Lower Mesopotamia were very powerful and influential. To the north and west stretched states centered on cities such as Kish, Mari, Nagar, and Ebla.
The study of Central and Lower Mesopotamia has long been given priority over neighboring regions. Archaeological sites in Central and Lower Mesopotamia—notably Girsu but also Eshnunna, Khafajah, Ur, and many others—have been excavated since the 19th century. These excavations have yielded cuneiform texts and many other important artifacts. As a result, this area was better known than neighboring regions, but the excavation and publication of the archives of Ebla have changed this perspective by shedding more light on surrounding areas, such as Upper Mesopotamia, western Syria, and southwestern Iran. These new findings revealed that Lower Mesopotamia shared many socio-cultural developments with neighboring areas and that the entirety of the ancient Near East participated in an exchange network in which material goods and ideas were being circulated.
History of research
Dutch archaeologist Henri Frankfort coined the term Early Dynastic (ED) period for Mesopotamia, the naming convention having been borrowed from the similarly named Early Dynastic (ED) period for Egypt. The periodization was developed in the 1930s during excavations that were conducted by Henri Frankfort on behalf of the University of Chicago Oriental Institute at the archaeological sites of Tell Khafajah, Tell Agrab, and Tell Asmar in the Diyala Region of Iraq.
The ED was divided into the sub-periods ED I, II, and III. This was primarily based on complete changes over time in the plan of the Abu Temple of Tell Asmar, which had been rebuilt multiple times on exactly the same spot. During the 20th century, many archaeologists also tried to impose the scheme of ED I–III upon archaeological remains excavated elsewhere in both Iraq and Syria, dated to 3000–2000 BC. However, evidence from sites elsewhere in Iraq has shown that the ED I–III periodization, as reconstructed for the Diyala river valley region, could not be directly applied to other regions.
Research in Syria has shown that developments there were quite different from those in the Diyala river valley region or southern Iraq, rendering the traditional Lower Mesopotamian chronology useless. During the 1990s and 2000s, attempts were made by various scholars to arrive at a local Upper Mesopotamian chronology, resulting in the Early Jezirah (EJ) 0–V chronology that encompasses everything from 3000 to 2000 BC. The use of the ED I–III chronology is now generally limited to Lower Mesopotamia, with the ED II sometimes being further restricted to the Diyala river valley region or discredited altogether.
Periodization
The ED was preceded by the Jemdet Nasr period and succeeded by the Akkadian period, during which, for the first time in history, large parts of Mesopotamia were united under a single ruler. The entirety of the ED is now generally dated to approximately 2900–2350 BC according to the widely accepted middle chronology, or 2800–2230 BC according to the short chronology, which is increasingly less accepted by scholars. The ED was divided into the ED I, ED II, ED IIIa, and ED IIIb sub-periods. ED I–III were more or less contemporary with the Early Jezirah (EJ) I–III in Upper Mesopotamia. The exact dating of the ED sub-periods varies between scholars, with some abandoning ED II and using only Early ED and Late ED instead, and others extending ED I and allowing ED III to begin earlier, so that ED III immediately follows ED I with no gap between the two.
Many historical periods in the Near East are named after the dominant political force at that time, such as the Akkadian or Ur III periods. This is not the case for the ED period. It is an archaeological division that does not reflect political developments, and it is based upon perceived changes in the archaeological record, e.g. pottery and glyptics. This is because the political history of the ED is unknown for most of its duration. As with the archaeological subdivision, the reconstruction of political events is hotly debated among researchers.
The ED I (2900–2750/2700 BC) is poorly known, relative to the sub-periods that followed it. In Lower Mesopotamia, it shared characteristics with the final stretches of the Uruk period (ending c. 3100 BC) and the Jemdet Nasr period (ending c. 2900 BC). ED I is contemporary with the culture of the Scarlet Ware pottery typical of sites along the Diyala in Lower Mesopotamia, the Ninevite V culture in Upper Mesopotamia, and the Proto-Elamite culture in southwestern Iran.
New artistic traditions developed in Lower Mesopotamia during the ED II (2750/2700–2600 BC). These traditions influenced the surrounding regions. According to later Mesopotamian historical tradition, this was the time when legendary mythical kings such as Lugalbanda, Enmerkar, Gilgamesh, and Aga ruled over Mesopotamia. Archaeologically, this sub-period is not well attested in excavations of Lower Mesopotamia, leading some researchers to abandon it altogether.
The ED III (2600–2350 BC) saw an expansion in the use of writing and increasing social inequality. Larger political entities developed in Upper Mesopotamia and southwestern Iran. ED III is usually further subdivided into the ED IIIa (2600–2500/2450 BC) and ED IIIb (2500/2450–2350 BC). The Royal Cemetery at Ur and the archives of Fara and Abu Salabikh date back to ED IIIa. The ED IIIb is especially well known through the archives of Girsu (part of Lagash) in Iraq and Ebla in Syria.
The end of the ED is defined not archaeologically but politically. The conquests of Sargon and his successors upset the political equilibrium throughout Iraq, Syria, and Iran. The conquests continued for many years, into the reign of Naram-Sin of Akkad, and built on expansion that was already under way during the ED. The transition is much harder to pinpoint within an archaeological context: it is virtually impossible to assign a particular site to either the ED III or the Akkadian period on ceramic or architectural evidence alone.
History
The contemporary sources from the Early Dynastic period do not allow the reconstruction of a political history. Royal inscriptions only offer a glimpse of the military conflicts and relations among the different city-states. Instead, rulers were more interested in glorifying their pious acts, such as the construction and restoration of temples and offerings to the gods.
For the ED I and ED II periods, there are no contemporary documents shedding any light on warfare or diplomacy. Only for the end of the ED III period are contemporary texts available from which a political history can be reconstructed. The largest archives come from Lagash and Ebla. Smaller collections of clay tablets have been found at Ur, Tell Beydar, Tell Fara, Abu Salabikh, and Mari. They show that the Mesopotamian states were constantly involved in diplomatic contacts, leading to political and perhaps even religious alliances. Sometimes one state would gain hegemony over another, which foreshadows the rise of the Akkadian Empire.
The well-known Sumerian King List dates to the early second millennium BC. It consists of a succession of royal dynasties from different Sumerian cities, ranging back into the Early Dynastic period. Each dynasty rises to prominence and dominates the region, only to be replaced by the next. The document was used by later Mesopotamian kings to legitimize their rule. While some of the information in the list can be checked against other texts such as economic documents, much of it is probably fictional, and its use as a historical source for the Early Dynastic period is limited at best.
Diplomacy
There may have been a common or shared cultural identity among the Early Dynastic Sumerian city-states, despite their political fragmentation. This notion was expressed by the terms kalam or ki-engir. Numerous texts and cylinder seals seem to indicate the existence of a league or amphictyony of Sumerian city-states. For example, clay tablets from Ur bear cylinder seal impressions with signs representing other cities. Similar impressions have also been found at Jemdet Nasr, Uruk, and Susa. Some impressions show exactly the same list of cities. It has been suggested that this represented a system in which specific cities were associated with delivering offerings to the major Sumerian temples, similar to the bala system of the Ur III period.
The texts from Shuruppak, dating to ED IIIa, also seem to confirm the existence of a ki-engir league. Member cities of the alliance included Umma, Lagash, Uruk, Nippur, and Adab. Kish may have had a leading position, whereas Shuruppak may have been the administrative center. The members may have assembled in Nippur, but this is uncertain. This alliance seems to have focused on economic and military collaboration, as each city would dispatch soldiers to the league. The primacy of Kish is illustrated by the fact that its ruler Mesilim (c. 2500 BC) acted as arbitrator in a conflict between Lagash and Umma. However, it is not certain whether Kish held this elevated position during the entire period, as the situation seems to have been different during later conflicts between Lagash and Umma. Later, rulers from other cities would use the title 'King of Kish' to strengthen their hegemonic ambitions and possibly also because of the symbolic value of the city.
The texts of this period also reveal the first traces of a wide-ranging diplomatic network. For example, the peace treaty between Entemena of Lagash and Lugal-kinishe-dudu of Uruk, recorded on a clay nail, represents the oldest known agreement of this kind. Tablets from Girsu record reciprocal gifts between the royal court and foreign states. Thus, Baranamtarra, wife of king Lugalanda of Lagash, exchanged gifts with her peers from Adab and even Dilmun.
War
The first recorded war in history took place in Mesopotamia around 2700 BC, during the ED period, between the forces of Sumer and Elam. The Sumerians, under the command of Enmebaragesi, king of Kish, defeated the Elamites and, in the words of the record, "carried away as spoils the weapons of Elam".
It is only for the later parts of the ED period that information on political events becomes available, either as echoes in later writings or from contemporary sources. Writings from the end of the third millennium, including several Sumerian heroic narratives and the Sumerian King List, seem to echo events and military conflicts that may have occurred during the ED II period. For example, the reigns of legendary figures like king Gilgamesh of Uruk and his adversaries Enmebaragesi and Aga of Kish possibly date to ED II. These semi-legendary narratives seem to indicate an age dominated by two major powers: Uruk in Sumer and Kish in the Semitic country. However, the existence of the kings of this "heroic age" remains controversial.
Somewhat reliable information on then-contemporary political events in Mesopotamia is available only for the ED IIIb period. These texts come mainly from Lagash and detail the recurring conflict with Umma over control of irrigated land. The kings of Lagash are absent from the Sumerian King List, as are their rivals, the kings of Umma. This suggests that these states, while powerful in their own time, were later forgotten.
The royal inscriptions from Lagash also mention wars against other Lower Mesopotamian city-states, as well as against kingdoms farther away. Examples of the latter include Mari, Subartu, and Elam. These conflicts show that already at this stage in history there was a trend toward stronger states dominating larger territories. For example, king Eannatum of Lagash was able to defeat Mari and Elam around 2450 BC. Enshakushanna of Uruk seized Kish and imprisoned its king Embi-Ishtar around 2350 BC. Lugal-zage-si, king of Uruk and Umma, was able to seize most of Lower Mesopotamia around 2358 BC. This phase of warring city-states came to an end with the emergence of the Akkadian Empire under the rule of Sargon of Akkad in 2334 BC (middle chronology).
Neighboring areas
The political history of Upper Mesopotamia and Syria is well known from the royal archives recovered at Ebla. Ebla, Mari, and Nagar were the dominant states for this period. The earliest texts indicate that Ebla paid tribute to Mari but was able to reduce it after it won a military victory. Cities like Emar on the Upper Euphrates and Abarsal (location unknown) were vassals of Ebla. Ebla exchanged gifts with Nagar, and a royal marriage was concluded between the daughter of a king of Ebla and the son of his counterpart at Nagar. The archives also contain letters from more distant kingdoms, such as Kish and possibly Hamazi, although it is also possible that there were cities with the same names closer to Ebla. In many ways, the diplomatic interactions in the wider Ancient Near East during this period resemble those from the second millennium BC, which are particularly well known from the Amarna letters.
Recent discoveries
In March 2020, archaeologists announced the discovery of a 5,000-year-old cultic area at the site of Girsu, filled with more than 300 broken ceremonial ceramic cups, bowls, and jars, together with animal bones, associated with ritual processions dedicated to Ningirsu. Among the finds was a duck-shaped bronze figurine with eyes made from bark, thought to be dedicated to Nanshe.
Early Dynastic kingdoms and rulers
The Early Dynastic period was preceded by the Uruk and Jemdet Nasr periods and followed by the rise of the Akkadian Empire.
Geographical context
Lower Mesopotamia
The preceding Uruk period in Lower Mesopotamia saw the appearance of the first cities, early state structures, administrative practices, and writing. These practices continued to be attested during the Early Dynastic period.
The ED period is the first for which it is possible to say something about the ethnic composition of the population of Lower Mesopotamia. This is due to the fact that texts from this period contained sufficient phonetic signs to distinguish separate languages. They also contained personal names, which can potentially be linked to an ethnic identity. The textual evidence suggested that Lower Mesopotamia during the ED period was largely dominated by Sumer and primarily occupied by the Sumerian people, who spoke a non-Semitic language isolate (Sumerian). It is debated whether Sumerian was already in use during the Uruk period.
Textual evidence indicated the existence of a Semitic population in the upper reaches of Lower Mesopotamia. The texts in question contained personal names and words from a Semitic language, identified as Old Akkadian. However, the use of the term Akkadian before the emergence of the Akkadian Empire is problematic, and it has been proposed to refer to this Old Akkadian phase as being of the "Kish civilization" named after Kish (the seemingly most powerful city during the ED period) instead. Political and socioeconomic structures in these two regions also differed, although Sumerian influence is unparalleled during the Early Dynastic period.
Agriculture in Lower Mesopotamia relied on intensive irrigation. Cultivars included barley and date palms in combination with gardens and orchards. Animal husbandry was also practiced, focusing on sheep and goats. This agricultural system was probably the most productive in the entire ancient Near East. It allowed the development of a highly urbanized society. It has been suggested that, in some areas of Sumer, the population of the urban centers during ED III represented three-quarters of the entire population.
The dominant political structure was the city-state in which a large urban center dominated the surrounding rural settlements. The territories of these city-states were in turn delimited by other city-states that were organized along the same principles. The most important centers were Uruk, Ur, Lagash, Adab, and Umma-Gisha. Available texts from this period point to recurring conflicts between neighboring kingdoms, notably between Umma and Lagash.
The situation may have been different further north, where Semitic people seem to have been dominant. In this area, Kish was possibly the center of a large territorial state, competing with other powerful political entities such as Mari and Akshak.
The Diyala River valley is another region for which the ED period is relatively well-known. Along with neighboring areas, this region was home to Scarlet Ware—a type of painted pottery characterized by geometric motifs representing natural and anthropomorphic figures. In the Jebel Hamrin, fortresses such as Tell Gubba and Tell Maddhur were constructed. It has been suggested that these sites were established to protect the main trade route from the Mesopotamian lowlands to the Iranian plateau. The main Early Dynastic sites in this region are Tell Asmar and Khafajah. Their political structure is unknown, but these sites were culturally influenced by the larger cities in the Mesopotamian lowland.
Neighboring regions
Upper Mesopotamia and Central Syria
At the beginning of the third millennium BC, the Ninevite V culture flourished in Upper Mesopotamia and the Middle Euphrates River region. It extended from Yorghan Tepe in the east to the Khabur Triangle in the west. Ninevite V was contemporary with ED I and marked an important step in the urbanization of the region. The period seems to have experienced a phase of decentralization, as reflected by the absence of large monumental buildings and complex administrative systems similar to what had existed at the end of the fourth millennium BC.
Starting in 2700 BC and accelerating after 2500, the main urban sites grew considerably in size and were surrounded by towns and villages that fell inside their political sphere of influence. This indicated that the area was home to many political entities. Many sites in Upper Mesopotamia, including Tell Chuera and Tell Beydar, shared a similar layout: a main tell surrounded by a circular lower town. German archaeologist Max von Oppenheim called them Kranzhügel, or "cup-and-saucer-hills". Among the important sites of this period are Tell Brak (Nagar), Tell Mozan, Tell Leilan, and Chagar Bazar in the Jezirah and Mari on the middle Euphrates.
Urbanization also increased in western Syria, notably in the second half of the third millennium BC. Sites like Tell Banat, Tell Hadidi, Umm el-Marra, Qatna, Ebla, and Al-Rawda developed early state structures, as evidenced by the written documentation of Ebla. Substantial monumental architecture such as palaces, temples, and monumental tombs appeared in this period. There is also evidence for the existence of a rich and powerful local elite.
The two cities of Mari and Ebla dominate the historical record for this region. According to the excavator of Mari, the circular city on the middle Euphrates was founded ex nihilo at the time of the Early Dynastic I period in Lower Mesopotamia. Mari was one of the main cities of the Middle East during this period, and it fought many wars against Ebla during the 24th century BC. The archives of Ebla, capital city of a powerful kingdom during the ED IIIb period, indicated that writing and the state were well-developed, contrary to what had been believed about this area before its discovery. However, few buildings from this period have been excavated at the site of Ebla itself.
The territories of these kingdoms were much larger than in Lower Mesopotamia. Population density, however, was much lower than in the south, where subsistence agriculture and pastoralism were more intensive. Towards the west, agriculture took on more "Mediterranean" aspects: the cultivation of the olive and the grape was very important in Ebla. Sumerian influence was notable in Mari and Ebla. At the same time, these regions with a Semitic population shared characteristics with the Kish civilization while also maintaining their own unique cultural traits.
Iranian Plateau
In southwestern Iran, the first half of the Early Dynastic period corresponded with the Proto-Elamite period. This period was characterized by indigenous art, a script that has not yet been deciphered, and an elaborate metallurgy in the Lorestan region. This culture disappeared toward the middle of the third millennium, to be replaced by a less sedentary way of life. Due to the absence of written evidence and a lack of archaeological excavations targeting this period, the socio-political situation of Proto-Elamite Iran is not well understood. Mesopotamian texts indicate that the Sumerian kings dealt with political entities in this area. For example, legends relating to the kings of Uruk refer to conflicts against Aratta. Aratta has not been identified, but it is believed to have been located somewhere in southwestern Iran.
In the middle third millennium BC, Elam emerged as a powerful political entity in the area of southern Lorestan and northern Khuzestan. Susa (level IV) was a central place in Elam and an important gateway between southwestern Iran and southern Mesopotamia. Hamazi was located in the Zagros Mountains to the north or east of Elam, possibly between the Great Zab and the Diyala River, near Halabja.
This is also the area where the still largely unknown Jiroft culture emerged in the third millennium BC, as evidenced by excavation and looting of archaeological sites. The areas further north and to the east were important participants in the international trade of this period due to the presence of tin (central Iran and the Hindu Kush) and lapis lazuli (Turkmenistan and northern Afghanistan). Settlements such as Tepe Sialk, Tureng Tepe, Tepe Hissar, Namazga-Tepe, Altyndepe, Shahr-e Sukhteh, and Mundigak served as local exchange and production centres but do not seem to have been capitals of larger political entities.
Persian Gulf
The further development of maritime trade in the Persian Gulf led to increased contacts between Lower Mesopotamia and other regions. Starting in the previous period, the area of modern-day Oman, known in ancient texts as Magan, had seen the development of the oasis settlement system. This system relied on irrigation agriculture in areas with perennial springs. Magan owed its good position in the trade network to its copper deposits. These deposits were located in the mountains, notably near Hili, where copper workshops and monumental tombs testifying to the area's affluence have been excavated.
Further to the west was an area called Dilmun, which in later periods corresponds to what is today known as Bahrain. However, while Dilmun is mentioned in contemporary ED texts, no sites from this period have been excavated in this area. This may indicate that Dilmun referred to the coastal areas that served as places of transit for the maritime trade network.
Indus valley
The maritime trade in the Gulf extended as far east as the Indian subcontinent, where the Indus Valley civilisation flourished. This trade intensified during the third millennium and reached its peak during the Akkadian and Ur III periods.
The artifacts found in the royal tombs of the First Dynasty of Ur indicate that foreign trade was particularly active during this period, with many materials coming from foreign lands: carnelian likely coming from the Indus or Iran, lapis lazuli from Afghanistan, silver from Turkey, copper from Oman, and gold from several locations such as Egypt, Nubia, Turkey, or Iran. Carnelian beads from the Indus were found in Ur tombs dating to 2600–2450 BC, in an example of Indus–Mesopotamia relations. In particular, carnelian beads with an etched design in white were probably imported from the Indus Valley and made according to a technique developed by the Harappans. These materials were used in the manufacture of ornamental and ceremonial objects in the workshops of Ur.
The First Dynasty of Ur had enormous wealth, as shown by the lavishness of its tombs. This was probably because Ur acted as the main harbour for trade with India, which put it in a strategic position to import and trade vast quantities of gold, carnelian, and lapis lazuli. In comparison, the burials of the kings of Kish were much less lavish. High-prowed Sumerian ships may have traveled as far as Meluhha, thought to be the Indus region, for trade.
Government and economy
Administration
Each city was centered around a temple that was dedicated to a particular patron deity. A city was governed by a "lugal" (king), an "ensi" (priest), or both. It was understood that rulers were determined by the deity of the city, and rule could be transferred from one city to another. Hegemony, conferred by the Nippur priesthood, moved between competing dynasties of the Sumerian cities. Traditionally, these included Eridu, Bad-tibira, Larsa, Sippar, Shuruppak, Kish, Uruk, Ur, Adab, and Akshak. Other relevant cities from outside the Tigris–Euphrates river system included Hamazi, Awan (in present-day Iran), and Mari (in present-day Syria, which the Sumerian King List credits as having "exercised kingship" during the ED II period).
Thorkild Jacobsen defined a "primitive democracy" with reference to Sumerian epics, myths, and historical records. He described a form of government determined by a majority of men who were free citizens. There was little specialisation and only a loose power structure. Kings such as Gilgamesh of the first dynasty of Uruk did not yet rule as autocrats. Rather, they governed together with councils of elders and councils of younger men, who were likely free men bearing arms. Kings would consult the councils on all major decisions, including whether to go to war. Jacobsen's definition of a democracy as a relationship between primitive monarchs and men of the noble classes has been questioned. Jacobsen conceded that the available evidence could not distinguish a "Mesopotamian democracy" from a "primitive oligarchy".
"Lugal" (Sumerian: 𒈗, a Sumerogram ligature of two signs: "𒃲" meaning "big" or "great" and "𒇽" meaning "man") (a Sumerian language title translated into English as either "king" or "ruler") was one of three possible titles affixed to a ruler of a Sumerian city-state. The others were "EN" and "ensi".
The sign for "lugal" became the understood logograph for "king" in general. In the Sumerian language, "lugal" meant either an "owner" of property such as a boat or a field, or alternatively, the "head" of an entity or a family. The cuneiform sign for "lugal" serves as a determinative in cuneiform texts, indicating that the following word would be the name of a king.
The definition of "lugal" during the ED period of Mesopotamia is uncertain. The ruler of a city-state was usually referred to as "ensi". However, the ruler of a confederacy may have been referred to as "lugal". A lugal may have been "a young man of outstanding qualities from a rich landowning family".
Jacobsen made a distinction between a "lugal" as an elected war leader and "EN" as an elected governor concerned with internal issues. The functions of a lugal might include military defense, arbitration in border disputes, and ceremonial and ritualistic activities. At the death of the lugal, he was succeeded by his eldest son. The earliest rulers with the title "lugal" include Enmebaragesi and Mesilim of Kish and Meskalamdug, Mesannepada, and several of Mesannepada's successors at Ur.
"Ensi" (Sumerian: 𒑐𒋼𒋛, meaning "Lord of the Plowland") was a title associated with the ruler or prince of a city. The people understood that the ensi was a direct representative of the city's patron deity. Initially, the term "ensi" may have been specifically associated with rulers of Lagash and Umma. However, in Lagesh, "lugal" sometimes referred to the city's patron deity, "Ningirsu". In later periods, the title "ensi" presupposed subordinance to a "lugal".
"EN" (Sumerian: 𒂗; Sumerian cuneiform for "lord" or "priest") referred to a high priest or priestess of the city's patron deity. It may also have been part of the title of the ruler of Uruk. "Ensi", "EN", and "Lugal" may have been local terms for the ruler of Lagash, Uruk, and Ur, respectively.
Temples
The centers of Eridu and Uruk, two of the earliest cities, developed large temple complexes built of mud-brick. Having begun as small shrines in the earliest settlements, by the ED period the temples had become the most imposing structures in their cities, each dedicated to its own deity.
Each city had at least one major deity. During the ED, Sumer was divided into about thirteen independent cities, demarcated from one another by canals and boundary stones.
Population
Uruk, which was one of Sumer's largest cities, has been estimated to have had a population of 50,000–80,000 at its peak. Given the other cities in Sumer and its large agricultural population, a rough estimate for Sumer's population might have been somewhere between 800,000 and 1,500,000. The global human population at this time has been estimated to have been about 27,000,000.
Law
Code of Urukagina
The ensi Urukagina, of the city-state of Lagash, is best known for his reforms to combat corruption, and the Code of Urukagina is sometimes cited as the earliest known example of a legal code in recorded history. The Code of Urukagina has also been widely hailed as the first recorded example of government reform, as it sought to achieve a higher level of freedom and equality. Although the actual Code of Urukagina text has yet to be discovered, much of its content may be surmised from other references to it that have been found. In the Code of Urukagina, Urukagina exempted widows and orphans from taxes, compelled the city to pay funeral expenses (including the ritual food and drink libations for the journey of the dead into the lower world), and decreed that the rich had to use silver when purchasing from the poor. If the poor did not wish to sell, the powerful man (the rich man or the priest) could not force him to do so. The Code of Urukagina limited the power of both the priesthood and large property owners and established measures against usury, burdensome controls, hunger, theft, murder, and seizure of people's property and persons, as Urukagina stated: "The widow and the orphan were no longer at the mercy of the powerful man."
Despite these attempts to curb the excesses of the elite class, elite or royal women may have had even greater influence and prestige in Urukagina's reign than previously. Urukagina greatly expanded the royal "Household of Women" from about 50 persons to about 1,500 persons and renamed it to "Household of Goddess Bau". He gave it ownership of vast amounts of land confiscated from the former priesthood and placed it under the supervision of Urukagina's wife Shasha, or Shagshag. During the second year of Urukagina's reign, his wife presided over the lavish funeral of his predecessor's queen Baranamtarra, who had been an important personage in her own right.
In addition to such changes, two of Urukagina's other surviving decrees, first published and translated by Samuel Kramer in 1964, have attracted controversy in recent decades:
Urukagina seems to have abolished the former custom of polyandry in his country, on pain of the woman taking multiple husbands being stoned with rocks upon which her crime was written.
A statute that stated: "If a woman says [text illegible...] to a man, her mouth is crushed with burnt bricks."
No comparable laws from Urukagina addressing penalties for adultery by men have survived. The discovery of these fragments has led some modern critics to assert that they provide "the first written evidence of the degradation of women."
Reform Document
The following extracts are taken from the "Reform Document":
"From the border territory of Ningirsu to the sea, no person shall serve as officers."
"For a corpse being brought to the grave, his beer shall be 3 jugs and his bread 80 loaves. 1 bed and 1 lead goat shall the undertaker take away, and 3 ban of barley shall the person(s) take away."
"When to the reeds of Enki a person has been brought, his beer will be 4 jugs, and his bread 420 loaves. 1 barig of barley shall the undertaker take away, and 3 ban of barley shall the persons of ... take away. 1 woman’s headband, and 1 sila of princely fragrance shall the eresh-dingir priestess take away. 420 loaves of bread that have sat are the bread duty, 40 loaves of hot bread are for eating, and 10 loaves of hot bread are the bread of the table. 5 loaves of bread are for the persons of the levy, 2 mud vessels and 1 sadug vessel of beer are for the lamentation singers of Girsu. 490 loaves of bread, 2 mud vessels and 1 sadug vessel of beer are for the lamentation singers of Lagash. 406 of bread, 2 mud vessels, and 1 sadug vessel of beer are for the other lamentation singers. 250 loaves of bread and 1 mud vessel of beer are for the old wailing women. 180 loaves of bread and one mud vessel of beer are for the men of Nigin."
"The blind one who stands in ..., his bread for eating is 1 loaf, 5 loaves of bread are his at midnight, 1 loaf is his bread at midday, and 6 loaves are his bread in the evening."
"60 loaves of bread, 1 mud vessel of beer, and 3 ban of barley are for the person who is to perform as the sagbur priest."
Trade
Imports to Ur came from across the Near East and the wider Old World. Goods such as obsidian from Turkey, lapis lazuli from Badakhshan in Afghanistan, beads from Bahrain, and seals inscribed with the Indus Valley script from India have been found in Ur. Metals were also imported; Sumerian stonemasons and jewelers used gold, silver, lapis lazuli, chlorite, ivory, iron, and carnelian. Resin from Mozambique was found in the tomb of Queen Puabi at Ur.
The cultural and trade connections of Ur are reflected in archaeological finds of imported items. In the ED III period, items from geographically distant places were found, including gold, silver, lapis lazuli, and carnelian. None of these materials occurs naturally in Mesopotamia, so all of them had to be imported.
Gold items were located in graves at the Royal Cemetery of Ur, royal treasuries and temples, indicating prestigious and religious functions. Gold items discovered included personal ornaments, weapons, tools, sheet-metal cylinder seals, fluted bowls, goblets, imitation cockle shells, and sculptures.
Silver was found in items such as belts, vessels, hair ornaments, pins, weapons, cockle shells, and sculptures. There are very few literary references or physical clues as to the sources of the silver.
Lapis lazuli has been found in items such as jewelry, plaques, gaming boards, lyres, ostrich-egg vessels, and also in parts of a larger sculpture known as Ram in a Thicket. Some of the larger objects included a spouted cup, a dagger-hilt, and a whetstone. The presence of lapis lazuli indicates high status.
Chlorite stone artifacts from the ED are commonly found. They include disc beads, ornaments, and stone vases. The vases rarely exceed 25 cm in height. They often have human and animal motifs and semiprecious stone inlays, and they may have carried precious oils.
Culture
Sculpting
Early Dynastic stone sculptures have mainly been recovered from excavated temples. They can be separated into two groups: three-dimensional prayer statues and perforated bas-reliefs. The so-called Tell Asmar Hoard is a well-known example of Early Dynastic sculpture. It was recovered in a temple and consists of standing figures with their hands folded in prayer or holding a goblet for a libation ritual. Other statues feature seated figures, also in devotional postures. Male figures wear a plain or fringed dress, or kaunakes. The statues usually represent notables or rulers. They served as ex-votos and were placed in temples to pray on behalf of the donor. The Sumerian style clearly influenced neighbouring regions, as similar statues have been recovered from sites in Upper Mesopotamia, including Assur, Tell Chuera, and Mari. However, some statues showed greater originality and shared fewer stylistic characteristics with Sumerian sculpture.
Bas-reliefs created from perforated stone slabs are another hallmark of Early Dynastic sculpture. They also served a votive purpose, but their exact function is unknown. Examples include the votive relief of king Ur-Nanshe of Lagash and his family found at Girsu and that of Dudu, a priest of Ningirsu. The latter showed mythological creatures such as a lion-headed eagle. The Stele of the Vultures, created by Eannatum of Lagash, is remarkable in that it represents different scenes that together tell the narrative of the victory of Lagash over its rival Umma. Reliefs like these have been found in Lower Mesopotamia and the Diyala region but not in Upper Mesopotamia or Syria.
Metalworking and goldsmithing
Sumerian metallurgy and goldsmithing were highly developed. This is all the more remarkable for a region where metals had to be imported. Known metals included gold, silver, copper, bronze, lead, electrum, and tin. The use of binary, ternary, and quaternary alloys was already present during the Uruk period. The Sumerians used bronze, although the scarcity of tin meant that they often alloyed copper with arsenic instead. Metalworking techniques included lost-wax casting, plating, filigree, and granulation.
Numerous metal objects have been excavated from temples and graves, including dishes, weapons, jewelry, statuettes, foundation nails, and various other objects of worship. The most remarkable gold objects come from the Royal Cemetery at Ur, including musical instruments and the complete inventory of Puabi’s tomb. Metal vases have also been excavated at other sites in Lower Mesopotamia, including the Vase of Entemena at Lagash.
Cylinder seals
Cylinder seals were used to authenticate documents, such as sales, and to control access by sealing a lump of clay on the doors of storage rooms. The use of cylinder seals increased significantly during the ED period, suggesting an expansion and increased complexity of administrative activities.
During the preceding Uruk period, a wide variety of scenes were engraved on cylinder seals. This variety disappeared at the start of the third millennium, to be replaced by an almost exclusive focus on mythological and cultural scenes in Lower Mesopotamia and the Diyala region. During the ED I period, seal designs included geometric motifs and stylized pictograms. Later on, combat scenes between real and mythological animals became the dominant theme, together with scenes of heroes fighting animals. Their exact meaning is unclear. Common mythological creatures include anthropomorphic bulls and scorpion-men. Real creatures include lions and eagles. Some anthropomorphic creatures are probably deities, as they wear a horned tiara, which was a symbol of divinity.
Scenes with cultic themes, including banquet scenes, became common during ED II. Another common ED III theme was the so-called god-boat, but its meaning is unclear. During the ED III period, ownership of seals began to be registered. Glyptic development in Upper Mesopotamia and Syria was strongly influenced by Sumerian art.
Inlays
Examples of inlay have been found at several sites and used materials such as nacre (mother of pearl), white and coloured limestone, lapis lazuli, and marble. Bitumen was used to attach the inlay in wooden frames, but these have not survived in the archaeological record. The inlay-panels usually showed mythological or historical scenes. Like bas-reliefs, these panels allow the reconstruction of early forms of narrative art. However, this type of work seems to have been abandoned in subsequent periods.
The best preserved inlaid object is the Standard of Ur, found in one of the royal tombs of this city. It represents two principal scenes on its two sides: a battle, and a banquet that probably follows a military victory. The "dairy frieze" found at Tell al-'Ubaid represents, as its name suggests, dairy activities (milking cows, cowsheds, preparing dairy products). It is our most informative source on this practice in ancient Mesopotamia.
Similar mosaic elements were discovered at Mari, where a mother-of-pearl engraver's workshop was identified, and at Ebla where marble fragments were found from a 3-meter-high panel decorating a room of the royal palace. The scenes of the two sites have strong similarities in their style and themes. In Mari the scenes are military (a parade of prisoners) or religious (a ram's sacrifice). In Ebla, they show a military triumph and mythological animals.
Music
The Lyres of Ur (or Harps of Ur) are considered to be the world's oldest surviving stringed instruments. Archaeologists led by Leonard Woolley discovered the instruments in 1929, while excavating the Royal Cemetery of Ur between 1922 and 1934. They found pieces of three lyres and one harp at Ur, in what was ancient Mesopotamia and is now Iraq. The instruments are over 4,500 years old, dating to the ED III period. The decorations on the lyres are fine examples of the court art of Mesopotamia of the period.
References
Further reading
Ascalone, Enrico. 2007. Mesopotamia: Assyrians, Sumerians, Babylonians (Dictionaries of Civilizations; 1). Berkeley: University of California Press (paperback).
Bottéro, Jean, André Finet, Bertrand Lafont, and George Roux. 2001. Everyday Life in Ancient Mesopotamia. Edinburgh: Edinburgh University Press, Baltimore: Johns Hopkins University Press.
Crawford, Harriet E. W. 2004. Sumer and the Sumerians. Cambridge: Cambridge University Press.
Frayne, Douglas. 2008. Pre-Sargonic Period: Early Periods, Volume 1 (2700–2350 BC), University of Toronto Press.
Leick, Gwendolyn. 2002. Mesopotamia: Invention of the City. London and New York: Penguin.
Lloyd, Seton. 1978. The Archaeology of Mesopotamia: From the Old Stone Age to the Persian Conquest. London: Thames and Hudson.
Nemet-Nejat, Karen Rhea. 1998. Daily Life in Ancient Mesopotamia. London and Westport, Conn.: Greenwood Press.
Kramer, Samuel Noah. Sumerian Mythology: A Study of Spiritual and Literary Achievement in the Third Millennium BC.
Roux, Georges. 1992. Ancient Iraq, 560 pages. London: Penguin (earlier printings may have different pagination: 1966, 480 pages, Pelican; 1964, 431 pages, London: Allen and Unwin).
Schomp, Virginia. Ancient Mesopotamia: The Sumerians, Babylonians, And Assyrians.
Sumer: Cities of Eden (Timelife Lost Civilizations). Alexandria, VA: Time-Life Books, 1993 (hardcover).
Woolley, C. Leonard. 1929. The Sumerians. Oxford: Clarendon Press.
External links
Iraq’s Ancient Past — Penn Museum
Language
Sumerian Language Page, perhaps the oldest Sumerian website on the web (it dates back to 1996); features a compiled lexicon, a detailed FAQ, extensive links, and so on.
ETCSL: The Electronic Text Corpus of Sumerian Literature has complete translations of more than 400 Sumerian literary texts.
PSD: The Pennsylvania Sumerian Dictionary, still in its initial stages, has been searchable online since August 2004.
CDLI: Cuneiform Digital Library Initiative, a large corpus of Sumerian texts in transliteration, largely from the Early Dynastic and Ur III periods, accessible with images.
Ancient Mesopotamia
Archaeology of Iraq
Realism (international relations) | Realism, a school of thought in international relations theory, is a theoretical framework that views world politics as an enduring competition among self-interested states vying for power and positioning within an anarchic global system devoid of a centralized authority. It centers on states as rational primary actors navigating a system shaped by power politics, national interest, and a pursuit of security and self-preservation.
Realism involves the strategic use of military force and alliances to boost global influence while maintaining a balance of power. War is seen as an inevitability inherent in the anarchic conditions of world politics. Realism also emphasizes the complex dynamics of the security dilemma, where actions taken for security reasons can unintentionally lead to tensions between states.
Unlike idealism or liberalism, realism underscores the competitive and conflictual nature of global politics. In contrast to liberalism, which champions cooperation, realism asserts that the dynamics of the international arena revolve around states actively advancing national interests and prioritizing security. While idealism leans towards cooperation and ethical considerations, realism argues that states operate in a realm devoid of inherent justice, where ethical norms may not apply.
Early popular proponents of realism included Thucydides (5th century BCE), Machiavelli (16th century), Hobbes (17th century), and Rousseau (18th century). Carl von Clausewitz (early 19th century), another contributor to the realist school of thought, viewed war as an act of statecraft and placed strong emphasis on hard power. Clausewitz held that armed conflict was inherently one-sided: typically only one victor can emerge between two parties, with no peace between them.
Realism became popular again in the 1930s, during the Great Depression. At that time, it set itself against the progressive, reformist optimism associated with liberal internationalists like U.S. President Woodrow Wilson. The 20th-century brand of classical realism, exemplified by theorists such as Reinhold Niebuhr and Hans Morgenthau, has evolved into neorealism, a more scientifically oriented approach to the study of international relations developed during the latter half of the Cold War. In the 21st century, realism has experienced a resurgence, fueled by escalating tensions among world powers. Some of the most influential proponents of political realism today are John Mearsheimer and Stephen Walt.
Overview
Realists fall into three classes based on their view of the essential causes of conflict between states:
Classical realists believe that conflict follows from human nature.
Neorealists attribute conflict to the dynamics of the anarchic state-system.
Neoclassical realists believe that conflict results from both, in combination with domestic politics. Neoclassical realists are also divided between defensive and offensive realism.
Realism entails a spectrum of ideas, which tend to revolve around several central propositions, such as:
State-centrism: states are the central actors in international politics, rather than leaders or international organizations;
Anarchy: the international political system is anarchic, as there is no supranational authority to enforce rules;
Rationality and/or egoism: states act in their rational self-interest within the international system; and
Power: states desire power to ensure self-preservation.
Political scientists sometimes associate realism with Realpolitik, as both deal with the pursuit, possession, and application of power. Realpolitik, however, is an older prescriptive guideline limited to policy-making, while realism is a wider theoretical and methodological paradigm which aims to describe, explain, and predict events in international relations. As an academic pursuit, realism is not necessarily tied to ideology; it does not favor any particular moral philosophy, nor does it consider ideology to be a major factor in the behavior of nations.
However, realists are generally critical of liberal foreign policy. Garrett Ward Sheldon has characterised the priorities of realists as Machiavellian and seen them as prioritising the seeking of power, although realists have also advocated the idea that powerful states concede spheres of influence to other powerful states.
Common assumptions
The four propositions of realism are as follows.
State-centrism: States are the most important actors.
Anarchy: The international system is anarchic.
No actor exists above states, capable of regulating their interactions; states must arrive at relations with other states on their own, rather than it being dictated to them by some higher controlling entity.
The international system exists in a state of constant antagonism (anarchy).
Egoism: All states within the system pursue narrow self-interests.
States tend to pursue self-interest.
Groups strive to attain as many resources as possible (relative gain).
Power politics: The primary concern of all states is power and security.
States build up their militaries to survive, which may lead to a security dilemma.
Realists believe that mankind is not inherently benevolent but rather self-centered and competitive. This perspective, which is shared by theorists such as Thomas Hobbes, views human nature as egocentric (not necessarily selfish) and conflictual unless there exist conditions under which humans may coexist. It also dispenses with the notion that an individual's intuitive nature is made up of anarchy. With regard to self-interest, these individuals are self-reliant and are motivated to seek more power. They are also believed to be fearful. This view contrasts with the approach of liberalism to international relations.
The state emphasises an interest in accumulating power to ensure security in an anarchic world. Power is a concept primarily thought of in terms of material resources necessary to induce harm or coerce other states (to fight and win wars). The use of power places an emphasis on coercive tactics being acceptable to either accomplish something in the national interest or avoid something inimical to the national interest. The state is the most important actor under realism. It is unitary and autonomous because it speaks and acts with one voice. The power of the state is understood in terms of its military capabilities. A key concept under realism is the international distribution of power referred to as system polarity. Polarity refers to the number of blocs of states that exert power in an international system. A multipolar system is composed of three or more blocs, a bipolar system is composed of two blocs, and a unipolar system is dominated by a single power or hegemon. Under unipolarity realism predicts that states will band together to oppose the hegemon and restore a balance of power. Although all states seek hegemony under realism as the only way to ensure their own security, other states in the system are incentivised to prevent the emergence of a hegemon through balancing.
States employ the rational model of decision making by obtaining and acting upon complete and accurate information. The state is sovereign and guided by a national interest defined in terms of power. Since the only constraint of the international system is anarchy, there is no international authority and states are left to their own devices to ensure their own security. Realists believe that sovereign states are the principal actors in the international system. International institutions, non-governmental organizations, multinational corporations, individuals and other sub-state or trans-state actors are viewed as having little independent influence. States are inherently aggressive (offensive realism) and obsessed with security (defensive realism). Territorial expansion is only constrained by opposing powers. This aggressive build-up, however, leads to a security dilemma whereby increasing one's security may bring along even greater instability as an opposing power builds up its own arms in response (an arms race). Thus, security becomes a zero-sum game where only relative gains can be made. Moreover, the "relative gains" notion of the realist school implies that states must fight against each other to gain benefits.
Realists believe that there are no universal principles with which all states may guide their actions. Instead, a state must always be aware of the actions of the states around it and must use a pragmatic approach to resolve problems as they arise. A lack of certainty regarding intentions prompts mistrust and competition between states.
Rather than assume that states are the central actors, some realists, such as William Wohlforth and Randall Schweller, refer instead to "groups" as the key actors of interest.
Finally, states are sometimes described as "billiard balls" or "black boxes". This analogy is meant to underscore the secondary importance of internal state dynamics and decisionmaking in realist models, in stark contrast to bureaucratic or individual-level theories of international relations.
Realism in statecraft
The ideas behind George F. Kennan's work as a diplomat and diplomatic historian remain relevant to the debate over American foreign policy, which since the 19th century has been characterized by a shift from the Founding Fathers' realist school to the idealistic or Wilsonian school of international relations. In the realist tradition, security is based on the principle of a balance of power and the reliance on morality as the sole determining factor in statecraft is considered impractical. According to the Wilsonian approach, on the other hand, the spread of democracy abroad as a foreign policy is key and morals are universally valid. During the Presidency of Bill Clinton, American diplomacy reflected the Wilsonian school to such a degree that those in favor of the realist approach likened Clinton's policies to social work. Some argue that in Kennan's view of American diplomacy, based on the realist approach, such apparent moralism without regard to the realities of power and the national interest is self-defeating and may lead to the erosion of power, to America's detriment. Others argue that Kennan, a proponent of the Marshall Plan (which gave out bountiful American aid to post-WW2 countries), might agree that Clinton's aid functioned strategically to secure international leverage: a diplomatic maneuver well within the bounds of political realism as described by Hedley Bull.
Realists often hold that statesmen tend towards realism whereas realism is deeply unpopular among the public. When statesmen take actions that divert from realist policies, academic realists often argue that this is due to distortions that stem from domestic politics. However, some research suggests that realist policies are actually popular among the public whereas elites are more beholden to liberal ideas. Abrahamsen suggested that realpolitik for middle powers can include supporting idealism and liberal internationalism.
Historical branches and antecedents
While realism as a formal discipline in international relations did not arrive until World War II, its primary assumptions have been expressed in earlier writings. Realists trace the history of their ideas back to classical antiquity, beginning with Thucydides ( 5th century BCE).
Historian Jean Bethke Elshtain traces the historiography of realism:
The genealogy of realism as international relations, although acknowledging antecedents, gets down to serious business with Machiavelli, moving on to theorists of sovereignty and apologists for the national interest. It is present in its early modern forms with Hobbes's Leviathan (1651).
Modern realism began as a serious field of research in the United States during and after World War II. This evolution was partly fueled by European war migrants like Hans Morgenthau, whose work Politics Among Nations is considered a seminal development in the rise of modern realism. Other influential figures were George F. Kennan (known for his work on containment), Nicholas Spykman (known for his work on geostrategy and containment), Herman Kahn (known for his work on nuclear strategy) and E. H. Carr.
Classical realism
Classical realism states that it is fundamentally the nature of humans that pushes states and individuals to act in a way that places interests over ideologies. Classical realism is an ideology defined as the view that the "drive for power and the will to dominate [that are] held to be fundamental aspects of human nature".
Prominent classical realists:
E. H. Carr
Hans Morgenthau
Reinhold Niebuhr – Christian realism
Raymond Aron
George Kennan
Liberal realism or the English school of rationalism
The English school holds that the international system, while anarchical in structure, forms a "society of states" where common norms and interests allow for more order and stability than that which may be expected in a strict realist view. Prominent English School writer Hedley Bull's 1977 classic, The Anarchical Society, is a key statement of this position.
Prominent liberal realists:
Hedley Bull – argued for both the existence of an international society of states and its perseverance even in times of great systemic upheaval, meaning regional or so-called "world wars"
Martin Wight
Barry Buzan
Neorealism or structural realism
Neorealism derives from classical realism except that instead of human nature, its focus is predominantly on the anarchic structure of the international system. States are primary actors because there is no political monopoly on force existing above any sovereign. While states remain the principal actors, greater attention is given to the forces above and below the states through levels of analysis or structure and agency debate. The international system is seen as a structure acting on the state with individuals below the level of the state acting as agency on the state as a whole.
While neorealism shares a focus on the international system with the English school, neorealism differs in the emphasis it places on the permanence of conflict. To ensure state security, states must be on constant preparation for conflict through economic and military build-up.
Prominent neorealists:
Robert J. Art – neorealism
Robert Gilpin – hegemonic theory
Robert Jervis – defensive realism
John Mearsheimer – offensive realism
Barry Posen – neorealism
Kenneth Waltz – defensive realism
Stephen Walt – defensive realism
Neoclassical realism
Neoclassical realism can be seen as the third generation of realism, coming after the classical authors of the first wave (Thucydides, Niccolò Machiavelli, Thomas Hobbes) and the neorealists (especially Kenneth Waltz). Its designation of "neoclassical", then, has a double meaning:
It offers the classics a renaissance;
It is a synthesis of the neorealist and the classical realist approaches.
Gideon Rose is responsible for coining the term in a book review he wrote in 1998.
The primary motivation underlying the development of neoclassical realism was the fact that neorealism was only useful to explain political outcomes (classified as being theories of international politics), but had nothing to offer about particular states' behavior (or theories of foreign policy). The basic approach, then, was for these authors to "refine, not refute, Kenneth Waltz", by adding domestic intervening variables between systemic incentives and a state's foreign policy decision. Thus, the basic theoretical architecture of neoclassical realism is:
Distribution of power in the international system (independent variable)
Domestic perception of the system and domestic incentives (intervening variable)
Foreign policy decision (dependent variable)
While neoclassical realism has only been used for theories of foreign policy so far, Randall Schweller notes that it could be useful to explain certain types of political outcomes as well.
Neoclassical realism is particularly appealing from a research standpoint because it still retains a lot of the theoretical rigor that Waltz has brought to realism, but at the same time can easily incorporate a content-rich analysis, since its main method for testing theories is the process-tracing of case studies.
Prominent neoclassical realists:
Aaron Friedberg
Randall Schweller
William Wohlforth
Fareed Zakaria
Realist constructivism
Some see a complementarity between realism and constructivism. Samuel Barkin, for instance, holds that "realist constructivism" can fruitfully "study the relationship between normative structures, the carriers of political morality, and uses of power" in ways that existing approaches do not. Similarly, Jennifer Sterling-Folker has argued that theoretical synthesis helps explanations of international monetary policy by combining realism's emphasis of an anarchic system with constructivism's insights regarding important factors from the domestic level. Scholars such as Oded Löwenheim and Ned Lebow have also been associated with realist constructivism.
Criticisms
Democratic peace
Advocates of democratic peace theory also argue that realism does not apply to democratic states' relations with each other, as their studies claim that such states do not go to war with one another. However, realists and proponents of other schools have critiqued this claim, arguing that its definitions of "war" and "democracy" must be tweaked in order to achieve this result. The interactive model of democratic peace observes a gradual influence of both democracy and democratic difference on wars and militarized interstate disputes. A realist government may not consider it in its interest to start a war for little gain, so realism does not necessarily mean constant battles.
Hegemonic peace and conflict
Robert Gilpin developed the theory of hegemonic stability within the realist framework, but limited it to the economic field. Niall Ferguson remarked that the theory has offered insights into the way that economic power works, but neglected the military and cultural aspects of power.
John Ikenberry and Daniel Deudney state that the Iraq War, conventionally blamed on liberal internationalism by realists, actually originates more closely from hegemonic realism. The "instigators of the war", they suggest, were hegemonic realists. Where liberal internationalists reluctantly supported the war, they followed arguments linked to interdependence realism relating to arms control. John Mearsheimer states that "One might think..." events including the Bush Doctrine are "evidence of untethered realism that unipolarity made possible," but disagrees and contends that various interventions are caused by a belief that a liberal international order can transcend power politics.
Inconsistent with non-European politics
Scholars have argued that realist theories, in particular realist conceptions of anarchy and balances of power, have not characterized the international systems of East Asia and Africa (before, during and after colonization).
State-centrism
Scholars have criticized realist theories of international relations for assuming that states are fixed and unitary units.
Appeasement
In the mid-20th century, realism was seen as discredited in the United Kingdom due to its association with appeasement in the 1930s. It re-emerged slowly during the Cold War.
Scholar Aaron McKeil pointed to major illiberal tendencies within realism that, aiming for a sense of "restraint" against liberal interventionism, would lead to more proxy wars, and fail to offer institutions and norms for mitigating great power conflict.
Realism as degenerative research programs
John Vasquez applied Imre Lakatos's criteria and concluded that the realist-based research program is degenerating, citing the protean character of its theoretical development, an unwillingness to specify what constitutes the true theory, a continuous adoption of auxiliary propositions to explain away flaws, and a lack of strong research findings. Against Vasquez, Stephen Walt argued that Vasquez overlooked the progressive power of realist theory. Kenneth Waltz claimed that Vasquez misunderstood Lakatos.
Abstract theorizing and non-consensus moral principles
The mainstream version of realism is criticized for abstract theorizing at the expense of historical detail and for a non-consensus foundation of the moral principles of the "rules of international conduct", as evidenced in the case of the Russian invasion of Ukraine.
See also
Complex interdependence
Consensus reality
Consequentialism
International legal theory
Game theory
Global justice
Legalism (Chinese philosophy)
Might makes right
Negarchy
Peace through strength
Realpolitik
Moral nihilism
Deterrence theory
References
Further reading
Ashley, Richard K. "Political Realism and the Human Interests", International Studies Quarterly (1981) 25: 204–36.
Barkin, J. Samuel Realist Constructivism: Rethinking International Relations Theory (Cambridge University Press; 2010) 202 pages. Examines areas of both tension and overlap between the two approaches to IR theory.
Bell, Duncan, ed. Political Thought and International Relations: Variations on a Realist Theme. Oxford: Oxford University Press, 2008.
Booth, Ken. 1991. "Security in anarchy: Utopian realism in theory and practice", International Affairs 67(3), pp. 527–545
Crawford, Robert M. A. Idealism and Realism in International Relations: Beyond the Discipline (2000) online edition
Donnelly, Jack. Realism and International Relations (2000) online edition
Gilpin, Robert G. "The richness of the tradition of political realism", International Organization (1984), 38:287–304
Griffiths, Martin. Realism, Idealism, and International Politics: A Reinterpretation (1992) online edition
Guilhot, Nicolas, ed. The Invention of International Relations Theory: Realism, the Rockefeller Foundation, and the 1954 Conference on Theory (2011)
Keohane, Robert O., ed. Neorealism and its Critics (1986)
Lebow, Richard Ned. The Tragic Vision of Politics: Ethics, Interests and Orders. Cambridge: Cambridge University Press, 2003.
Mearsheimer, John J. The Tragedy of Great Power Politics. New York: W.W. Norton & Company, 2001. [Seminal text on offensive neorealism]
Meyer, Donald. The Protestant Search for Political Realism, 1919–1941 (1988) online edition
Molloy, Sean. The Hidden History of Realism: A Genealogy of Power Politics. New York: Palgrave, 2006.
Morgenthau, Hans. Scientific Man versus Power Politics (1946). Chicago, IL: University of Chicago Press.
Politics Among Nations: The Struggle for Power and Peace (1948). New York, NY: Alfred A. Knopf.
In Defense of the National Interest (1951). New York, NY: Alfred A. Knopf.
The Purpose of American Politics (1960). New York, NY: Alfred A. Knopf.
Murray, A. J. H., Reconstructing Realism: Between Power Politics and Cosmopolitan Ethics. Edinburgh: Keele University Press, 1997.
Rösch, Felix. "Unlearning Modernity. A Realist Method for Critical International Relations?." Journal of International Political Theory 13, no. 1 (2017): 81–99.
Rosenthal, Joel H. Righteous Realists: Political Realism, Responsible Power, and American Culture in the Nuclear Age. (1991). 191 pp. Compares Reinhold Niebuhr, Hans J. Morgenthau, Walter Lippmann, George F. Kennan, and Dean Acheson
Scheuerman, William E. 2010. "The (classical) Realist vision of global reform." International Theory 2(2): pp. 246–282.
Schuett, Robert. Political Realism, Freud, and Human Nature in International Relations. New York: Palgrave, 2010.
Smith, Michael Joseph. Realist Thought from Weber to Kissinger (1986)
Tjalve, Vibeke S. Realist Strategies of Republican Peace: Niebuhr, Morgenthau, and the Politics of Patriotic Dissent. New York: Palgrave, 2008.
Williams, Michael C. The Realist Tradition and the Limits of International Relations. Cambridge: Cambridge University Press, 2005. online edition
External links
Political Realism in International Relations in Stanford Encyclopedia of Philosophy
Richard K. Betts, "Realism", YouTube
Political realism
International relations theory
Blanqueamiento | Blanqueamiento in Spanish, or branqueamento in Portuguese (both meaning whitening), is a social, political, and economic practice used in many post-colonial countries in the Americas and Oceania to "improve the race" (mejorar la raza) towards a supposed ideal of whiteness. The term blanqueamiento is rooted in Latin America and is used more or less synonymously with racial whitening. However, blanqueamiento can be considered in both the symbolic and biological sense. Symbolically, blanqueamiento represents an ideology that emerged from legacies of European colonialism, described by Anibal Quijano's theory of coloniality of power, which caters to white dominance in social hierarchies. Biologically, blanqueamiento is the process of whitening by marrying a lighter-skinned individual to produce lighter-skinned offspring.
Definition
Peter Wade argues that blanqueamiento is a historical process that can be linked to nationalism. The ideologies behind nationalism stem from national identity, which according to Wade is "a construction of the past and the future", where the past is understood as being more traditional and backwards. For example, the past demographics of Puerto Rico were heavily black- and Indian-influenced, because the country partook in the slave trade and was simultaneously home to many indigenous groups. Understood in relation to modernization, then, blanqueamiento is a guidance in the direction away from black and indigenous roots. Modernization happened, as described by Wade, through "the increasing integration of blacks and Indians into modern society, where they will mix in and eventually disappear, taking their primitive culture with them". This kind of implementation of blanqueamiento takes place in societies that have historically always been led by 'white' people, whose guidance would carry "the country away from its past, which began in Indianness and slavery", with hopes of promoting the intermixing of bodies to develop a predominantly white-skinned society.
As related to mestizaje
The formation of mestizaje emerged in the shift of Latin America towards multiculturalist perspectives and policies. Mestizaje has been considered problematic by many scholars because it sustains racial hierarchies and celebrates blanqueamiento. For example, Swanson argues that although mestizaje is not a physical embodiment of whitening, it is "not so much about mixing, as it is about a progressive whitening of the population".
Another possibility when considering mestizaje as it relates to blanqueamiento is to understand mestizaje as a concept that encourages mixedness but differs from blanqueamiento in its end goal. As Peter Wade states, "it celebrates the idea of difference in a democratic, non-hierarchical form. Rather than envisioning a gradual whitening, it holds up the general image of the mestizo in which racial, regional, and even class differences are submerged into a common identification with mixedness." By the same token, when thinking about blanqueamiento, the future goal takes up the same theme of mixing. The difference between them is that while mestizaje glorifies the mixing of all people to reach an end goal of a brown population, blanqueamiento has the end goal of whiteness. The outcome of mestizaje mixing would lead to "the predominance of the mestizo" and is not "construed necessarily as (a) whitened mestizo". Most importantly, both of these ideologies link emerging nationhood with the predominance of the mestizo or the whitened population.
In the early Republican era of Brazil, miscegenation as a form of whitening was looked down upon, as it was thought that the mestiço population retained the inferior qualities of the Indigenous and Afro-Brazilians. It was for this reason that whitening through immigration was preferred to whitening through interracial relations. Theorists such as Raimundo Nina Rodrigues believed that education could improve the state of certain groups, such as Muslim Africans; however, Indigenous and Afro-Brazilians were excluded from this.
National policy
Blanqueamiento was enacted in national policies of many Latin American countries at the turn of the 20th century. In most cases, these policies promoted European immigration as a means to whiten the population.
Brazil
Branqueamento policies circulated throughout Brazil in the late 19th and early 20th centuries, emerging in the aftermath of the abolition of slavery and the beginning of Brazil's first republic (1888–1889). To dilute the black race, the Brazilian government took measures to increase European immigration. More than 1 million Europeans arrived in São Paulo between 1890 and 1914. State and federal governments funded and subsidized immigrant travels from Portugal, Spain, Italy, Russia, Germany, Austria, France, and the Netherlands. Claims that white blood would eventually eliminate black blood were found in accounts of immigration statistics. Created in the late 19th century, Brazil's Directoria Geral de Estatística (DGE) conducted demographic censuses that presented the progress of whitening in Brazil as successful.
Cuba
At the beginning of the 20th century, the Cuban government created immigration laws that invested more than $1 million into recruiting Europeans into Cuba to whiten the state. High participation of blacks in independence movements threatened white elitist power, and when the 1899 census showed that a large share of Cuba's population was colored, white migration started to gain support. Political blanqueamiento began in 1902 after the U.S. occupation, when migration of "undesirables" (i.e., blacks) became prohibited in Cuba. Immigration policies supported the migration of entire families. Between 1902 and 1907, nearly 128,000 Spaniards entered Cuba, and in 1906 Cuba officially created its immigration law that funded white migrants. However, many European immigrants did not stay in Cuba and came solely for the sugar harvest, returning to their homes during the off seasons. Although some 780,000 Spaniards migrated between 1902 and 1931, only 250,000 stayed. By the 1920s, blanqueamiento through national policy had effectively failed.
Social
Blanqueamiento is also associated with food consumption. For example, in Osorno, a Chilean city with a strong German heritage, consumption of desserts, marmalades and kuchens "whitens" the inhabitants of the city.
Economic
Blanqueamiento can also be accomplished through economic achievement. Many scholars have argued that money has the ability to whiten: wealthier individuals are more likely to be classified as white, regardless of phenotypic appearance. It is by this changing of social status that blacks achieve blanqueamiento. In his study, Marcus Eugenio Oliveira Lima showed that groups of Brazilians succeeded more when whitened.
Blanqueamiento has also been seen as a way to better the economy. In the case of Brazil, immigration policies that would help whiten the nation were seen as progressive ways to modernize and achieve capitalism. In Cuba, blanqueamiento policies limited economic opportunities for African descendants, resulting in their reduced upward mobility in education, property, and employment sectors.
See also
Acculturation
Colonial mentality
Coloniality of power
Creole peoples
Discrimination based on skin color
Gente de razón
Hispanic eugenics
Limpieza de sangre
Hispanidad
History of eugenics
Race and ethnicity in Latin America
Racial passing
Racism in South America
Skin whitening
Stolen Generations and White Australia policy – conceptually similar approaches in Australia
Westernization
References
Cultural assimilation
Latin American caste system
Multiracial affairs in Brazil
History of eugenics
White supremacy in South America
Light skin
Dehumanization | Dehumanization is the denial of full humanity in others along with the cruelty and suffering that accompany it. A practical definition refers to it as the viewing and the treatment of other people as though they lack the mental capacities that are commonly attributed to human beings. In this definition, every act or thought that regards a person as "less than" human is dehumanization.
Dehumanization is one form of incitement to genocide. It has also been used to justify war, judicial and extrajudicial killing, slavery, the confiscation of property, denial of suffrage and other rights, and to attack enemies or political opponents.
Conceptualizations
Behaviorally, dehumanization describes a disposition towards others that debases the others' individuality by either portraying it as an "individual" species or by portraying it as an "individual" object (e.g., someone who acts inhumanely towards humans). As a process, dehumanization may be understood as the opposite of personification, a figure of speech in which inanimate objects or abstractions are endowed with human qualities; dehumanization then is the disendowment of these same qualities or a reduction to abstraction.
In almost all contexts, dehumanization is used pejoratively along with a disruption of social norms, with the former applying to the actor(s) of behavioral dehumanization and the latter applying to the action(s) or processes of dehumanization. For instance, there is dehumanization for those who are perceived as lacking in culture or civility, which are concepts that are believed to distinguish humans from animals. Social norms define humane behavior and reflexively define what is outside of humane behavior or inhumane. Dehumanization differs from inhumane behaviors or processes in its breadth to propose competing social norms. It is an action of dehumanization as the old norms are depreciated to the competing new norms, which then redefine the action of dehumanization. If the new norms lose acceptance, then the action remains one of dehumanization. The definition of dehumanization remains in a reflexive state of a type-token ambiguity relative to both individual and societal scales.
In biological terms, dehumanization can be described as an introduced species marginalizing the human species, or an introduced person/process that debases other people inhumanely.
In political science and jurisprudence, the act of dehumanization is the inferential alienation of human rights or denaturalization of natural rights, a definition contingent upon presiding international law rather than social norms limited by human geography. In this context, a specialty within species does not need to constitute global citizenship or its inalienable rights; the human genome inherits both.
It is theorized that dehumanization takes on two forms: animalistic dehumanization, which is employed on a mostly intergroup basis; and mechanistic dehumanization, which is employed on a mostly interpersonal basis. Dehumanization can occur discursively (e.g., idiomatic language that likens individual human beings to non-human animals, verbal abuse, erasing one's voice from discourse), symbolically (e.g., imagery), or physically (e.g., chattel slavery, physical abuse, refusing eye contact). Dehumanization often ignores the target's individuality (i.e., the creative and exciting aspects of their personality) and can hinder one from feeling empathy or correctly understanding a stigmatized group.
Dehumanization may be carried out by a social institution (such as a state, school, or family), interpersonally, or even within oneself. Dehumanization can be unintentional, especially upon individuals, as with some types of de facto racism. State-organized dehumanization has historically been directed against perceived political, racial, ethnic, national, or religious minority groups. Other minoritized and marginalized individuals and groups (based on sexual orientation, gender, disability, class, or some other organizing principle) are also susceptible to various forms of dehumanization. The concept of dehumanization has received empirical attention in the psychological literature. It is conceptually related to infrahumanization, delegitimization, moral exclusion, and objectification. Dehumanization occurs across several domains; it is facilitated by status, power, and social connection; and results in behaviors like exclusion, violence, and support for violence against others.
"Dehumanisation is viewed as a central component to intergroup violence because it is frequently the most important precursor to moral exclusion, the process by which stigmatized groups are placed outside the boundary in which moral values, rules, and considerations of fairness apply."
David Livingstone Smith, director and founder of The Human Nature Project at the University of New England, argues that historically, human beings have been dehumanizing one another for thousands of years. In his work "The Paradoxes of Dehumanization", Smith proposes that dehumanization simultaneously regards people as human and subhuman. This paradox comes to light, as Smith identifies, because the reason people are dehumanized is so their human attributes can be taken advantage of.
Humanness
In Herbert Kelman's work on dehumanization, humanness has two features: "identity" (i.e., a perception of the person "as an individual, independent and distinguishable from others, capable of making choices") and "community" (i.e., a perception of the person as "part of an interconnected network of individuals who care for each other"). When a target's agency and embeddedness in a community are denied, they no longer elicit compassion or other moral responses and may suffer violence.
Objectification
Psychologist Barbara Fredrickson and Tomi-Ann Roberts argued that the sexual objectification of women extends beyond pornography (which emphasizes women's bodies over their uniquely human mental and emotional characteristics) to society generally. There is a normative emphasis on female appearance that causes women to take a third-person perspective on their bodies. The psychological distance women may feel from their bodies might cause them to dehumanize themselves. Some research has indicated that women and men exhibit a "sexual body part recognition bias", in which women's sexual body parts are better recognized when presented in isolation than in their entire bodies. In contrast, men's sexual body parts are better recognized in the context of their entire bodies than in isolation. Men who dehumanize women as either animals or objects are more liable to rape and sexually harass women and display more negative attitudes toward female rape victims.
Philosopher Martha Nussbaum identified seven components of sexual objectification: instrumentality, denial of autonomy, inertness, fungibility, violability, ownership, and denial of subjectivity.
In this context, instrumentality refers to the objectified being used as an instrument for the objectifier's benefit. Denial of autonomy occurs when the objectifier underestimates the objectified and denies their capabilities. In the case of inertness, the objectified is treated as if they are lazy and indolent. Fungibility brands the objectified as easily replaceable. Violability is when the objectifier does not respect the objectified person's personal space or boundaries. Ownership is when the objectified is seen as another person's property. Lastly, the denial of subjectivity is a lack of sympathy for the objectified, or the dismissal of the notion that the objectified has feelings. These seven components cause the objectifier to view the objectified in a disrespectful way and, therefore, to treat them as such.
History
Native Americans
Native Americans were dehumanized as "merciless Indian savages" in the United States Declaration of Independence. Following the Wounded Knee massacre in December 1890, author L. Frank Baum wrote:

The Pioneer has before declared that our only safety depends upon the total extermination [sic] of the Indians. Having wronged them for centuries we had better, in order to protect our civilization, follow it up by one more wrong and wipe these untamed and untamable creatures from the face of the earth. In this lies safety for our settlers and the soldiers who are under incompetent commands. Otherwise, we may expect future years to be as full of trouble with the redskins as those have been in the past.

In Martin Luther King Jr.'s book on civil rights, Why We Can't Wait, he wrote:
Our nation was born in genocide when it embraced the doctrine that the original American, the Indian, was an inferior race. Even before there were large numbers of Negroes on our shores, the scar of racial hatred had already disfigured colonial society. From the sixteenth century forward, blood flowed in battles over racial supremacy. We are perhaps the only nation which tried as a matter of national policy to wipe out its indigenous population. Moreover, we elevated that tragic experience into a noble crusade. Indeed, even today we have not permitted ourselves to reject or to feel remorse for this shameful episode. Our literature, our films, our drama, our folklore all exalt it.
King was an active supporter of the Native American rights movement, drawing parallels between it and his own leadership of the civil rights movement. Both movements aimed to overturn the dehumanizing attitudes held against their members by the public at large.
Causes and facilitating factors
Several lines of psychological research relate to the concept of dehumanization. Infrahumanization suggests that individuals think of and treat outgroup members as "less human" and more like animals, while the Austrian ethologist Irenäus Eibl-Eibesfeldt uses the term pseudo-speciation, borrowed from the psychoanalyst Erik Erikson, to imply that the dehumanized person or persons are regarded as not being members of the human species. Specifically, individuals associate secondary emotions (which are seen as uniquely human) more with the ingroup than with the outgroup, while primary emotions (those experienced by all sentient beings, whether human or other animals) are more associated with the outgroup. Dehumanization is intrinsically connected with violence: often, one cannot do serious injury to another without first dehumanizing him or her in one's mind, as a form of rationalization. Military training is, among other things, systematic desensitization and dehumanization of the enemy, and military personnel may find it psychologically necessary to refer to the enemy as animals or other non-human beings. Lt. Col. Dave Grossman has argued that without such desensitization it would be difficult, if not impossible, for one human to kill another, even in combat or under threat to their own life.
According to Daniel Bar-Tal, delegitimization is the "categorization of groups into extreme negative social categories which are excluded from human groups that are considered as acting within the limits of acceptable norms and values".
Moral exclusion occurs when outgroups are subject to a different set of moral values, rules, and fairness than are used in social relations with ingroup members. When individuals dehumanize others, they no longer experience distress when they treat them poorly. Moral exclusion is used to explain extreme behaviors like genocide, harsh immigration policies, and eugenics, but it can also happen on a more regular, everyday discriminatory level. In laboratory studies, people who are portrayed as lacking human qualities are treated in a particularly harsh and violent manner.
Dehumanized perception occurs when a subject shows low activation within the social cognition neural network, which includes areas such as the superior temporal sulcus (STS) and the medial prefrontal cortex (mPFC). A 2001 study by psychologists Chris and Uta Frith suggests that tasks involving social cognition typically activate this network, and that subjects tend to dehumanize those they perceive as disgust-inducing, leading to social disengagement. "Besides manipulations of target persons, manipulations of social goals validate this prediction: Inferring preference, a mental-state inference, significantly increases mPFC and STS activity to these otherwise dehumanized targets." A 2007 study by Harris, McClure, van den Bos, Cohen, and Fiske suggests that a person's choice to dehumanize another corresponds to decreased neural activity toward that target, specifically low activation of the medial prefrontal cortex, a region associated with perceiving social information.
While social distance from the outgroup target is a necessary condition for dehumanization, some research suggests that it alone is insufficient. Psychological research has identified high status, power, and social connection as additional facilitating factors. Members of high-status groups more often associate humanity with the ingroup than the outgroup, while members of low-status groups exhibit no differences in associations with humanity; thus, having high status makes one more likely to dehumanize others. Low-status groups are more associated with human nature traits (e.g., warmth, emotionalism) than uniquely human characteristics, implying that they are closer to animals than humans, because these traits are typical of humans but can be seen in other species. In addition, another line of work found that individuals in a position of power were more likely to objectify their subordinates, treating them as a means to an end rather than focusing on their essentially human qualities. Finally, social connection (thinking about a close other or being in the actual presence of a close other) enables dehumanization by reducing the attribution of human mental states, increasing support for treating targets like animals, and increasing willingness to endorse harsh interrogation tactics. This is counterintuitive, because social connection has documented benefits for personal health and well-being but appears to impair intergroup relations.
Neuroimaging studies have discovered that the medial prefrontal cortex—a brain region distinctively involved in attributing mental states to others—shows diminished activation to extremely dehumanized targets (i.e., those rated, according to the stereotype content model, as low-warmth and low-competence, such as drug addicts or homeless people).
Race and ethnicity
Racist dehumanization is the understanding of groups and individuals as less than fully human by virtue of their race.
Dehumanization often occurs as a result of intergroup conflict. Ethnic and racial others are often represented as animals in popular culture and scholarship. There is evidence that this representation persists in the American context with African Americans implicitly associated with apes. To the extent that an individual has this dehumanizing implicit association, they are more likely to support violence against African Americans (e.g., jury decisions to execute defendants). Historically, dehumanization is frequently connected to genocidal conflicts in that ideologies before and during the conflict depict victims as subhuman (e.g., rodents). Immigrants may also be dehumanized in this manner.
In 1901, the six Australian colonies federated, creating the modern nation state of Australia and its government. Section 51 (xxvi) of the constitution excluded Aboriginals from the groups protected by special laws, and section 127 excluded Aboriginals from population counts. The Commonwealth Franchise Act 1902 categorically denied Aboriginals the right to vote. Indigenous Australians were not allowed the social security benefits (e.g., aged pensions and maternity allowances) which were provided to others. Aboriginals in rural areas were discriminated against and controlled in where and how they could marry, work, and live, and in their movements.
In the U.S., African Americans were dehumanized by being classified as non-human primates. A California police officer who was also involved in the Rodney King beating described a dispute between a Black American couple as "something right out of Gorillas in the Mist". Franz Boas and Charles Darwin hypothesized an evolutionary progression among primates: monkeys and apes were the least evolved, followed by savage and deformed anthropoids, a category applied to people of African ancestry, with Caucasians regarded as the most developed.
Language
Language has been used as an essential tool in the process of dehumanizing others. Examples of dehumanizing language when referring to a person or group of people may include animal, cockroach, rat, vermin, monster, ape, snake, infestation, parasite, alien, savage, and subhuman. Other examples can include racist, sexist, and other derogatory forms of language. The use of dehumanizing language can influence others to view a targeted group as less human or less deserving of humane treatment.
In Unit 731, an Imperial Japanese biological and chemical warfare research facility, brutal experiments were conducted on humans whom the researchers referred to as 'maruta' (丸太), meaning 'logs'. Yoshio Shinozuka, a Japanese army medic who performed several vivisections in the facility, said: "We called the victims 'logs.' We didn't want to think of them as people. We didn't want to admit that we were taking lives. So we convinced ourselves that what we were doing was like cutting down a tree."
Words such as migrant, immigrant, and expatriate are assigned to foreigners based on their social status and wealth, rather than their ability, achievements, or political alignment. Expatriate is a word used to describe the privileged, often light-skinned people newly residing in an area, and carries connotations that suggest ability, wealth, and trust. Meanwhile, the word immigrant is used to describe people coming to a new location to reside, and carries a much less desirable meaning.
The word "immigrant" is sometimes paired with "illegal", which harbors a profoundly derogatory connotation. Misuse of these terms—they are often used inaccurately—to describe the other, can alter the perception of a group as a whole in a negative way. Ryan Eller, the executive director of the immigrant advocacy group Define American, expressed the problem this way:
A series of language examinations found a direct relation between homophobic epithets and social cognitive distancing towards a group of homosexuals, a form of dehumanization. These epithets (e.g., faggot) were thought to function as dehumanizing labels because they tended to act as markers of deviance. One pair of studies found that subjects were more likely to associate malignant language with homosexuals, and that such language associations increased the physical distancing between the subject and the homosexual, indicating that homophobic epithets can encourage dehumanization and cognitive and physical distancing in ways that other forms of malignant language do not. Another study involved a computational linguistic analysis of dehumanizing language regarding LGBTQ individuals and groups in the New York Times from 1986 to 2015. The study used previous psychological research on dehumanization to identify four language categories: (1) negative evaluations of a target group, (2) denial of agency, (3) moral disgust, and (4) likening members of the target group to non-human entities (e.g., machines, animals, vermin). The study revealed that LGBTQ people overall have been increasingly humanized over time; however, they were found to be humanized less frequently than the New York Times's in-group identifier, American.
Aliza Luft notes that the role dehumanizing language and propaganda play in violence and genocide is far less significant than other factors, such as obedience to authority and peer pressure.
Property takeover
Property scholars define dehumanization as "the failure to recognize an individual's or group's humanity." Dehumanization often occurs alongside property confiscation. When a property takeover is coupled with dehumanization, the result is a dignity taking. There are several examples of dignity takings involving dehumanization.
From its founding, the United States repeatedly engaged in dignity takings from Native American populations, taking indigenous land in an "undeniably horrific, violent, and tragic record" of genocide and ethnocide. As recently as 2013, the degradation of a mountain sacred to the Hopi people, by spraying its peak with artificial snow made from wastewater, constituted another dignity taking by the U.S. Forest Service.
The 1921 Tulsa race massacre also constituted a dignity taking involving dehumanization. White rioters dehumanized African Americans by attacking, looting, and destroying homes and businesses in Greenwood, a predominantly Black neighborhood known as "Black Wall Street".
During the Holocaust, mass genocide—a severe form of dehumanization—accompanied the destruction and taking of Jewish property. This constituted a dignity taking.
Jewish settlers in the West Bank have been criticized for dehumanizing Palestinians and for grabbing land for illegal settlements. These illegal settlement activities involve systemic settler violence against Palestinians, military orders, and state-sanctioned support. These actions force Palestinians to gradually give up their land and farming activities, choking their sources of dignified income. Israeli soldiers sometimes actively participate in violence against civilians or look on from the sidelines.
Undocumented workers in the United States have also been subject to dehumanizing dignity takings when employers treat them as machines instead of people to justify dangerous working conditions. When harsh conditions lead to bodily injury or death, the property destroyed is the physical body.
Media-driven dehumanization
The propaganda model of Edward S. Herman and Noam Chomsky argues that corporate media are able to carry out large-scale, successful dehumanization campaigns when they promote the goals (profit-making) that the corporations are contractually obliged to maximize. State media are also capable of carrying out dehumanization campaigns, whether in democracies or dictatorships, which are pervasive enough that the population cannot avoid the dehumanizing memes.
War propaganda
National leaders use dehumanizing propaganda to sway public opinion in favor of the military elite's agenda or cause and to repel criticism and proper oversight. The George W. Bush administration used dehumanizing rhetoric that collectively described Arabs and Muslims as backwards, violent fanatics who "hate us for our freedom" to justify its invasions of Afghanistan and Iraq and covert CIA operations in the Middle East and Africa. This propaganda portrayed Arabs as a "monolithic evil" in the perception of the unwitting American public, employing news, media, language, magazine stories, television, and popular culture to portray all Muslims as Arab and all Arabs as violent terrorists who must be feared, fought, and destroyed. Racism was also used, portraying all Arabs as dark-skinned and thus racially inferior and untrustworthy.
Non-state actors
Non-state actors, terrorists in particular, have also resorted to dehumanization to further their cause. The 1960s terrorist group Weather Underground advocated violence against any authority figure and used the "police are pigs" meme to convince members that they were not harming human beings but merely killing wild animals. Likewise, rhetorical statements such as "terrorists are just scum" are acts of dehumanization.
In science, medicine, and technology
The relationship between dehumanization and science has resulted in unethical scientific research in relatively recent history. The Tuskegee syphilis experiment, Unit 731, and Nazi human experimentation on Jewish people are three such examples. In the first, African Americans with syphilis were recruited to participate in a study about the course of the disease; even when treatment and a cure were eventually developed, they were withheld from the African-American participants so that researchers could continue their study. Similarly, Nazi scientists conducted horrific experiments on Jewish people during the Holocaust, and Shiro Ishii's Unit 731 did the same to Chinese, Russian, Mongolian, American, and other captives. Both were justified in the name of research and progress, which is indicative of the far-reaching effects that a culture of dehumanization had upon these societies. When this research came to light, efforts were made to protect future research participants, and institutional review boards now exist to safeguard individuals from being exploited by scientists.
In a medical context, some dehumanizing practices have become more acceptable. While the dissection of human cadavers was seen as dehumanizing in the Dark Ages (see history of anatomy), the value of dissections as a training aid is such that they are now more widely accepted. Dehumanization has been associated with modern medicine generally and has explicitly been suggested as a coping mechanism for doctors who work with patients at the end of life. Researchers have identified six potential causes of dehumanization in medicine: deindividuating practices, impaired patient agency, dissimilarity (causes which do not facilitate the delivery of medical treatment), mechanization, empathy reduction, and moral disengagement (which could be argued to facilitate the delivery of medical treatment).
In some US states, legislation requires that a woman view ultrasound images of her fetus before having an abortion. Critics of the law argue that merely seeing an image of the fetus humanizes it and biases women against abortion. Similarly, a recent study showed that subtle humanization of medical patients appears to improve care for these patients. Radiologists evaluating X-rays reported more details to patients and expressed more empathy when a photo of the patient's face accompanied the X-rays. It appears that the inclusion of the photos counteracts the dehumanization of the medical process.
Dehumanization has applications outside traditional social contexts. Anthropomorphism (i.e., perceiving mental and physical capacities that reflect humans in nonhuman entities) is the inverse of dehumanization. Waytz, Epley, and Cacioppo suggest that the inverse of the factors that facilitate dehumanization (e.g., high status, power, and social connection) should promote anthropomorphism. That is, a low status, socially disconnected person without power should be more likely to attribute human qualities to pets or inanimate objects than a high-status, high-power, socially connected person.
Researchers have found that engaging in violent video game play diminishes perceptions of both one's own humanity and the humanity of the players who are targets of the game violence. While the players are dehumanized, the video game characters are often anthropomorphized.
Dehumanization has occurred historically under the pretense of "progress in the name of science". During the 1904 Louisiana Purchase Exposition, human zoos exhibited natives of independent tribes from around the world, most notably a young Congolese man, Ota Benga. Benga's imprisonment was put on display as a public service showcasing "a degraded and degenerate race". During this period, religion was still the driving force behind many political and scientific activities, and eugenics was widely supported among the most notable U.S. scientific communities, political figures, and industrial elites. After Benga was relocated to New York in 1906, public outcry led to the permanent ban and closure of human zoos in the United States.
In philosophy
Danish philosopher Søren Kierkegaard set out his stance against dehumanization in his teachings and interpretations of Christian theology. In his book Works of Love, he wrote that "to love one's neighbor means equality… your neighbor is every man… he is your neighbor on the basis of equality with you before God; but this equality absolutely every man has, and he has it absolutely."
In art
Spanish romanticism painter Francisco Goya often depicted the atrocities of war and brutal violence in ways that convey the process of dehumanization. In the romantic period of painting, martyrdom art was most often a means of deifying the oppressed and tormented, and it was common for Goya to depict evil personalities performing these acts; however, he broke convention by dehumanizing the martyr figures themselves: "...one would not know whom the painting depicts, so determinedly has Goya reduced his subjects from martyrs to meat".
Neo-Nazism
Neo-Nazism comprises the post-World War II militant, social, and political movements that seek to revive and reinstate Nazi ideology. Neo-Nazis employ their ideology to promote hatred and racial supremacy (often white supremacy), to attack racial and ethnic minorities (often antisemitism and Islamophobia), and in some cases to create a fascist state.
Neo-Nazism is a global phenomenon, with organized representation in many countries and international networks. It borrows elements from Nazi doctrine, including antisemitism, ultranationalism, racism, xenophobia, ableism, homophobia, anti-communism, and the desire to create a "Fourth Reich". Holocaust denial is common in neo-Nazi circles.
Neo-Nazis regularly display Nazi symbols and express admiration for Adolf Hitler and other Nazi leaders. In some European and Latin American countries, laws prohibit the expression of pro-Nazi, racist, antisemitic, or homophobic views. Nazi-related symbols are banned in many European countries (especially Germany) in an effort to curtail neo-Nazism.
Definition
The term neo-Nazism describes any post-World War II militant, social or political movements seeking to revive the ideology of Nazism in whole or in part.
The term 'neo-Nazism' can also refer to the ideology of these movements, which may borrow elements from Nazi doctrine, including ultranationalism, anti-communism, racism, ableism, xenophobia, homophobia, and antisemitism, up to and including the initiation of a Fourth Reich. Holocaust denial is a common feature, as is the incorporation of Nazi symbols and admiration of Adolf Hitler.
Neo-Nazism is considered a particular form of far-right politics and right-wing extremism.
Hyperborean racial doctrine
Neo-Nazi writers have posited a spiritual, esoteric doctrine of race, which moves beyond the primarily Darwinian-inspired materialist scientific racism popular mainly in the Anglosphere during the 20th century. Figures influential in the development of neo-Nazi racism, such as Miguel Serrano and Julius Evola (writers who are described by critics of Nazism such as the Southern Poverty Law Center as influential within what it presents as parts of "the bizarre fringes of National Socialism, past and present"), claim that the Hyperborean ancestors of the Aryans were, in the distant past, beings far higher than their current state, having suffered "involution" through mixing with the "Telluric" peoples, supposed creations of the Demiurge. Within this theory, if the "Aryans" are to return to the Golden Age of the distant past, they need to awaken the memory of the blood. An extraterrestrial origin of the Hyperboreans is often claimed. These theories draw influence from Gnosticism and Tantrism, building on the work of the Ahnenerbe. Within this racist theory, Jews are held up as the antithesis of nobility, purity and beauty.
Ecology and environmentalism
Neo-Nazism generally aligns itself with a blood and soil variation of environmentalism, which has themes in common with deep ecology, the organic movement and animal protectionism. This tendency, sometimes called "ecofascism", was represented in the original German Nazism by Richard Walther Darré who was the Reichsminister of Food from 1933 until 1942.
History
Germany and Austria, 1945–1950s
Following the defeat of Nazi Germany, the political ideology of the ruling party, Nazism, was in complete disarray. The final leader of the National Socialist German Workers' Party (NSDAP) was Martin Bormann. He died on 2 May 1945 during the Battle of Berlin, but the Soviet Union did not reveal his death to the rest of the world, and his ultimate fate remained a mystery for many years. Conspiracy theories emerged about Hitler himself, that he had secretly survived the war and fled to South America or elsewhere.
The Allied Control Council officially dissolved the NSDAP on 10 October 1945, marking the end of "Old" Nazism. A process of denazification began, and the Nuremberg trials took place, where many major leaders and ideologues were condemned to death by October 1946, while others committed suicide.
In both the East and West, surviving ex-party members and military veterans assimilated to the new reality and had no interest in constructing a "neo-Nazism". However, during the 1949 West German elections a number of Nazi advocates such as Fritz Rössler had infiltrated the national conservative Deutsche Rechtspartei, which had 5 members elected. Rössler and others left to found the more radical Socialist Reich Party (SRP) under Otto Ernst Remer. At the onset of the Cold War, the SRP favoured the Soviet Union over the United States.
In Austria, national independence had been restored, and the Verbotsgesetz (Prohibition Act) explicitly criminalised the NSDAP and any attempt at restoration. West Germany adopted a similar measure to target parties it defined as anti-constitutional: Article 21, Paragraph 2 of the Basic Law, under which the SRP was banned in 1952 for being opposed to liberal democracy.
As a consequence, some members of the nascent movement of German neo-Nazism joined the Deutsche Reichspartei, of which Hans-Ulrich Rudel was the most prominent figure. Younger members founded the Wiking-Jugend, modelled after the Hitler Youth. The Deutsche Reichspartei stood for elections from 1953 until 1961, fetching around 1% of the vote each time. Rudel befriended French-born Savitri Devi, who was a proponent of Esoteric Nazism. In the 1950s she wrote a number of books, such as Pilgrimage (1958), which concerns prominent Third Reich sites, and The Lightning and the Sun (1958), in which she claims that Adolf Hitler was an avatar of the God Vishnu. She was not alone in this reorientation of Nazism towards its Thulean roots; the Artgemeinschaft, founded by former SS member Wilhelm Kusserow, attempted to promote a new paganism.
In the German Democratic Republic (East Germany), a former member of the SA, Wilhelm Adam, founded the National Democratic Party of Germany. It reached out to those attracted by the Nazi Party before 1945 to provide them with a political outlet, so that they would not be tempted to support the far-right again or turn to the anti-communist Western Allies. Joseph Stalin wanted to use them to create a new pro-Soviet and anti-Western strain in German politics. According to top Soviet diplomat Vladimir Semyonov, Stalin even suggested that they could be allowed to continue publishing their own newspaper, Völkischer Beobachter. Meanwhile in Austria, former SS member Wilhelm Lang founded an esoteric group known as the Vienna Lodge; he popularised Nazism and occultism such as the Black Sun and ideas of Third Reich survival colonies below the polar ice caps.
With the onset of the Cold War, the Allied forces had lost interest in prosecuting anyone as part of denazification. In the mid-1950s this new political environment allowed Otto Strasser, an NS activist on the left of the NSDAP who had founded the Black Front, to return from exile. In 1956, Strasser founded the German Social Union as a Black Front successor, promoting a Strasserite "nationalist and socialist" policy; it dissolved in 1962 due to lack of support. Other Third Reich associated groups were the HIAG and Stille Hilfe, dedicated to advancing the interests of Waffen-SS veterans and rehabilitating them into the new democratic society. However, they did not claim to be attempting to restore Nazism, instead functioning as lobbying organizations for their members before the government and the two main political parties (the conservative CDU/CSU and the Nazis' one-time archenemies, the Social Democratic Party).
Many bureaucrats who served under the Third Reich continued to serve in German administration after the war. According to the Simon Wiesenthal Center, many of the more than 90,000 Nazi war criminals recorded in German files were serving in positions of prominence under Chancellor Konrad Adenauer. Not until the 1960s were the former concentration camp personnel prosecuted by West Germany in the Belzec trial, Frankfurt Auschwitz trials, Treblinka trials, Chełmno trials, and the Sobibór trial. However, the government had passed laws prohibiting Nazis from publicly expressing their beliefs.
"Universal National Socialism", 1950s–1970s
Neo-Nazism found expression outside of Germany, including in countries who fought against the Third Reich during the Second World War, and sometimes adopted pan-European or "universal" characteristics, beyond the parameters of German nationalism. The two main tendencies, with differing styles and even worldviews, were the followers of the American Francis Parker Yockey, who was fundamentally anti-American and advocated for a pan-European nationalism, and those of George Lincoln Rockwell, an American conservative.
Yockey, a neo-Spenglerian author, had written Imperium: The Philosophy of History and Politics (1949), dedicated to "the hero of the twentieth century" (namely, Adolf Hitler), and founded the European Liberation Front. He was interested more in the destiny of Europe; to this end, he advocated a National Bolshevik-esque red-brown alliance against American culture and influenced 1960s figures such as SS-veteran Jean-François Thiriart. Yockey was also fond of Arab nationalism, in particular Gamal Abdel Nasser, and saw Fidel Castro's Cuban Revolution as a positive development, visiting officials there. Yockey's views impressed Otto Ernst Remer and the radical traditionalist philosopher Julius Evola. He was constantly hounded by the FBI and was eventually arrested in 1960, before committing suicide. Domestically, Yockey's biggest sympathisers were the National Renaissance Party, including James H. Madole, H. Keith Thompson and Eustace Mullins (protégé of Ezra Pound), and the Liberty Lobby of Willis Carto.
Rockwell, an American conservative, was first politicised in the anti-communism and anti-racial integration movements before becoming anti-Jewish. In response to his opponents calling him a "Nazi", he theatrically appropriated the aesthetic elements of the NSDAP, to "own" the intended insult. In 1959, Rockwell founded the American Nazi Party and instructed his members to dress in imitation SA-style brown shirts, while flying the flag of the Third Reich. In contrast to Yockey, he was pro-American and cooperated with FBI requests, despite the party being targeted by COINTELPRO due to the mistaken belief that they were agents of Nasser's Egypt during a brief intelligence "brown scare". Later leaders of American white nationalism came to politics through the ANP, including a teenage David Duke and William Luther Pierce of the National Alliance, although they soon distanced themselves from explicit self-identification with neo-Nazism.
In 1961, the World Union of National Socialists was founded by Rockwell and Colin Jordan of the British National Socialist Movement, adopting the Cotswold Declaration. French socialite Françoise Dior was involved romantically with Jordan and his deputy John Tyndall, and was a friend of Savitri Devi, who also attended the meeting. The National Socialist Movement wore quasi-SA uniforms and was involved in street conflicts with the Jewish 62 Group. In the 1970s, Tyndall's earlier involvement with neo-Nazism would come back to haunt the National Front, which he led, as it attempted to ride a wave of anti-immigration populism and concerns over British national decline. Televised exposés on This Week in 1974 and World in Action in 1978 showed the party's neo-Nazi pedigree and damaged its electoral chances. In 1967, Rockwell was killed by a disgruntled former member. Matthias Koehl took control of the ANP and, strongly influenced by Savitri Devi, gradually transformed it into an esoteric group known as the New Order.
In Franco's Spain, certain SS refugees, most notably Otto Skorzeny, Léon Degrelle and the son of Klaus Barbie, became associated with CEDADE (Círculo Español de Amigos de Europa), an organisation which disseminated Third Reich apologetics out of Barcelona. They intersected with neo-Nazi advocates from Mark Fredriksen in France to Salvador Borrego in Mexico.
Splinter groups from the post-fascist Italian Social Movement, such as Ordine Nuovo and Avanguardia Nazionale, which were involved in the "Years of Lead", considered Nazism a point of reference, and Franco Freda created a "Nazi-Maoism" synthesis.
In Germany itself, the various Third Reich nostalgic movements coalesced around the National Democratic Party of Germany in 1964 and in Austria the National Democratic Party in 1967 as the primary sympathisers of the NSDAP past, although more publicly cautious than earlier groups.
Holocaust denial and subcultures, 1970s–1990s
Holocaust denial, the claim that six million Jews were not deliberately and systematically exterminated as an official policy of the Third Reich and Adolf Hitler, became a more prominent feature of neo-Nazism in the 1970s. Before this time, Holocaust denial had long existed as a sentiment among neo-Nazis, but it had not yet been systematically articulated as a theory with a bibliographical canon. Few of the major theorists of Holocaust denial (who call themselves "revisionists") can be uncontroversially classified as outright neo-Nazis (though some works, such as those of David Irving, put forward a clearly sympathetic view of Hitler, and the publisher Ernst Zündel was deeply tied to international neo-Nazism); however, the main interest of Holocaust denial to neo-Nazis was their hope that it would help them rehabilitate their political ideology in the eyes of the general public. Did Six Million Really Die? (1974) by Richard Verrall and The Hoax of the Twentieth Century (1976) by Arthur Butz are popular examples of Holocaust denial material.
Key developments in international neo-Nazism during this time include the radicalisation of the Vlaamse Militanten Orde under former Hitler Youth member Bert Eriksson. They began hosting an annual conference, the "Iron Pilgrimage", at Diksmuide, which drew kindred ideologues from across Europe and beyond. As well as this, the NSDAP/AO under Gary Lauck arose in the United States in 1972 and challenged the international influence of the Rockwellite WUNS. Lauck's organisation drew support from the National Socialist Movement of Denmark of Povl Riis-Knudsen and various German and Austrian figures who felt that the "National Democratic" parties were too bourgeois and insufficiently Nazi in orientation. These included Michael Kühnen, Christian Worch, Bela Ewald Althans and Gottfried Küssel of the ANS/NS, founded in 1977, which called for the establishment of a Germanic Fourth Reich. Some ANS/NS members were imprisoned for planning paramilitary attacks on NATO bases in Germany and planning to liberate Rudolf Hess from Spandau Prison. The organisation was officially banned in 1983 by the Minister of the Interior.
During the late 1970s, a British subculture came to be associated with neo-Nazism: the skinheads. Portraying an ultra-masculine, crude and aggressive image, with working-class references, some of the skinheads joined the British Movement under Michael McLaughlin (successor of Colin Jordan), while others became associated with the National Front's Rock Against Communism project, which was meant to counter the SWP's Rock Against Racism. The most significant music group involved in this project was Skrewdriver, led by Ian Stuart Donaldson. Together with ex-BM member Nicky Crane, Donaldson founded the international Blood & Honour network in 1987. By 1992 this network, with input from Harold Covington, had developed a paramilitary wing, Combat 18, which intersected with football hooligan firms such as the Chelsea Headhunters. The neo-Nazi skinhead movement spread to the United States, with groups such as the Hammerskins. It was popularised from 1986 onwards by Tom Metzger of the White Aryan Resistance. Since then it has spread across the world. Films such as Romper Stomper (1992) and American History X (1998) would fix a public perception that neo-Nazism and skinheads were synonymous.
New developments also emerged on the esoteric level, as former Chilean diplomat Miguel Serrano built on the works of Carl Jung, Otto Rahn, Wilhelm Landig, Julius Evola and Savitri Devi to bind together and develop already existing theories. Serrano had been a member of the National Socialist Movement of Chile in the 1930s and from the early days of neo-Nazism, he had been in contact with key figures across Europe and beyond. Despite this, he was able to work as an ambassador to numerous countries until the rise of Salvador Allende. In 1984 he published his book Adolf Hitler: The Ultimate Avatar. Serrano claimed that the Aryans were extragalactic beings who founded Hyperborea and lived the heroic life of Bodhisattvas, while the Jews were created by the Demiurge and were concerned only with coarse materialism. Serrano claimed that a new Golden Age can be attained if the Hyperboreans repurify their blood (supposedly the light of the Black Sun) and restore their "blood-memory." As with Savitri Devi before him, Serrano's works became a key point of reference in neo-Nazism.
Lifting of the Iron Curtain, 1990s–present
With the fall of the Berlin Wall and the collapse of the Soviet Union during the early 1990s, neo-Nazism began to spread its ideas in the East, as hostility to the triumphant liberal order was high and revanchism a widespread feeling. In Russia, during the chaos of the early 1990s, an amorphous mixture of KGB hardliners, Orthodox neo-Tsarist nostalgics (e.g., Pamyat) and explicit neo-Nazis found themselves strewn together in the same camp. They were united by opposition to the influence of the United States and to the liberalising legacy of Mikhail Gorbachev's reforms, and on the Jewish question, Soviet Zionology merged with a more explicit anti-Jewish sentiment. The most significant organisation representing this was Russian National Unity under the leadership of Alexander Barkashov, whose black-uniformed Russians marched with a red flag incorporating the swastika under the banner of Russia for Russians. These forces came together in a last gasp effort to save the Supreme Soviet of Russia against Boris Yeltsin during the 1993 Russian constitutional crisis. As well as events in Russia, in newly independent ex-Soviet states, annual commemorations for SS volunteers now took place, particularly in Latvia, Estonia and Ukraine.
The Russian developments excited German neo-Nazis, who dreamed of a Berlin–Moscow alliance against the supposedly "decadent" Atlanticist forces; a dream which had been thematic since the days of Remer. Zündel visited Russia and met with ex-KGB general Aleksandr Sterligov and other Russian National Unity members. Despite these initial aspirations, international neo-Nazism and its close affiliates in ultra-nationalism would be split over the Bosnian War between 1992 and 1995, as part of the breakup of Yugoslavia. The split would largely follow ethnic and sectarian lines: the Germans and the French largely backed the Western Catholic Croats (Lauck's NSDAP/AO explicitly called for volunteers, a call which Kühnen's Free German Workers' Party answered, while the French formed the "Groupe Jacques Doriot"), while the Russians and the Greeks backed the Orthodox Serbs (Russians from Barkashov's Russian National Unity and Eduard Limonov's National Bolshevik Front fought for the Serbs, and Golden Dawn members joined the Greek Volunteer Guard). Indeed, the revival of National Bolshevism was able to steal some of the thunder from overt Russian neo-Nazism, as ultra-nationalism was wedded to veneration of Joseph Stalin in place of Adolf Hitler, while still flirting with Nazi aesthetics.
Analogous European movements
Outside Germany, in other countries which were involved with the Axis powers and had their own native ultra-nationalist movements that sometimes collaborated with the Third Reich without being technically German-style National Socialists, revivalist and nostalgic movements have emerged in the post-war period which, as neo-Nazism has done in Germany, seek to rehabilitate their various loosely associated ideologies. These movements include neo-fascists and post-fascists in Italy; Vichyites, Pétainists and "national Europeans" in France; Ustaše sympathisers in Croatia; neo-Chetniks in Serbia; Iron Guard revivalists in Romania; and Hungarists and Horthyists in Hungary, among others.
Issues
Ex-Nazis in mainstream politics
The most significant case on an international level was the election of Kurt Waldheim to the Presidency of Austria in 1986. It came to light that Waldheim had been a member of the National Socialist German Students' League, the SA and served as an intelligence officer during the Second World War. Following this he served as an Austrian diplomat and was the Secretary-General of the United Nations from 1972 until 1981. After revelations of Waldheim's past were made by an Austrian journalist, Waldheim clashed with the World Jewish Congress on the international stage. Waldheim's record was defended by Bruno Kreisky, an Austrian Jew who served as Chancellor of Austria. The legacy of the affair lingers on, as Victor Ostrovsky has claimed the Mossad doctored the file of Waldheim to implicate him in war crimes.
Contemporary right-wing populism
Some critics have sought to draw a connection between Nazism and modern right-wing populism in Europe, but most academics do not regard the two as interchangeable. In Austria, the Freedom Party of Austria (FPÖ) served as a shelter for ex-Nazis almost from its inception. In the 1980s, scandals undermined Austria's two main parties and the economy stagnated. Jörg Haider became leader of the FPÖ and offered partial justification for Nazism, calling its employment policy effective. In the 1994 Austrian election, the FPÖ won 22 percent of the vote, including 33 percent of the vote in Carinthia and 22 percent in Vienna, showing that it had become a force capable of reversing the old pattern of Austrian politics.
Historian Walter Laqueur writes that even though Haider welcomed former Nazis at his meetings and went out of his way to address Schutzstaffel (SS) veterans, the FPÖ is not a fascist party in the traditional sense, since it has not made anti-communism an important issue, and it does not advocate the overthrow of the democratic order or the use of violence. In his view, the FPÖ is "not quite fascist", although it is part of a tradition, similar to that of 19th-century Viennese mayor Karl Lueger, which involves nationalism, xenophobic populism, and authoritarianism. Haider, who in 2005 left the Freedom Party and formed the Alliance for Austria's Future, was killed in a traffic accident in October 2008.
Barbara Rosenkranz, the Freedom Party's candidate in Austria's 2010 presidential election, was controversial for having made allegedly pro-Nazi statements. Rosenkranz is married to Horst Rosenkranz, a key member of a banned neo-Nazi party, who is known for publishing far-right books. Rosenkranz says she cannot detect anything "dishonourable" in her husband's activities.
Around the world
Europe
Armenia
The Armenian-Aryan Racialist Political Movement is a National Socialist movement in Armenia. It was founded in 2021 and supports Aryanism, antisemitism, and white supremacy.
Belgium
A Belgian neo-Nazi organization, Bloed, Bodem, Eer en Trouw (Blood, Soil, Honour and Loyalty), was created in 2004 after splitting from the international Blood and Honour network. The group rose to public prominence in September 2006, after 17 members (including 11 soldiers) were arrested under the December 2003 anti-terrorist laws and laws against racism, antisemitism and supporters of censorship. According to Justice Minister Laurette Onkelinx and Interior Minister Patrick Dewael, the suspects (11 of whom were members of the military) were preparing to launch terrorist attacks in order to "destabilize" Belgium. According to the journalist Manuel Abramowicz, of the Resistances, the extremists of the radical right have always aimed to "infiltrate the state mechanisms", including the army in the 1970s and the 1980s, through Westland New Post and the Front de la Jeunesse.
A police operation, which mobilized 150 agents, searched five military barracks (in Leopoldsburg near the Dutch border, Kleine-Brogel, Peer, Brussels (Royal military school) and Zedelgem) as well as 18 private addresses in Flanders. They found weapons, munitions, explosives and a homemade bomb large enough to make "a car explode". The leading suspect, B.T., was organizing the trafficking of weapons and was developing international links, in particular with the Dutch far-right movement De Nationale Alliantie.
Bosnia and Herzegovina
The neo-Nazi white nationalist organization Bosanski Pokret Nacionalnog Ponosa (Bosnian Movement of National Pride) was founded in Bosnia and Herzegovina in July 2009. Its model is the Waffen-SS Handschar Division, which was composed of Bosniak volunteers. It proclaimed its main enemies to be "Jews, Roma, Serbian Chetniks, the Croatian separatists, Josip Broz Tito, Communists, homosexuals and blacks". Its ideology is a mixture of Bosnian nationalism, National Socialism and white nationalism. It says "Ideologies that are not welcome in Bosnia are: Zionism, Islamism, communism, capitalism. The only ideology good for us is Bosnian nationalism because it secures national prosperity and social justice..." The group is led by a person nicknamed Sauberzwig, after the commander of the 13th SS Handschar. The group's strongest area of operations is in the Tuzla area of Bosnia.
Bulgaria
The primary neo-Nazi political party to receive attention in post-WWII Bulgaria is the Bulgarian National Union – New Democracy.
On 13 February of every year since 2003, Bulgarian neo-Nazis and like-minded far-right nationalists have gathered in Sofia to honor Hristo Lukov, a World War II-era general known for his antisemitic and pro-Nazi stance. From 2003 to 2019, the annual event was hosted by the Bulgarian National Union.
Croatia
Neo-Nazis in Croatia base their ideology on the writings of Ante Pavelić and the Ustaše, a fascist anti-Yugoslav separatist movement. The Ustaše regime committed a genocide against Serbs, Jews and Roma. At the end of World War II, many Ustaše members fled to the West, where they found sanctuary and continued their political and terrorist activities (which were tolerated due to Cold War hostilities).
In 1999, Zagreb's Square of the Victims of Fascism was renamed Croatian Nobles Square, provoking widespread criticism of Croatia's attitude towards the Holocaust; in 2000, the Zagreb City Council restored the name Square of the Victims of Fascism. Many streets in Croatia were renamed after the prominent Ustaše figure Mile Budak, which provoked outrage amongst the Serbian minority. Since 2002, there has been a reversal of this development, and streets named after Mile Budak or other persons connected with the Ustaše movement are now few or non-existent. A plaque in Slunj with the inscription "Croatian Knight Jure Francetić" was erected to commemorate Francetić, the notorious Ustaše leader of the Black Legion. The plaque remained there for four years, until it was removed by the authorities.
In 2003, the Croatian penal code was amended with provisions prohibiting the public display of Nazi symbols, the propagation of Nazi ideology, historical revisionism and Holocaust denial, but the amendments were annulled in 2004 because they had not been enacted in accordance with a constitutionally prescribed procedure. Nevertheless, since 2006 the Croatian penal code has explicitly prohibited any type of hate crime based on race, color, gender, sexual orientation, religion or national origin.
There have been instances of hate speech in Croatia, such as the use of the phrase "Srbe na vrbe!" ("[Hang] Serbs on the willow trees!"). In 2004, an Orthodox church was spray-painted with pro-Ustaše graffiti. During some protests in Croatia, supporters of Ante Gotovina and other then-suspected war criminals (all acquitted in 2012) have carried nationalist symbols and pictures of Pavelić. On 17 May 2007, a concert in Zagreb by Thompson, a popular Croatian singer, was attended by 60,000 people, some of them wearing Ustaše uniforms. Some gave Ustaše salutes and shouted the Ustaše slogan "Za dom spremni" ("For the homeland – ready!"). This event prompted the Simon Wiesenthal Center to publicly issue a protest to the Croatian president. Cases of the display of Ustaše memorabilia have been recorded at the Bleiburg commemoration held annually in Austria.
Czech Republic
The government of the Czech Republic strictly punishes neo-Nazism (Czech: Neonacismus). According to a report by the Ministry of the Interior of the Czech Republic, neo-Nazis committed more than 211 crimes in 2013. The Czech Republic has various neo-Nazi groups. One of them is the group Wotan Jugend, based in Germany.
Denmark
The National Socialist Movement of Denmark, formed in 1991, was an openly neo-Nazi party that actively promoted Nazi ideology in Denmark. The party did not gain any political influence and was regarded as a failed political project by the neo-Nazi expert Frede Farmand. Long-time party leader Johnni Hansen was replaced by Esben Rohde Kristensen in 2010, which resulted in a large number of party members leaving. While the party has never been formally dissolved, there has been very little activity from its core members since 2010. Former neo-Nazi Daniel Carlsen formed the small nationalist party Party of the Danes in 2011, which officially rejected Nazism but was nonetheless categorized as such by the politics professor Peter Nedergaard. It was dissolved in 2017 after Carlsen announced his retirement from politics.
Estonia
In 2006, Roman Ilin, a Jewish theatre director from St. Petersburg, Russia, was attacked by neo-Nazis when returning from a tunnel after a rehearsal. Ilin subsequently accused the Estonian police of indifference after he reported the incident. When a dark-skinned French student was attacked in Tartu, the head of an association of foreign students claimed that the attack was characteristic of a wave of neo-Nazi violence. An Estonian police official, however, stated that there had been only a few cases involving foreign students over the previous two years. In November 2006, the Estonian government passed a law banning the display of Nazi symbols.
The 2008 United Nations Human Rights Council Special Rapporteur's Report noted that community representatives and non-governmental organizations devoted to human rights had pointed out that neo-Nazi groups were active in Estonia—particularly in Tartu—and had perpetrated acts of violence against non-European minorities.
The neo-Nazi terrorist organization Feuerkrieg Division was founded and operates in the country, and some members of the Conservative People's Party of Estonia have been linked to it.
Finland
In Finland, neo-Nazism is often connected to the 1930s and 1940s fascist and pro-Nazi Patriotic People's Movement (IKL), its youth movement Blues-and-Blacks and its predecessor, the Lapua Movement. Post-war fascist groups such as the Patriotic People's Movement (1993), the Patriotic People's Front, the Patriotic National Movement, the Blue-and-Black Movement and many others consciously copy the style of the movement and look up to its leaders as inspiration. A Finns Party councillor and police officer in Seinäjoki caused a small scandal by wearing the fascist blue-and-black uniform.
During the Cold War, all parties deemed fascist were banned in accordance with the Paris Peace Treaties, and all former fascist activists had to find new political homes. Despite Finlandization, many continued in public life. Three former members of the Waffen-SS served as ministers: the Finnish SS Battalion officers Sulo Suorttanen (Centre Party) and Pekka Malinen (People's Party), as well as Mikko Laaksonen (Social Democrat), a soldier in the Finnish SS-Company formed of pro-Nazi defectors. Neo-Nazi activism was limited to small illegal groups, such as the clandestine Nazi occultist group led by Pekka Siitoin, which made headlines after the arson and bombing of the printing houses of the Communist Party of Finland. His associates also sent letter bombs to leftists, including the headquarters of the Finnish Democratic Youth League. Another group, the "New Patriotic People's Movement", bombed the left-wing Kansan Uutiset newspaper and the embassy of communist Bulgaria. Seppo Seluska, a member of the Nordic Realm Party, was convicted of the torture and murder of a gay Jewish person.
The skinhead culture gained momentum during the late 1980s and peaked during the late 1990s. In 1991, Finland received a number of Somali immigrants, who became the main target of Finnish skinhead violence in the following years, including four attacks using explosives and a racist murder. Asylum seeker centres were attacked; in Joensuu, skinheads forced their way into an asylum seeker centre and started shooting with shotguns. In the worst cases, Somalis were assaulted by 50 skinheads at a time.
The most prominent neo-Nazi group is the Nordic Resistance Movement, founded in 2006 and proscribed in 2019, which is tied to multiple murders, attempted murders and assaults on political enemies. Politicians from the Finns Party, the second biggest Finnish party, have frequently supported far-right and neo-Nazi movements such as the Finnish Defense League, Soldiers of Odin, the Nordic Resistance Movement, Rajat Kiinni (Close the Borders), and Suomi Ensin (Finland First).
The NRM, the Finns Party and other far-right nationalist parties organize an annual torch march demonstration in Helsinki on Finnish independence day in memory of the Finnish SS-battalion. The march ends at the Hietaniemi cemetery, where members visit the tomb of Carl Gustaf Emil Mannerheim and the monument to the Finnish SS Battalion. The event is protested by antifascists, and counterdemonstrators have been violently assaulted by NRM members acting as security. The demonstration attracts close to 3,000 participants according to police estimates, and hundreds of officers patrol Helsinki to prevent violent clashes.
France
In France, the most enthusiastic collaborationists during the German occupation of France had been the National Popular Rally of Marcel Déat (former SFIO members) and the French Popular Party of Jacques Doriot (former French Communist Party members). These two groups, like the Germans, saw themselves as combining ultra-nationalism and socialism. In the south existed the vassal state of Vichy France under the military "Hero of Verdun", Marshal Philippe Pétain, whose "National Revolution" emphasised an authoritarian Catholic conservative politics. Following the liberation of France and the creation of the Fourth French Republic, collaborators were prosecuted during the épuration légale (legal purge), and nearly 800 were put to death for treason under Charles de Gaulle.
In the aftermath of the Second World War, the main concern of the French radical right was the collapse of the French Empire, in particular the Algerian War, which led to the creation of the OAS. Outside of this, individual fascistic activists such as Maurice Bardèche (brother-in-law of Robert Brasillach), as well as SS-veterans Saint-Loup and René Binet, were active in France and involved in the European Social Movement and later the New European Order, alongside similar groups from across Europe. Early neo-fascist groups included Jeune Nation, which introduced the Celtic cross into use by radical right groups (an association which would spread internationally). A "neither East, nor West" pan-Europeanism was most popular among French fascistic activists until the late 1960s, partly motivated by feelings of national vulnerability following the collapse of their empire; thus the Belgian SS-veteran Jean-François Thiriart's group Jeune Europe also had a considerable French contingent.
It was in the 1960s, during the Fifth French Republic, that a considerable upturn in French neo-fascism occurred, some of it in response to the protests of 1968. The most explicitly pro-Nazi of these groups was the FANE of Mark Fredriksen. Neo-fascist groups included Pierre Sidos' Occident, the Ordre Nouveau (which was banned after violent clashes with the Trotskyist LCR) and the student-based Groupe Union Défense. A number of these activists, such as François Duprat, were instrumental in founding the Front National under Jean-Marie Le Pen; but the FN also included a broader selection from the French hard-right, including not only these neo-fascist elements but also Catholic integrists, monarchists, Algerian War veterans, Poujadists and national-conservatives. Others from these neo-fascist micro-groups formed the Parti des forces nouvelles, working against Le Pen.
Within the FN itself, Duprat founded the FANE-backed Groupes nationalistes révolutionnaires faction, which he led until his assassination in 1978. The subsequent history of the French hard right has been one of conflict between the national-conservative-controlled FN and "national revolutionary" (fascistic and National Bolshevik) splinter or opposition groups. The latter include groups in the tradition of Thiriart and Duprat, such as the Parti communautaire national-européen, Troisième voie, the Nouvelle Résistance of Christian Bouchet, Unité Radicale and most recently the Bloc identitaire. Direct splits from the FN include the Parti nationaliste français et européen, a FANE revival founded in 1987 and disbanded in 2000. Neo-Nazi organizations are outlawed in the Fifth French Republic, yet a significant number of them still exist.
Germany
Following the failure of the National Democratic Party of Germany in the election of 1969, small groups committed to the revival of Nazi ideology began to emerge in Germany. The NPD splintered, giving rise to paramilitary Wehrsportgruppen ("military sports groups"). These groups attempted to organize under a national umbrella organization, the Action Front of National Socialists/National Activists. Neo-Nazi movements in East Germany began as a rebellion against the Communist regime; the banning of Nazi symbols helped neo-Nazism to develop as an anti-authoritarian youth movement. Mail-order networks developed to send illegal Nazi-themed music cassettes and merchandise into Germany.
Turks in Germany have been victims of neo-Nazi violence on several occasions. In 1992, two young girls were killed in the Mölln arson attack along with their grandmother; nine others were injured. In 1993, five Turks were killed in the Solingen arson attack. In response to the fire, Turkish youth in Solingen rioted, chanting "Nazis out!" and "We want Nazi blood". In other parts of Germany, police had to intervene to protect skinheads from assault. The Hoyerswerda and Rostock-Lichtenhagen riots, which targeted migrants and ethnic minorities living in Germany, also took place during the 1990s.
Between 2000 and 2007, eight Turkish immigrants, one Greek German and a German policewoman were murdered by the neo-Nazi National Socialist Underground (NSU). The NSU had its roots in the former East German area of Thuringia, which The Guardian identified as "one of the heartlands of Germany's radical right". The German intelligence services have been criticized for extravagant distributions of cash to informants within the far-right movement. Tino Brandt publicly boasted on television that he had received around €100,000 in funding from the German state. Though Brandt did not give the state "useful information", the funding supported recruitment efforts in Thuringia during the early 1990s. (Brandt was eventually sentenced to five and a half years in prison for 66 counts of child prostitution and child sexual abuse.)
Police were only able to locate the killers when they were tipped off following a botched bank robbery in Eisenach. As the police closed in on them, the two men committed suicide. They had evaded capture for 13 years. Beate Zschäpe, who had been living with the two men in Zwickau, turned herself in to the German authorities a few days later. Zschäpe's trial began in May 2013; she was charged with nine counts of murder. She pleaded "not guilty". According to The Guardian, the NSU may have enjoyed protection and support from certain "elements of the state". Anders Behring Breivik, a fan of Zschäpe's, reportedly sent her a letter from prison in 2012.
According to the annual report of Germany's interior intelligence service (Verfassungsschutz) for 2012, at the time there were 26,000 right-wing extremists living in Germany, including 6,000 neo-Nazis. In January 2020, Combat 18 was banned in Germany, and raids directed against the organization were made across the country. In March 2020, United German Peoples and Tribes, part of Reichsbürger, a movement that rejects the legitimacy of the German state, was raided by the German police. Under the German Criminal Code, the public use of symbols of unconstitutional organizations (Strafgesetzbuch § 86a) and public incitement, which includes Holocaust denial (§ 130), are crimes.
Greece
The far-right political party Golden Dawn (Χρυσή Αυγή – Chrysi Avyi) is generally labelled neo-Nazi, although the group rejects this label. A few Golden Dawn members participated in the Bosnian War in the Greek Volunteer Guard (GVG) and were present in Srebrenica during the Srebrenica massacre. The party has its roots in the military regime of Georgios Papadopoulos.
There is often collaboration between the state and neo-Nazi elements in Greece. In 2018, during the trial of sixty-nine members of the Golden Dawn party, evidence was presented of the close ties between the party and the Hellenic Police.
Golden Dawn has spoken out in favour of the Assad regime in Syria, and the Strasserist group Black Lily have claimed to have sent mercenaries to Syria to fight alongside the Syrian regime, specifically mentioning their participation in the Battle of al-Qusayr. In the 6 May 2012 legislative election, Golden Dawn received 6.97% of the votes, entering the Greek parliament for the first time with 21 representatives, but when the elected parties were unable to form a coalition government a second election was held in June 2012. Golden Dawn received 6.92% of the votes in the June election and entered the Greek parliament with 18 representatives.
Since 2008, neo-Nazi violence in Greece has targeted immigrants, leftists and anarchist activists. In 2009, certain far-right groups announced that Agios Panteleimonas in Athens was off limits to immigrants. Neo-Nazi patrols affiliated with the Golden Dawn party began attacking migrants in this neighborhood. The violence continued escalating through 2010. In 2013, after the murder of anti-fascist rapper Pavlos Fyssas, the number of hate crimes in Greece declined for several years until 2017. Many of the crimes in 2017 have been attributed to other groups like the Crypteia Organisation and Combat 18 Hellas.
Hungary
In Hungary, the historical political party that allied itself ideologically with German National Socialism and drew inspiration from it was the Arrow Cross Party of Ferenc Szálasi. Its members referred to themselves explicitly as National Socialists, and within Hungarian politics this tendency is known as Hungarism. After the Second World War, exiles such as Árpád Henney kept the Hungarist tradition alive. Following the fall of the Hungarian People's Republic in 1989, a Marxist–Leninist state and a member of the Warsaw Pact, many new parties emerged. Amongst these was the Hungarian National Front of István Győrkös, a Hungarist party that explicitly considered itself heir to Arrow Cross-style National Socialism. In the 2000s, Győrkös' movement moved closer to a National Bolshevik and neo-Eurasianist position aligned with Aleksandr Dugin, cooperating with the Hungarian Workers' Party. Some Hungarists opposed this and founded the Pax Hungarica Movement.
In modern Hungary, the ultranationalist Jobbik is regarded by some scholars as a neo-Nazi party; for example, it has been termed as such by Randolph L. Braham. The party denies being neo-Nazi, although "there is extensive proof that the leading members of the party made no effort to hide their racism and anti-Semitism." Rudolf Paksa, a scholar of the Hungarian far-right, describes Jobbik as "anti-Semitic, racist, homophobic and chauvinistic" but not as neo-Nazi because it does not pursue the establishment of a totalitarian regime. Historian Krisztián Ungváry writes that "It is safe to say that certain messages of Jobbik can be called open neo-Nazi propaganda. However, it is quite certain that the popularity of the party is not due to these statements."
Italy
During the 1950s, the neo-fascist Italian Social Movement moved closer to bourgeois conservative politics on the domestic front, which led radical youths to found hardline splinter groups, such as Pino Rauti's Ordine Nuovo (later succeeded by Ordine Nero) and Stefano Delle Chiaie's Avanguardia Nazionale. These organisations were influenced by the esotericism of Julius Evola and took the Waffen-SS and the Romanian leader Corneliu Zelea Codreanu as reference points, moving beyond Italian fascism. They were implicated in paramilitary attacks from the late 1960s to the early 1980s, such as the Piazza Fontana bombing. Delle Chiaie had even assisted Junio Valerio Borghese in a failed 1970 coup attempt known as the Golpe Borghese, which attempted to reinstate a fascist state in Italy.
Ireland
The National Socialist Irish Workers Party, a small party, was active between 1968 and the late 1980s, producing neo-Nazi propaganda pamphlets and sending threatening messages to Jews and Black people living in Ireland.
Netherlands
Noteworthy neo-Nazi movements and parties in the Netherlands include the National European Social Movement (NESB), the Dutch People's Union (NVU), the National Alliance (NA), and the Nationalist People's Movement (NVB). Individuals of note have included Waffen-SS volunteer and NESB founder Paul van Tienen, war-time collaborator and NESB co-founder Jan Wolthuis, former NVU member Bernhard Postma, the "Black Widow" Florentine Rost van Tonningen, former NVU leader Joop Glimmerveen, CP/CP'86 member and NVB leader Wim Beaux, former CP/CP'86 member and NA leader Jan Teijn, former NVU member and "Hitler-lookalike" Stefan Wijkamp, former CP'86 member and current NVU leader Constant Kusters, and former NVU member and NA leader Virginia Kapić.
Both the General Intelligence and Security Service and non-governmental initiatives, such as the far-left anti-fascist research group Kafka, research neo-Nazism and other forms of political extremism. They have attested to the local presence of international movements such as Blood & Honour, Combat 18, the Racial Volunteer Force, and The Base, and have expressed concern at the online dissemination of alt-right and far-right accelerationist thought in the Netherlands.
Poland
Under the Polish Constitution, promoting any totalitarian system such as Nazism, fascism, or communism, as well as inciting violence or racial hatred, is illegal. This is reinforced by the Polish Penal Code, under which discrediting any group or person on national, religious, or racial grounds carries a sentence of up to three years.
Although several small far-right and antisemitic organisations exist, most notably NOP and ONR (both of which exist legally), they generally adhere to Polish nationalism and National Democracy, traditions in which Nazism is considered contrary to ultra-nationalist principles. Although they are classed as nationalist and fascist movements, they are at the same time considered anti-Nazi. Some of their symbols and practices may resemble neo-Nazi features, but these groups frequently dissociate themselves from Nazism, claiming that such acts are unpatriotic, and argue that Nazism misappropriated or slightly altered several pre-existing symbols and features; they distinguish, for example, the Roman salute from the Nazi salute.
Self-declared neo-Nazi movements in Poland frequently treat Polish culture and traditions with contempt, are anti-Christian, and translate various texts from German; for these reasons they are regarded as movements favouring Germanisation.
According to several journalistic investigations, the Polish government turns a blind eye to these groups, which are free to spread their ideology; officials frequently dismiss their existence as a conspiracy theory, dismiss their acts as political provocations, deem them too insignificant to pose a threat, or attempt to justify or diminish the seriousness of their actions.
Russia
Some observers have noted an irony in Russians embracing Nazism, because one of Hitler's ambitions at the start of World War II was the Generalplan Ost (Master Plan East), which envisaged the extermination, expulsion, or enslavement of most or all Slavs of central and eastern Europe (e.g., Russians, Ukrainians, and Poles). By the end of the Nazi invasion of the Soviet Union, over 25 million Soviet citizens had died.
The first reports of neo-Nazi organizations in the USSR appeared in the second half of the 1950s. In some cases, the participants were attracted primarily by the aesthetics of Nazism (rituals, parades, uniforms, the cult of physical fitness, architecture). Other organizations were more interested in the ideology of the Nazis, their program, and the image of Adolf Hitler. The formation of neo-Nazism in the USSR dates back to the turn of the 1960s and 1970s; during this period, these organizations still preferred to operate underground.
Modern Russian neo-paganism took shape in the second half of the 1970s and is associated with the activities of supporters of antisemitism, especially the Moscow Arabist Valery Yemelyanov (also known as "Velemir") and the former dissident and neo-Nazi activist Alexey Dobrovolsky (also known as "Dobroslav").
In Soviet times, the founder of the movement of Peterburgian Vedism (a branch of Slavic neopaganism) Viktor Bezverkhy (Ostromysl) revered Hitler and Heinrich Himmler and propagated racial and antisemitic theories in a narrow circle of his students, calling for the deliverance of mankind from "inferior offspring", allegedly arising from interracial marriages. He called such "inferior people" "bastards", referred to them as "Zhyds, Indians or gypsies and mulattoes" and believed that they prevent society from achieving social justice.
The first public manifestations of neo-Nazis in Russia took place in 1981 in Kurgan, and then in Yuzhnouralsk, Nizhny Tagil, Sverdlovsk, and Leningrad.
In 1982, on Hitler's birthday, a group of Moscow high school students held a Nazi demonstration on Pushkinskaya Square.
Russian National Unity (RNE) was a neo-Nazi group founded in 1990 and led by Alexander Barkashov, who claimed to have members in 250 cities. RNE adopted the swastika as its symbol and saw itself as the avant-garde of a coming national revolution. It was critical of other major far-right organizations, such as the Liberal Democratic Party of Russia (LDPR). As of 1997, RNE members, called soratniki (comrades-in-arms), received combat training at locations near Moscow, and many of them worked as security officers or armed guards. RNE was banned by a Moscow court in 1999, after which the group faded away.
In 2007, it was claimed that Russian neo-Nazis accounted for "half of the world's total".
On 15 August 2007, Russian authorities arrested a student for allegedly posting a video on the Internet which appears to show two migrant workers being beheaded in front of a red and black swastika flag. Alexander Verkhovsky, the head of a Moscow-based center that monitors hate crime in Russia, said, "It looks like this is the real thing. The killing is genuine ... There are similar videos from the Chechen war. But this is the first time the killing appears to have been done intentionally."
Atomwaffen Division Russland (AWDR) is a neo-Nazi terrorist group in Russia that Russian officials have tied to multiple mass-murder plots. AWDR was founded by former members of the defunct National Socialist Society, which was responsible for 27 murders, and is connected to a local chapter of the Order of Nine Angles responsible for rapes, ritual murders and drug trafficking. Russian authorities raided an Atomwaffen compound in Ulan-Ude and uncovered illegal weapons and explosives.
Serbia
An example of neo-Nazism in Serbia is the group Nacionalni stroj. In 2006 charges were brought against 18 leading members. Besides political parties, there are a few militant neo-Nazi organizations in Serbia, such as Blood & Honour Serbia and Combat 18.
Slovakia
The Slovak political party Kotlebists – People's Party Our Slovakia, which is represented in the National Council and the European Parliament, is widely characterized as neo-Nazi. The party has softened its image over time and now disputes that it is fascist or neo-Nazi, even suing a media outlet that described it as neo-Nazi. As of 2020, the party spokesperson was Ondrej Durica, a former member of a neo-Nazi band known as White Resistance. A 2020 candidate, Andrej Medvecky, was convicted of attacking a black man while shouting racial slurs; another candidate, Anton Grňo, was fined for making a fascist salute. The party still celebrates 14 March, the anniversary of the founding of the fascist first Slovak Republic. In 2020, party leader Marian Kotleba was facing trial for writing checks for 1,488 euros, alleged to be a reference to the neo-Nazi codes 14 (the Fourteen Words) and 88 ("Heil Hitler").
Spain
Spanish neo-Nazism is often connected to the country's Francoist and Falangist past, and is nurtured by the ideology of National Catholicism.
According to a study by the newspaper ABC, black people have suffered the most attacks by neo-Nazi groups, followed by Maghrebis and Latin Americans. Neo-Nazis have also caused deaths among anti-fascists, such as the murder of the Madrid-born sixteen-year-old Carlos Palomino, who was stabbed by a soldier in the Legazpi metro station in Madrid on 11 November 2007.
There have been other neo-Nazi cultural organizations such as the Spanish Circle of Friends of Europe (CEDADE) and the Circle of Indo-European Studies (CEI).
The extreme right has little electoral support. According to the voting data of the 2014 European elections, these groups together received 0.36% of the vote, excluding the Plataforma per Catalunya (PxC) party, which received 66,007 votes (0.39%). The largest extreme-right party, FE de las JONS, obtained 0.13% of the vote (21,577 votes), after doubling its results after the crisis; it is followed by La España en Marcha (LEM) with 0.1% of the vote, National Democracy (DN) with 0.08%, and the Republican Social Movement (MSR) with 0.05%.
Sweden
Neo-Nazi activities in Sweden have previously been limited to white supremacist groups, few of which have more than a few hundred members. The main neo-Nazi organization is the Nordic Resistance Movement, a political movement that engages in martial arts training and paramilitary exercises and has been called a terrorist group. It is also active in Norway and Denmark; its branch in Finland was banned in 2019.
Switzerland
The neo-Nazi and white power skinhead scene in Switzerland saw significant growth in the 1990s and 2000s. This is reflected in the foundation of the Partei National Orientierter Schweizer in 2000, which brought an improved organizational structure to the neo-Nazi and white supremacist scene.
Ukraine
In 1991, the Social-National Party of Ukraine (SNPU) was founded. The party combined radical nationalism and neo-Nazi features, and was characterized as a radical right-wing populist party that combined elements of ethnic ultranationalism and anti-communism. During the 1990s, it was accused of neo-Nazism due to its recruitment of skinheads and use of neo-Nazi symbols. When Oleh Tyahnybok was elected party leader in 2004, he made efforts to moderate the party's image by changing its name to the All-Ukrainian Association "Svoboda", changing its symbols, and expelling neo-Nazi and neo-fascist groups. Some commentators continued to consider it neo-Nazi: in 2016, The Nation reported that "in Ukrainian municipal elections held [in October 2015], the neo-Nazi Svoboda party won 10 percent of the vote in Kyiv and placed second in Lviv. The Svoboda party's candidate won the mayoral election in the city of Konotop." In 2015, the Svoboda party mayor of Konotop reportedly had the number "14/88" displayed on his car, refused to display the city's official flag because it contains a Star of David, and implied that Jews were responsible for the Holodomor.
The topic of Ukrainian nationalism and its alleged relationship to neo-Nazism came to the fore in polemics about the more radical elements involved in the Euromaidan protests and subsequent Russo-Ukrainian War from 2014 onward. Some Russian, Latin American, U.S. and Israeli media have portrayed the Ukrainian nationalists in the conflict as neo-Nazi.
The Azov Battalion, founded in 2014, has been described as a far-right militia, with connections to neo-Nazism and members wearing neo-Nazi and SS symbols and regalia, as well as expressing neo-Nazi views.
According to the researcher Vyacheslav Likhachev, members of far-right (including neo-Nazi) groups played an important role on the pro-Russian side of the conflict, arguably more so than on the Ukrainian side, especially during early 2014. Members and former members of the National Bolshevik Party, Russian National Unity (RNU), the Eurasian Youth Union, and Cossack groups participated in the recruitment of separatists. A former RNU member, Pavel Gubarev, was the founder of the Donbas People's Militia and the first "governor" of the Donetsk People's Republic. RNU is particularly linked to the Russian Orthodox Army, one of a number of separatist units described as "pro-Tsarist" and "extremist" Orthodox nationalist formations. 'Rusich' is part of the Wagner Group, a Russian mercenary group in Ukraine which has been linked to far-right extremism. Later, the pro-Russian far-right groups became less important in the Donbas, and the need for Russian radical nationalists started to disappear.
The radical nationalist group С14, whose members openly expressed neo-Nazi views, gained notoriety in 2018 for being involved in violent attacks on Romany camps.
United Kingdom
In 1962, the British neo-Nazi activist Colin Jordan formed the National Socialist Movement (NSM) which later became the British Movement (BM) in 1968.
John Tyndall, a long-term neo-Nazi activist in the UK, led a break-away from the National Front to form an openly neo-Nazi party named the British National Party. In the 1990s, the party formed a group for protecting its meetings named Combat 18, which later grew too violent for the party to control and began to attack members of the BNP who were not perceived as supportive of neo-Nazism. Under the subsequent leadership of Nick Griffin, the BNP distanced itself from neo-Nazism, although many members (including Griffin himself) have been accused of links to other neo-Nazi groups.
Sonnenkrieg Division is a neo-Nazi terrorist organization in the United Kingdom, linked to the international Atomwaffen Division network. Multiple members have been jailed for plotting terror attacks against minorities. Sonnenkrieg Division has been proscribed as a terrorist organization in the United Kingdom and Australia. It is also closely tied to the Order of Nine Angles, which has been linked to the murders of Bibaa Henry and Nicole Smallman.
The UK has also been a source of neo-Nazi music, such as the band Skrewdriver.
Asia
Iran
Several neo-Nazi groups were active in Iran, although they are now defunct. Advocates of Nazism continue to exist in Iran and are mainly based on the Internet.
Israel
Neo-Nazi activity is not common or widespread in Israel, and the few reported activities have all been the work of extremists, who were punished severely. One notable case is that of Patrol 36, a cell in Petah Tikva made up of eight teenage immigrants from the former Soviet Union who had been attacking foreign workers and gay people, and vandalizing synagogues with Nazi imagery. These neo-Nazis were reported to have operated in cities across Israel and have been described as being influenced by the rise of neo-Nazism in Europe, mostly by similar movements in Russia and Ukraine, as the rise of the phenomenon is widely credited to immigrants from those two states, the largest sources of immigration to Israel. Widely publicized arrests have led to calls to reform the Law of Return to permit the revocation of the Israeli citizenship of neo-Nazis and their subsequent deportation.
Japan
Since 1982, the neo-Nazi National Socialist Japanese Workers' Party has operated in Japan, currently under the leadership of Kazunari Yamada, who has praised Hitler and denied the Holocaust.
Mongolia
Since 2008, Mongolian neo-Nazi groups have defaced buildings in Ulaanbaatar, smashed Chinese shopkeepers' windows, and killed Chinese immigrants. The neo-Nazi Mongols' targets for violence are Chinese people, Koreans, Mongol women who have sex with Chinese men, and LGBT people. They wear Nazi uniforms and revere the Mongol Empire and Genghis Khan. Though the leaders of Tsagaan Khass, one such group, say they do not support violence, they are self-proclaimed Nazis. "Adolf Hitler was someone we respect. He taught us how to preserve national identity," said the 41-year-old co-founder, who calls himself Big Brother. "We don't agree with his extremism and starting the Second World War. We are against all those killings, but we support his ideology. We support nationalism rather than fascism." Some have ascribed the phenomenon to poor historical education.
Taiwan
The National Socialism Association (NSA) is a neo-Nazi political organisation founded in Taiwan in September 2006 by Hsu Na-chi, at that time a 22-year-old female political science graduate of Soochow University. The NSA has an explicit stated goal of obtaining the power to govern the state. The Simon Wiesenthal Centre condemned the National Socialism Association on 13 March 2007 for championing the former Nazi dictator and blaming democracy for social unrest in Taiwan.
Turkey
A neo-Nazi group existed in 1969 in İzmir, when a group of former Republican Villagers Nation Party members (precursor party of the Nationalist Movement Party) founded the association "Nasyonal Aktivite ve Zinde İnkişaf" (National Activity and Vigorous Development). The club maintained two combat units. The members wore SA uniforms and used the Hitler salute. One of the leaders (Gündüz Kapancıoğlu) was re-admitted to the Nationalist Movement Party in 1975.
Apart from neo-fascist Grey Wolves and the Turkish ultranationalist Nationalist Movement Party, there are some neo-Nazi organizations in Turkey such as the Turkish Nazi Party or the National Socialist Party of Turkey, which are mainly based on the Internet.
The National Front Party (Ulusal Cephe Partisi) adheres to neo-Nazism, spreads Nazi material translated into Turkish, and targets Jews, Arabs and Africans. The party has about 1,000 members and is affiliated with the racist Victory Party.
The neo-Nazi Ataman Brotherhood (Ataman Kardeşliği) patrols streets in Turkey and attacks Syrian and Afghan refugees.
Americas
Brazil
Several Brazilian neo-Nazi gangs appeared in the 1990s in Southern and Southeastern Brazil, regions with mostly white people, with their acts gaining more media coverage and public notoriety in the 2010s. Some members of Brazilian neo-Nazi groups have been associated with football hooliganism. Their targets have included African, South American and Asian immigrants; Jews, Muslims, Catholics and atheists; Afro-Brazilians and internal migrants with origins in the northern regions of Brazil (who are mostly brown-skinned or Afro-Brazilian); homeless people, prostitutes; recreational drug users; feminists and—more frequently reported in the media—gay people, bisexuals, and transgender and third-gender people. News of their attacks has played a role in debates about anti-discrimination laws in Brazil (including to some extent hate speech laws) and the issues of sexual orientation and gender identity.
Canada
Neo-Nazism in Canada began with the formation of the Canadian Nazi Party in 1965. In the 1970s and 1980s, neo-Nazism continued to spread in the country as organizations including the Western Guard Party and Church of the Creator (later renamed Creativity) promoted white supremacist ideals. Founded in the United States in 1973, Creativity calls for white people to wage racial holy war (Rahowa) against Jews and other perceived enemies.
Don Andrews founded the Nationalist Party of Canada in 1977. The purported goals of the unregistered party are "the promotion and maintenance of European Heritage and Culture in Canada", but the party is known for antisemitism and racism. Many influential neo-Nazi leaders, such as Wolfgang Droege, were affiliated with the party, but many of its members left to join the Heritage Front, which was founded in 1989.
Droege founded the Heritage Front in Toronto at a time when leaders of the white supremacist movement were "disgruntled about the state of the radical right" and wanted to unite unorganized groups of white supremacists into an influential and efficient group with common objectives. Plans for the organization began in September 1989, and the formation of the Heritage Front was formally announced a couple of months later in November. In the 1990s, George Burdi of Resistance Records and the band Rahowa popularized the Creativity movement and the white power music scene.
On September 18, 2020, Toronto Police arrested 34-year-old Guilherme "William" Von Neutegem and charged him with the murder of Mohamed-Aslim Zafis, the caretaker of a local mosque who was found dead with his throat cut. The Toronto Police Service said the killing is possibly connected to the stabbing murder of Rampreet Singh a few days earlier, a short distance from where Zafis was killed. Von Neutegem is a member of the Order of Nine Angles (O9A); social media accounts established as belonging to him promote the group and included recordings of Von Neutegem performing satanic chants, and his home contained an altar featuring a monolith adorned with the O9A symbol. According to Evan Balgord of the Canadian Anti-Hate Network, the network is aware of more O9A members in Canada and of their affiliated organization, Northern Order, a proscribed neo-Nazi terrorist organization in Canada. Northern Order members have been arrested for trafficking explosives and firearms, and the group has counted active members of the Canadian Armed Forces, including an identified member of the CJIRU, among its ranks.
Controversy and dissension have left many Canadian neo-Nazi organizations dissolved or weakened.
Chile
After the dissolution of the National Socialist Movement of Chile (MNSCH) in 1938, notable former members of MNSCH migrated into Partido Agrario Laborista (PAL), obtaining high positions. Not all former MNSCH members joined the PAL; some continued to form parties that followed the MNSCH model until 1952. A new old-school Nazi party was formed in 1964 by school teacher Franz Pfeiffer. Among the activities of this group were the organization of a Miss Nazi beauty contest and the formation of a Chilean branch of the Ku Klux Klan. The party disbanded in 1970. Pfeiffer attempted to restart it in 1983 in the wake of a wave of protests against the Augusto Pinochet regime.
Nicolás Palacios considered the "Chilean race" to be a mix of two bellicose master races: the Visigoths of Spain and the Mapuche (Araucanians) of Chile. Palacios traced the origins of the Spanish component of the "Chilean race" to the coast of the Baltic Sea, specifically to Götaland in Sweden, one of the supposed homelands of the Goths. Palacios claimed that both the blonde-haired and the bronze-coloured Chilean mestizo share a "moral physiognomy" and a masculine psychology. He opposed immigration from Southern Europe, arguing that mestizos derived from southern Europeans lack "cerebral control" and are a social burden.
Costa Rica
Several fringe neo-Nazi groups have existed in Costa Rica, some with an online presence since around 2003. The groups normally target Jewish Costa Ricans, Afro-Costa Ricans, communists, gay people, and especially Nicaraguan and Colombian immigrants. In 2012 the media discovered a neo-Nazi police officer inside the Public Force of Costa Rica, who was fired and later died by suicide in April 2016, reportedly due to a lack of job opportunities and threats from anti-fascists.
In 2015, the Simon Wiesenthal Center asked the Costa Rican government to shut down a store in San José that sells Nazi paraphernalia, Holocaust denial books and other products associated with Nazism.
In 2018, a series of Facebook pages of neo-Nazi inclination carried out, openly or discreetly, a vast campaign instigating xenophobic hatred by recycling old news or posting fake news, exploiting anti-immigrant sentiment after three homicides of tourists allegedly committed by migrants (although in one of the homicides the suspect was Costa Rican). A rally against the country's migration policy was held on 19 August 2018, in which neo-Nazis and hooligans took part. Although not all participants were linked to these groups and the majority were peaceful, the protest turned violent and the Public Force intervened, arresting 44 people (36 Costa Ricans and the rest Nicaraguans). Authorities confiscated sharp weapons, Molotov cocktails and other items from the neo-Nazis, who also carried swastika flags. A march against xenophobia and in solidarity with the Nicaraguan refugees was organized a week later, with greater attendance. A second anti-migration demonstration, with the explicit exclusion of neo-Nazis and hooligans, was carried out in September with similar attendance. In 2019, far-right, anti-immigration Facebook pages such as Deputy 58, Costa Rican Resistance and Salvation Costa Rica called an anti-government demonstration on 1 May, with small attendance.
Peru
Peru has been home to a handful of neo-Nazi groups, most notably the National Socialist Movement "Peru Awake", the National Socialist Tercios of New Castile, and the Peruvian National Socialist Union.
United States
There are several neo-Nazi groups in the United States. The National Socialist Movement (NSM), with about 400 members in 32 states, is currently the largest neo-Nazi organization in the US. After World War II, new organizations formed with varying degrees of support for Nazi principles. The National States' Rights Party, founded in 1958 by Edward Reed Fields and J. B. Stoner, countered racial integration in the Southern United States with Nazi-inspired publications and iconography. The American Nazi Party, founded by George Lincoln Rockwell in 1959, achieved high-profile coverage in the press through its public demonstrations.
The ideology of James H. Madole, leader of the National Renaissance Party, was influenced by Blavatskian Theosophy. Helena Blavatsky developed a racial theory of evolution, holding that the white race was the "fifth rootrace" called the Aryan Race. According to Blavatsky, Aryans had been preceded by Atlanteans who had perished in the flood that sunk the continent Atlantis. The three races that preceded the Atlanteans, in Blavatsky's view, were proto-humans; these were the Lemurians, Hyperboreans and the first Astral rootrace. It was on this foundation that Madole based his claims that the Aryan Race has been worshiped as "White Gods" since time immemorial and proposed a governance structure based on the Hindu Laws of Manu and its hierarchical caste system.
The First Amendment to the United States Constitution guarantees freedom of speech, which the courts have interpreted very broadly to include hate speech, severely limiting the government's authority to suppress it. This allows political organizations great latitude in expressing Nazi, racist, and antisemitic views. A landmark First Amendment case was National Socialist Party of America v. Village of Skokie, in which neo-Nazis threatened to march in a predominantly Jewish suburb of Chicago. The march never took place in Skokie, but the court ruling allowed the neo-Nazis to stage a series of demonstrations in Chicago.
The Institute for Historical Review, formed in 1978, is a Holocaust denial body associated with neo-Nazism.
Organizations which report upon neo-Nazi activities in the U.S., which may involve attacking and harassing minorities, include the American organizations Anti-Defamation League and the Southern Poverty Law Center.
In 2020, the FBI reclassified neo-Nazis to the same threat level as ISIS. Chris Wray, the Director of the Federal Bureau of Investigation, stated "Not only is the terror threat diverse, it's unrelenting."
In 2022, the rapper Kanye West stated that he identifies as a Nazi, denying the Holocaust and praising the policies of Adolf Hitler.
Uruguay
In 1998, a group of people belonging to the "Joseph Goebbels Movement" tried to burn down a synagogue, which also served as a Hebrew school, in the Pocitos neighborhood of Montevideo, Uruguay; an antisemitic pamphlet signed by the group was found in the building after quick action by firefighters saved it. Another racist and antisemitic neo-Nazi group, founded in 1996, said when interviewed by the newspaper La República de Montevideo that it had no involvement in the attack on the synagogue, but revealed that it maintained contacts with another Uruguayan group ("White Power"), as well as with neo-Nazi groups from Argentina and several European countries. Through the Internet they have received the solidarity of the pro-fascist group Patria, based in Spain. They also said that in the city of Canelones, fifty kilometers from Montevideo, there is a clandestine "Aryan church" which uses rituals taken from the Ku Klux Klan. The group declared that it did not tolerate interracial or gay couples; one of its militants said in the interview that "... if we see a black man with a white woman, we break them up ...". Other neo-Nazi incidents in Uruguay in 1998 included the bombing of a Jewish-owned small business in February, which injured two people, and the appearance of posters celebrating Hitler's birthday in April.
Africa
South Africa
Several groups in South Africa, such as the Afrikaner Weerstandsbeweging and the Blanke Bevrydingsbeweging, have often been described as neo-Nazi.
Eugène Terre'Blanche was a prominent South African neo-Nazi leader who was murdered in 2010.
Oceania
There were a number of now-defunct Australian neo-Nazi groups, such as the Australian National Socialist Party (ANSP), formed in 1962, which in 1968 merged into the National Socialist Party of Australia (active into the 1970s), originally an ANSP splinter group; and Jack van Tongeren's Australian Nationalist Movement.
The National Socialist Network (NSN) is an Australian neo-Nazi political organisation formed from two far-right organisations, the Lads Society and the Antipodean Resistance, in 2020.
White supremacist organisations active in Australia as of 2016 included local chapters of the Aryan Nations. Blair Cottrell, former leader of the United Patriots Front, has tried to distance himself from neo-Nazism, but he has nevertheless been accused of expressing "pro-Nazi views". Australian Security Intelligence Organisation director Mike Burgess said in February 2020 that neo-Nazis pose a "real threat" to Australia's security. Burgess maintained that there is a growing threat from the extreme right, and that its supporters "regularly meet to salute Nazi flags, inspect weapons, train in combat and share their hateful ideology". In June 2022, the Australian state of Victoria banned display of the swastika symbol; under the new law, individuals who intentionally exhibit the symbol may face up to a year in jail or a A$22,000 (£12,300; $15,000) fine. Victoria already had laws against hate speech, but they were criticized as weak, and calls to reform them grew stronger in 2020 when a couple flew a swastika flag over their home, causing outrage in the community.
In New Zealand, historical neo-Nazi organisations include Unit 88 and the National Socialist Party of New Zealand. White nationalist organisations such as the New Zealand National Front and Action Zealandia have faced accusations of neo-Nazism.
See also
The Believer – 2001 film by Henry Bean
The Daily Stormer – US neo-Nazi commentary and message board
White separatism – apartheid-type ideology
List of neo-Nazi bands
List of neo-Nazi organizations
List of white nationalist organizations
References
Bibliography
Primary sources
Imperium by Francis Parker Yockey (using the pen name Ulick Varange, 1947)
The Lightning and the Sun by Savitri Devi (1958; written 1948–56)
White Power by George Lincoln Rockwell (1967; John McLaughlin, 1996)
This Time The World by George Lincoln Rockwell (1961; Liberty Bell Publications, 2004)
National Socialism: Vanguard of the Future, Selected Writings of Colin Jordan
Merrie England – 2000 by Colin Jordan
The Turner Diaries by William Pierce (under the pseudonym Andrew Macdonald), novel (1978)
Siege: The Collected Writings of James Mason, edited and introduced by Michael M. Jenkins (Storm Books, 1992) or introduced by Ryan Schuster (Black Sun Publications)
Hunter by William Pierce (under the pseudonym Andrew Macdonald), novel (National Vanguard Books, 1984)
Faith of the Future by Matt Koehl (New Order; rev. edition, 1995)
Serpent's Walk by Randolph D. Calverhall (pseudonym), novel (National Vanguard Books, 1991)
The Nexus, periodical edited by Kerry Bolton
Deceived, Damned & Defiant – The Revolutionary Writings of David Lane by David Lane, foreword by Ron McVan, preface by Katja Lane (Fourteen Word Press, 1999)
Resistance Magazine, published by National Vanguard Books
Academic surveys
The Beast Reawakens by Martin A. Lee (New York: Little, Brown and Company, 1997)
Fascism (Oxford Readers) by Roger Griffin (1995)
Beyond Eagle and Swastika: German nationalism since 1945 by Kurt P. Tauber (Wesleyan University Press, 1st edition, 1967)
Biographical Dictionary of the Extreme Right Since 1890 edited by Philip Rees (1991)
Hitler's Priestess: Savitri Devi, the Hindu-Aryan Myth, and Neo-Nazism by Nicholas Goodrick-Clarke (1998)
Dreamer of the Day: Francis Parker Yockey and the Postwar Fascist International by Kevin Coogan (Autonomedia, Brooklyn, NY, 1998)
Hate: George Lincoln Rockwell and the American Nazi Party by William H. Schmaltz (Potomac Books, 2000)
American Fuehrer: George Lincoln Rockwell and the American Nazi Party by Frederick J. Simonelli (University of Illinois Press, 1999)
Fascism in Britain: A History, 1918–1985 by Richard C. Thurlow (Olympic Marketing Corp, 1987)
Fascism Today: A World Survey by Angelo Del Boca and Mario Giovana (Pantheon Books, 1st American edition, 1969)
Germany's New Nazis by the Anglo-Jewish Association (Jewish Chronicle Publications, 1951)
The New Germany and the Old Nazis by Tete Harens Tetens (Random House, 1961)
Swastika and the Eagle: Neo-Naziism in America Today by Clifford L. Linedecker (A & W Pub, 1982)
The Silent Brotherhood: Inside America's Racist Underground by Kevin Flynn and Gary Gerhardt (Signet Book, reprint edition, 1995)
"White Power, White Pride!": The White Separatist Movement in the United States by Betty A. Dobratz with Stephanie L. Shanks-Meile (hardcover, Twayne Publishers, 1997); a.k.a. The White Separatist Movement in the United States: White Power White Pride (paperback, Johns Hopkins University Press, 2000)
Encyclopedia of White Power: A Sourcebook on the Radical Racist Right by Jeffrey Kaplan (Rowman & Littlefield, 2000)
Blood in the Face: The Ku Klux Klan, Aryan Nations, Nazi Skinheads, and the Rise of a New White Culture by James Ridgeway (Thunder's Mouth Press, 2nd edition, 1995)
A Hundred Little Hitlers: The Death of a Black Man, the Trial of a White Racist, and the Rise of the Neo-Nazi Movement in America by Elinor Langer (Metropolitan Books, 2003)
The Racist Mind: Portraits of American Neo-Nazis and Klansmen by Raphael S. Ezekiel (Penguin, reprint edition, 1996)
Black Sun: Aryan Cults, Esoteric Nazism and the Politics of Identity by Nicholas Goodrick-Clarke (2001)
Free to Hate: The Rise of the Right in Post-Communist Eastern Europe by Paul Hockenos (Routledge, reprint edition, 1994)
The Dark Side of Europe: The Extreme Right Today by Geoff Harris (Edinburgh University Press, new edition, 1994)
The Far Right in Western and Eastern Europe by Luciano Cheles, Ronnie Ferguson, and Michalina Vaughan (Longman Publishing Group, 2nd edition, 1995)
The Radical Right in Western Europe: A Comparative Analysis by Herbert Kitschelt (University of Michigan Press, reprint edition, 1997)
Shadows Over Europe: The Development and Impact of the Extreme Right in Western Europe edited by Martin Schain, Aristide Zolberg, and Patrick Hossay (Palgrave Macmillan, 1st edition, 2002)
The Fame of a Dead Man's Deeds: An Up-Close Portrait of White Nationalist William Pierce by Robert S. Griffin (Authorhouse, 2001)
Nation and Race: The Developing Euro-American Racist Subculture by Jeffrey Kaplan and Tore Bjorgo (Northeastern University Press, 1998)
Gods of the Blood: The Pagan Revival and White Separatism by Mattias Gardell (Duke University Press, 2003)
The Nazi conception of law (Oxford pamphlets on world affairs) by J. Walter Jones (Clarendon, 1939)
External links
Neo-Nazism at Jewish Virtual Library
Occultism in Nazism
Axial Age

Axial Age (also Axis Age, from the German Achsenzeit) is a term coined by the German philosopher Karl Jaspers. It refers to broad changes in religious and philosophical thought that occurred in a variety of locations from about the 8th to the 3rd century BCE.
According to Jaspers, during this period, universalizing modes of thought appeared in Persia, India, China, the Levant, and the Greco-Roman world, in a striking parallel development, without any obvious admixture between these disparate cultures. Jaspers identified key thinkers from this age who had a profound influence on future philosophies and religions, and identified characteristics common to each area from which those thinkers emerged.
The historical validity of the Axial Age is disputed. Some criticisms of Jaspers include the lack of a demonstrable common denominator between the intellectual developments that are supposed to have emerged in unison across ancient Greece, the Levant, India, and China; lack of any radical discontinuity with "preaxial" and "postaxial" periods; and exclusion of pivotal figures that do not fit the definition (for example, Jesus, Muhammad, and Akhenaten).
Despite these criticisms, the Axial Age continues to be an influential idea, with many scholars accepting that profound changes in religious and philosophical discourse did indeed take place but disagreeing as to the underlying reasons. To quote Robert Bellah and Hans Joas, "The notion that in significant parts of Eurasia the middle centuries of the first millennium BCE mark a significant transition in human cultural history, and that this period can be referred to as the Axial Age, has become widely, but not universally, accepted."
Origin of the idea of Axial Age
Jaspers introduced the concept of an Axial Age in his book Vom Ursprung und Ziel der Geschichte (The Origin and Goal of History), published in 1949. The simultaneous appearance of thinkers and philosophers in different areas of the world had been remarked upon by numerous authors since the 18th century, notably by the French Indologist Abraham Hyacinthe Anquetil-Duperron. Jaspers explicitly cited some of these authors, including Victor von Strauß (1859) and Ernst von Lasaulx (1870). He was unaware of the first fully nuanced theory, put forward in 1873 by John Stuart Stuart-Glennie and forgotten by Jaspers' time, which Stuart-Glennie termed "the moral revolution". Stuart-Glennie and Jaspers both claimed that the Axial Age should be viewed as an objective empirical fact of history, independent of religious considerations. Jaspers argued that during the Axial Age, "the spiritual foundations of humanity were laid simultaneously and independently in China, India, Persia, Judea, and Greece. And these are the foundations upon which humanity still subsists today."
Jaspers identified a number of key thinkers as having had a profound influence on future philosophies and religions, and identified characteristics common to each area from which those thinkers emerged. Jaspers held up this age as unique and one to which the rest of the history of human thought might be compared.
Characteristics
Jaspers presented his first outline of the Axial Age through a series of examples.
Jaspers described the Axial Age as "an interregnum between two ages of great empire, a pause for liberty, a deep breath bringing the most lucid consciousness". It has also been suggested that the Axial Age was a historically liminal period, when "old certainties had lost their validity and new ones were still not ready".
Jaspers had a particular interest in the similarities in circumstance and thought of its figures. Similarities included an engagement in the quest for human meaning and the rise of a new élite class of religious leaders and thinkers in China, India and the Mediterranean.
Individual thinkers each laid spiritual foundations within a framework of a changing social environment. Jaspers argues that the characteristics appeared under similar political circumstances: China, India, the Middle East and the Occident each comprised multiple small states engaged in internal and external struggles. The three regions all gave birth to, and then institutionalized, a tradition of travelling scholars, who roamed from city to city to exchange ideas. After the Spring and Autumn period (8th to 5th centuries BCE) and the Warring States period (5th to 3rd centuries BCE), Taoism and Confucianism emerged in China. In other regions, the scholars largely developed extant religious traditions: in India, Hinduism, Buddhism, and Jainism; in Persia, Zoroastrianism; in the Levant, Judaism; and in Greece, Sophism and other classical philosophies.
Many of the cultures of the Axial Age have been considered second-generation societies because they developed on the basis of societies which preceded them.
Thinkers and movements
In China, the Hundred Schools of Thought (c. 6th century BCE) were in contention and Confucianism and Taoism arose during this era, and in this area it remains a profound influence on social and religious life.
Zoroastrianism, another of Jaspers' examples, is one of the first monotheistic religions. William W. Malandra and R. C. Zaehner suggest that Zoroaster may indeed have been an early contemporary of Cyrus the Great, living around 550 BCE. Mary Boyce and other leading scholars who once supported much earlier dates for Zarathustra/Zoroaster have since changed their position, so that there is an emerging consensus regarding him as a contemporary or near-contemporary of Cyrus the Great.
Jainism propagated the religion of the sramanas (previous Tirthankaras) and influenced Indian philosophy by propounding the principles of ahimsa (non-violence), karma, samsara, and asceticism. Mahavira, the 24th Tirthankara, known as a fordmaker of Jainism and a contemporary of the Buddha, lived during this age, in the 5th century BCE.
Buddhism, also of the sramana tradition of India, was another of the world's most influential philosophies, founded by Siddhartha Gautama, or the Buddha, who lived c. 5th century BCE; its spread was aided by Ashoka, who lived late in the period.
Rabbinic Judaism accounts for its hard shift away from idolatry and polytheism (which were more common among the Biblical Israelites) by mythologizing the eradication of the evil inclination toward idolatry, said to have occurred in the early Second Temple period. It has been argued that this development in monotheism relates to the axial shifts described by Jaspers.
Jaspers' axial shifts included the rise of Platonism (c. 4th century BCE) and Neoplatonism (3rd century AD), which would later become a major influence on the Western world through both Christianity and secular thought throughout the Middle Ages and into the Renaissance.
Reception
In addition to Jaspers, the philosopher Eric Voegelin referred to this age as The Great Leap of Being, constituting a new spiritual awakening and a shift of perception from societal to individual values. Thinkers and teachers like the Buddha, Pythagoras, Heraclitus, Parmenides, and Anaxagoras contributed to such awakenings which Plato would later call anamnesis, or a remembering of things forgotten.
David Christian notes that the first "universal religions" appeared in the age of the first universal empires and of the first all-encompassing trading networks. This conclusion overlooks the fact that Venus figurines, for example, are found across much of Eurasia and date back many millennia before the first empires. What some regard as the emergence of religion is more likely the emergence of institutionalized and codified religion.
Anthropologist David Graeber has pointed out that "the core period of Jasper's Axial age ... corresponds almost exactly to the period in which coinage was invented. What's more, the three parts of the world where coins were first invented were also the very parts of the world where those sages lived; in fact, they became the epicenters of Axial Age religious and philosophical creativity." Drawing on the work of classicist Richard Seaford and literary theorist Marc Shell on the relation between coinage and early Greek thought, Graeber argues that an understanding of the rise of markets is necessary to grasp the context in which the religious and philosophical insights of the Axial Age arose. The ultimate effect of the introduction of coinage was, he argues, an "ideal division of spheres of human activity that endures to this day: on the one hand the market, on the other, religion".
German sociologist Max Weber played an important role in Jaspers' thinking. Shmuel Eisenstadt argues in the introduction to The Origins and Diversity of Axial Age Civilizations that Weber's work in his The Religion of China: Confucianism and Taoism, The Religion of India: The Sociology of Hinduism and Buddhism and Ancient Judaism provided a background for the importance of the period, and notes parallels with Eric Voegelin's Order and History. In the same book, Shmuel Eisenstadt analyses economic circumstances relating to the coming of the Axial Age in Greece.
Wider acknowledgement of Jaspers' work came after it was presented at a conference and published in Daedalus in 1975, and Jaspers' suggestion that the period was uniquely transformative generated important discussion among other scholars, such as Johann Arnason. Religious historian Karen Armstrong explored the period in her book The Great Transformation, and the theory has been the focus of numerous academic conferences. In literature, Gore Vidal in his novel Creation covers much of this Axial Age through the fictional perspective of a Persian adventurer.
Usage of the term has expanded beyond Jaspers' original formulation. Yves Lambert argues that the Enlightenment was a Second Axial Age, including thinkers such as Isaac Newton and Albert Einstein, wherein the relationships between religion, secularism, and traditional thought are changing. A collective History of the Axial Age was published in 2019; its authors generally contested the existence of an "identifiable Axial Age confined to a few Eurasian hotspots in the last millennium BCE" but tended to accept "axiality" as a cluster of traits emerging time and again whenever societies reached a certain threshold of scale and level of complexity.
Besides time, usage of the term has expanded beyond the original field. A philosopher, Jaspers focused on the philosophical developments of the Age. The historians Hermann Kulke and Max Ostrovsky have argued that the Age is even more axial in historical and geopolitical senses, and that Jaspers noted only the tip of the iceberg. Pre-Axial cultures, Jaspers wrote, were dominated by the river-valley civilizations, while by the end of the Axial Age universal empires had risen which dominated history for centuries afterwards. In the account of Kulke and Ostrovsky, universal empires did not come only at the end of the Axial Age: the first of them, Persia, arose at the peak of the Axial Age and conquered Mesopotamia and Egypt. Both ceased to be civilizations in themselves and became provinces in a completely new form of imperial system which stretched from India to Greece. Thus the Bronze Age civilizations were succeeded by Axial civilizations with their universal empires. Before forming another universal empire, the Chinese civilization expanded at the peak of the Axial Age, turning the original core into the Country in the Middle (Chung-kuo). The new geopolitical setting of China changed less in the following two millennia than it did in the Axial Age. The Axial Age formed two major geopolitical systems: a wider China and a much vaster Indo-Mediterranean system. The two were separated by Tibet, which limited their political and military contacts, but both systems were linked by the Silk Road, creating a trans-Eurasian trade belt stretching from the Pacific to the Atlantic.
Several scholars have proposed an ecological prime trigger for the rise of this Axial belt. Stephen Sanderson researched religious evolution in the Axial Age, arguing that religions and religious change in general are essentially biosocial adaptations to changing environments. Ostrovsky points to increased fertility in the rainy zones of the Eurasian temperate belt. He regards the Axial belt of civilizations as the embryo of the present Global North: it shifted northward during the Middle Ages due to climatic change and, after the Seafaring Revolution, extended to temperate North America. "But from historical point of view, it is the same imperial belt which first appeared in the Axial Age."
The validity of the concept has been called into question. In 2006, Diarmaid MacCulloch called the Jaspers thesis "a baggy monster, which tries to bundle up all sorts of diversities over four very different civilisations, only two of which had much contact with each other during the six centuries that (after adjustments) he eventually singled out, between 800 and 200 BCE". Jaspers himself had already noted this on page 2 of The Origin and Goal of History, where he says that one of the puzzles of the Axial Age is precisely that of a similar phenomenon occurring simultaneously in three civilizations which had no contact with each other. In 2013, another comprehensive critique appeared in Iain Provan's book Convenient Myths: The Axial Age, Dark Green Religion, and the World That Never Was.
See also
Ancient philosophy
References
Citations
General and cited references
Eisenstadt, S. N., ed. (1986). The Origins and Diversity of Axial Age Civilizations. SUNY Press.
Joas, Hans, and Robert N. Bellah, eds. (2012). The Axial Age and Its Consequences. Belknap Press.
Halton, Eugene (2014). From the Axial Age to the Moral Revolution: John Stuart-Glennie, Karl Jaspers, and a New Understanding of the Idea. New York: Palgrave Macmillan.
Further reading
Daedalus (Spring 1975). 104:2: Wisdom, Revelation and Doubt: Perspectives on the First Millennium BCE.
Yves Lambert (1999). "Religion in Modernity as a New Axial Age: Secularization or New Religious Forms?". Sociology of Religion, Vol. 60, No. 3, pp. 303–333. Oxford University Press. A general model of analysis of the relations between religion and modernity, where modernity is conceived as a new axial age.
Rodney Stark (2007). Discovering God: A New Look at the Origins of the Great Religions. New York: HarperOne.
Gore Vidal (1981). Creation. New York: Random House. A novel narrated by the fictional grandson of Zoroaster in 445 BCE, describing encounters with the central figures of the Axial Age during his travels.
Mark D. Whitaker (2009). Ecological Revolution: The Political Origins of Environmental Degradation and the Environmental Origins of Axial Religions; China, Japan, Europe. Lambert Academic Publishing. Whitaker's research received a grant award from the U.S. National Science Foundation in association with the American Sociological Association.
External links
The Axial Age and Its Consequences, a 2008 conference in Erfurt, Germany.
World map | A world map is a map of most or all of the surface of Earth. World maps, because of their scale, must deal with the problem of projection. Maps rendered in two dimensions by necessity distort the display of the three-dimensional surface of the Earth. While this is true of any map, these distortions reach extremes in a world map. Many techniques have been developed to present world maps that address diverse technical and aesthetic goals.
Charting a world map requires global knowledge of the Earth, its oceans, and its continents. From prehistory through the Middle Ages, creating an accurate world map would have been impossible because less than half of Earth's coastlines and only a small fraction of its continental interiors were known to any culture. With exploration that began during the European Renaissance, knowledge of the Earth's surface accumulated rapidly, such that most of the world's coastlines had been mapped, at least roughly, by the mid-1700s and the continental interiors by the twentieth century.
Maps of the world generally focus either on political features or on physical features. Political maps emphasize territorial boundaries and human settlement. Physical maps show geographical features such as mountains, soil type, or land use. Geological maps show not only the surface, but characteristics of the underlying rock, fault lines, and subsurface structures. Choropleth maps use color hue and intensity to contrast differences between regions, such as demographic or economic statistics.
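As a concrete illustration of the choropleth idea, a regional statistic can be binned into a small number of color classes of increasing intensity. The sketch below assumes a simple equal-interval classification; the function name, bin count, and data are illustrative, not drawn from any particular atlas:

```python
def choropleth_class(value: float, vmin: float, vmax: float, n_classes: int = 5) -> int:
    """Return the 0-based color-class index for a value under equal-interval binning."""
    if vmax <= vmin:
        raise ValueError("vmax must exceed vmin")
    width = (vmax - vmin) / n_classes
    index = int((value - vmin) / width)
    return min(max(index, 0), n_classes - 1)  # clamp edge values into range

# Example: population densities (people per square km) mapped to 5 intensity classes
densities = {"A": 10.0, "B": 75.0, "C": 240.0, "D": 900.0}
classes = {region: choropleth_class(v, 0.0, 1000.0) for region, v in densities.items()}
print(classes)  # {'A': 0, 'B': 0, 'C': 1, 'D': 4}
```

Real choropleth maps often use other classification schemes (quantiles, natural breaks) when the data are skewed, since equal intervals can leave most regions in one class.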
Map projections
All world maps are based on one of several map projections, or methods of representing a globe on a plane. All projections distort geographic features, distances, and directions in some way. The various map projections that have been developed provide different ways of balancing accuracy and the unavoidable distortion inherent in making world maps.
Perhaps the best-known projection is the Mercator projection, originally designed as a nautical chart.
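On a sphere, the Mercator projection has simple forward equations that map longitude and latitude to plane coordinates so that lines of constant compass bearing plot as straight lines. The sketch below is a minimal implementation assuming a spherical Earth model; the function name and radius value are illustrative:

```python
import math

def mercator(lon_deg: float, lat_deg: float,
             central_meridian_deg: float = 0.0,
             radius_m: float = 6_371_000.0) -> tuple[float, float]:
    """Forward spherical Mercator projection.

    x grows linearly with longitude, while y stretches toward the
    poles -- the source of the familiar area exaggeration at high
    latitudes (the poles themselves map to infinity).
    """
    lon = math.radians(lon_deg - central_meridian_deg)
    lat = math.radians(lat_deg)
    x = radius_m * lon
    y = radius_m * math.log(math.tan(math.pi / 4 + lat / 2))
    return (x, y)

# Example: London (approx. 0.13 degrees W, 51.51 degrees N)
print(mercator(-0.13, 51.51))
```

The unbounded y near the poles is exactly the distortion problem described above, which is why other projections trade Mercator's bearing-preserving property for better area or shape fidelity.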
Thematic maps
A thematic map shows geographical information about one or a few focused subjects. These maps "can portray physical, social, political, cultural, economic, sociological, agricultural, or any other aspects of a city, state, region, nation, or continent".
Historical maps
Early world maps cover depictions of the world from the Iron Age to the Age of Discovery and the emergence of modern geography during the early modern period. Old maps provide information about places that were known in past times, as well as the philosophical and cultural basis of the map, which was often very different from that of modern cartography. Maps are one means by which scientists distribute their ideas and pass them on to future generations.
See also
Wikipedia's clickable world map
Global Map
Globe
International Map of the World
List of map projections
List of world map changes
Mappa mundi
Maps of the world
Rhumbline network
Theorema Egregium
Time zone
References
Further reading
Edson, Evelyn (2011). The World Map, 1300–1492: The Persistence of Tradition and Transformation. JHU Press.
Harvey, P. D. A. (2006). The Hereford world map: medieval world maps and their context. British Library. | 0.764177 | 0.999699 | 0.763947 |
Constructivism (international relations) | In international relations (IR), constructivism is a social theory that asserts that significant aspects of international relations are shaped by ideational factors. The most important ideational factors are those that are collectively held; these collectively held beliefs construct the interests and identities of actors.
In contrast to other prominent IR approaches and theories (such as realism and rational choice), constructivists see identities and interests of actors as socially constructed and changeable; identities are not static and cannot be exogenously assumed. Similar to rational choice, constructivism does not make broad and specific predictions about international relations; it is an approach to studying international politics, not a substantive theory of international politics. Constructivist analysis can only provide substantive explanations or predictions once the relevant actors and their interests have been identified, as well as the content of social structures.
The main theories competing with constructivism are variants of realism, liberalism, and rational choice that emphasize materialism (the notion that the physical world determines political behavior on its own), and individualism (the notion that individual units can be studied apart from the broader systems that they are embedded in). Whereas other prominent approaches conceptualize power in material terms (e.g. military and economic capabilities), constructivist analyses also see power as the ability to structure and constitute the nature of social relations among actors.
Development
Nicholas Onuf has been credited with coining the term constructivism to describe theories that stress the socially constructed character of international relations. Since the late 1980s and early 1990s, constructivism has become one of the major schools of thought within international relations.
The earliest constructivist works focused on establishing that norms mattered in international politics. Peter J. Katzenstein's edited volume The Culture of National Security compiled works by numerous prominent and emerging constructivists, showing that constructivist insights were important in the field of security studies, an area of International Relations in which realists had been dominant.
After establishing that norms mattered in international politics, later veins of constructivism focused on explaining the circumstances under which some norms mattered and others did not. Swathes of constructivist research have focused on norm entrepreneurs; international organizations and law; epistemic communities; speech, argument, and persuasion; and structural configuration as mechanisms and processes for social construction.
Alexander Wendt is the most prominent advocate of social constructivism in the field of international relations. Wendt's 1992 article "Anarchy is What States Make of It: the Social Construction of Power Politics" laid the theoretical groundwork for challenging what he considered to be a flaw shared by both neorealists and neoliberal institutionalists, namely, a commitment to a (crude) form of materialism. By attempting to show that even such a core realist concept as "power politics" is socially constructed—that is, not given by nature and hence, capable of being transformed by human practice—Wendt opened the way for a generation of international relations scholars to pursue work on a wide range of issues from a constructivist perspective. Wendt further developed these ideas in his central work, Social Theory of International Politics (1999). Following up on Wendt, Martha Finnemore offered the first "sustained, systematic empirical argument in support of the constructivist claim that international normative structures matter in world politics" in her 1996 book National Interests in International Society.
There are several strands of constructivism. On the one hand, there are "conventional" constructivist scholars such as Kathryn Sikkink, Peter Katzenstein, Elizabeth Kier, Martha Finnemore, and Alexander Wendt, who use widely accepted methodologies and epistemologies. Their work has been widely accepted within the mainstream IR community and has generated vibrant scholarly discussions among realists, liberals, and constructivists. These scholars hold that research oriented around both causal and constitutive explanations is appropriate; Wendt refers to this form of constructivism as "thin" constructivism. On the other hand, there are "critical" or radical constructivists who take discourse and linguistics more seriously, and who adopt non-positivist methodologies and epistemologies. A third strand, known as critical constructivism, takes conventional constructivists to task for systematically downplaying or omitting class factors. Despite their differences, all strands of constructivism agree that neorealism and neoliberalism pay insufficient attention to social construction in world politics.
Theory
Constructivism primarily seeks to demonstrate how core aspects of international relations are, contrary to the assumptions of neorealism and neoliberalism, socially constructed. This means that they are given their form by ongoing processes of social practice and interaction. Alexander Wendt calls two increasingly accepted basic tenets of constructivism "that the structures of human association are determined primarily by shared ideas rather than material forces, and that the identities and interests of purposive actors are constructed by these shared ideas rather than given by nature." This does not mean that constructivists believe international politics is "ideas all the way down"; rather, it is characterized by both material and ideational factors.
Central to constructivism are the notions that ideas matter, and that agents are socially constructed (rather than given).
Constructivist research is focused both on causal explanations for phenomena and on analyses of how things are constituted. In the study of national security, the emphasis is on the conditioning that culture and identity exert on security policies and related behaviors. Identities are necessary in order to ensure at least some minimal level of predictability and order. The object of constructivist discourse can be conceived as the arrival in the field of international relations of the recent debates on epistemology, the sociology of knowledge, the agent/structure relationship, and the ontological status of social facts.
The notion that international relations are not only affected by power politics, but also by ideas, is shared by writers who describe themselves as constructivist theorists. According to this view, the fundamental structures of international politics are social rather than strictly material. This leads social constructivists to argue that changes in the nature of social interaction between states can bring a fundamental shift towards greater international security.
Challenging realism
During constructivism's formative period, neorealism was the dominant discourse of international relations. Much of constructivism's initial theoretical work challenged basic neorealist assumptions. Neorealists are fundamentally causal structuralists: they hold that most of what matters in international politics is explained by the structure of the international system, a position first advanced in Kenneth Waltz's Man, the State, and War and fully elucidated in his core text of neorealism, Theory of International Politics. Specifically, international politics is primarily determined by the fact that the international system is anarchic – it lacks any overarching authority and is instead composed of units (states) which are formally equal, each sovereign over its own territory. Such anarchy, neorealists argue, forces states to act in certain ways; specifically, they can rely only on themselves for security (they have to self-help). The way in which anarchy forces them to act in such ways, defending their own self-interest in terms of power, neorealists argue, explains most of international politics. Because of this, neorealists tend to disregard explanations of international politics at the "unit" or "state" level; Kenneth Waltz attacked such a focus as reductionist.
Constructivism, particularly in the formative work of Wendt, challenges this assumption by showing that the causal powers attributed to "structure" by neorealists are in fact not "given", but rest on the way in which structure is constructed by social practice. Removed from presumptions about the nature of the identities and interests of the actors in the system, and the meaning that social institutions (including anarchy) have for such actors, Wendt argues neorealism's "structure" reveals very little: "it does not predict whether two states will be friends or foes, will recognize each other's sovereignty, will have dynastic ties, will be revisionist or status quo powers, and so on". Because such features of behavior are not explained by anarchy, and require instead the incorporation of evidence about the interests and identities held by key actors, neorealism's focus on the material structure of the system (anarchy) is misplaced. Wendt goes further than this – arguing that because the way in which anarchy constrains states depends on the way in which states conceive of anarchy, and conceive of their own identities and interests, anarchy is not necessarily even a self-help system. It only forces states to self-help if they conform to neorealist assumptions about states as seeing security as a competitive, relative concept, where the gain of security for any one state means the loss of security for another. If states instead hold alternative conceptions of security, either "co-operative", where states can maximise their security without negatively affecting the security of another, or "collective" where states identify the security of other states as being valuable to themselves, anarchy will not lead to self-help at all. Neorealist conclusions, as such, depend entirely on unspoken and unquestioned assumptions about the way in which the meaning of social institutions are constructed by actors. Crucially, because neorealists fail to recognize this dependence, they falsely assume that such meanings are unchangeable, and exclude the study of the processes of social construction which actually do the key explanatory work behind neorealist observations.
As a criticism of neorealism and neoliberalism (which were the dominant strands of IR theory during the 1980s), constructivism tended to be lumped in with all approaches that criticized the so-called "neo-neo" debate. Constructivism has therefore often been conflated with critical theory. However, while constructivism may use aspects of critical theory and vice versa, the mainstream variants of constructivism are positivist.
In a response to constructivism, John Mearsheimer has argued that ideas and norms only matter on the margins, and that appeals by leaders to norms and morals often reflect self-interest.
Identities and interests
As constructivists reject neorealism's conclusions about the determining effect of anarchy on the behavior of international actors, and move away from neorealism's underlying materialism, they create the necessary room for the identities and interests of international actors to take a central place in theorising international relations. Now that actors are not simply governed by the imperatives of a self-help system, their identities and interests become important in analysing how they behave. Like the nature of the international system, constructivists see such identities and interests as not objectively grounded in material forces (such as dictates of the human nature that underpins classical realism) but the result of ideas and the social construction of such ideas. In other words, the meanings of ideas, objects, and actors are all given by social interaction. People give objects their meanings and can attach different meanings to different things.
Martha Finnemore has been influential in examining the way in which international organizations are involved in these processes of the social construction of actors' perceptions of their interests. In National Interests in International Society, Finnemore attempts to "develop a systemic approach to understanding state interests and state behavior by investigating an international structure, not of power, but of meaning and social value". "Interests", she explains, "are not just 'out there' waiting to be discovered; they are constructed through social interaction". Finnemore provides three case studies of such construction – the creation of science bureaucracies in states due to the influence of UNESCO, the role of the Red Cross in the Geneva Conventions, and the World Bank's influence on attitudes to poverty.
Studies of such processes are examples of the constructivist attitude towards state interests and identities. Such interests and identities are central determinants of state behaviour, and so studying their nature and their formation is integral to the constructivist method of explaining the international system. But it is important to note that despite this refocus onto identities and interests – properties of states – constructivists are not necessarily wedded to focusing their analysis at the unit level of international politics: the state. Constructivists such as Finnemore and Wendt both emphasize that while ideas and processes tend to explain the social construction of identities and interests, such ideas and processes form a structure of their own which impacts upon international actors. Their central difference from neorealists is to see the structure of international politics in primarily ideational, rather than material, terms.
Norms
Constructivist scholars have explored in-depth the role of norms in world politics. Abram Chayes and Antonia Handler Chayes have defined “norms” as “a broad class of prescriptive statements – rules, standards, principles, and so forth – both procedural and substantive” that are “prescriptions for action in situations of choice, carrying a sense of obligation, a sense that they ought to be followed”.
Norm-based constructivist approaches generally assume that actors tend to adhere to a “logic of appropriateness”. That means that actors follow “internalized prescriptions of what is socially defined as normal, true, right, or good, without, or in spite of calculation of consequences and expected utility”. This logic of appropriateness stands in contrast to the rational choice “logic of consequences”, where actors are assumed to choose the most efficient means to reach their goals on the basis of a cost-benefit analysis.
Constructivist norm scholarship has investigated a wide range of issue areas in world politics. For example, Peter Katzenstein and the contributors to his edited volume, The Culture of National Security, have argued that states act on security choices not only in the context of their physical capabilities but also on the basis of normative understandings. Martha Finnemore has suggested that international organizations like the World Bank or UNESCO help diffuse norms which, in turn, influence how states define their national interests. Finnemore and Kathryn Sikkink have explored how norms affect political change. In doing so, they have stressed the connections between norms and rationality, rather than their opposition to each other. They have also highlighted the importance of “norm entrepreneurs” in advocating and spreading certain norms.
Some scholars have investigated the role of individual norms in world politics. For instance, Audie Klotz has examined how the global norm against apartheid developed across different states (the United Kingdom, the United States, and Zimbabwe) and institutions (the Commonwealth, the Organization of African Unity, and the United Nations). The emergence and institutionalization of this norm, she argued, has contributed to the end of the apartheid regime in South Africa. Nina Tannenwald has made the case that the non-use of nuclear weapons since 1945 can be attributed to the strength of a nuclear weapons taboo, i.e., a norm against the use of nuclear weapons. She has argued that this norm has become so deeply embedded in American political and social culture that nuclear weapons have not been employed, even in cases when their use would have made strategic or tactical sense. Michael Barnett has taken an evolutionary approach to trace how the norm of political humanitarianism emerged.
Martha Finnemore and Kathryn Sikkink distinguish between three types of norms:
Regulative norms: they "order and constrain behavior"
Constitutive norms: they "create new actors, interests, or categories of action"
Evaluative and prescriptive norms: they have an "oughtness" quality to them
Finnemore, Sikkink, Jeffrey W. Legro and others have argued that the robustness (or effectiveness) of norms can be measured by factors such as:
specificity: norms that are clear and specific are more likely to be effective
longevity: norms with a history are more likely to be effective
universality: norms that make general claims (rather than localized and particularistic claims) are more likely to be effective
prominence: norms that are widely accepted among powerful actors are more likely to be effective
Jeffrey Checkel argues that there are two common types of explanations for the efficacy of norms:
Rationalism: actors comply with norms due to coercion, cost-benefit calculations, and material incentives
Constructivism: actors comply with norms due to social learning and socialization
In terms of specific norms, constructivist scholars have shown how the following norms emerged:
International law: A transnational network of lawyers succeeded in codifying new international legal principles and regulating power politics over the course of the 19th and 20th centuries.
Humanitarian intervention: Over time, conceptions of who was "human" changed, which led states to increasingly engage in humanitarian interventions in the 20th century.
Nuclear taboo: A norm against nuclear weapons developed since 1945.
Ban on landmines: Activism by transnational advocacy groups led to a norm prohibiting landmines.
Norms of sovereignty.
Norms against assassination.
Election monitoring.
Taboo against the weaponization of water.
Anti-whaling norm.
Anti-torture norm.
Research areas
Many constructivists analyse international relations by looking at goals, threats, fears, cultures, identities, and other elements of "social reality" as social facts. In an important edited volume, The Culture of National Security, constructivist scholars, including Elizabeth Kier, Jeffrey Legro, and Peter Katzenstein, challenged many realist assumptions about the dynamics of international politics, particularly in the context of military affairs. Thomas J. Biersteker and Cynthia Weber applied constructivist approaches to understand the evolution of state sovereignty as a central theme in international relations, and works by Rodney Bruce Hall and Daniel Philpott (among others) developed constructivist theories of major transformations in the dynamics of international politics. In international political economy, the application of constructivism has been less frequent. Notable examples of constructivist work in this area include Kathleen R. McNamara's study of European Monetary Union and Mark Blyth's analysis of the rise of Reaganomics in the United States.
By focusing on how language and rhetoric are used to construct the social reality of the international system, constructivists are often seen as more optimistic about progress in international relations than versions of realism loyal to a purely materialist ontology. However, a growing number of constructivists question the "liberal" character of constructivist thought and express greater sympathy for realist pessimism concerning the possibility of emancipation from power politics.
Constructivism is often presented as an alternative to the two leading theories of international relations, realism and liberalism, but some maintain that it is not necessarily inconsistent with one or both. Wendt shares some key assumptions with leading realist and neorealist scholars, such as the existence of anarchy and the centrality of states in the international system. However, Wendt renders anarchy in cultural rather than materialist terms; he also offers a sophisticated theoretical defense of the state-as-actor assumption in international relations theory. This is a contentious issue within segments of the IR community as some constructivists challenge Wendt on some of these assumptions (see, for example, exchanges in Review of International Studies, vol. 30, 2004). It has been argued that progress in IR theory will be achieved when Realism and Constructivism can be aligned or even synthesized. An early example of such synthesis was Jennifer Sterling-Folker's analysis of the United States' international monetary policy following the Bretton Woods system. Sterling-Folker argued that the U.S. shift towards unilateralism is partially accounted for by realism's emphasis on an anarchic system, but constructivism helps to account for important factors from the domestic or second level of analysis.
Recent developments
A significant group of scholars who study processes of social construction self-consciously eschew the label "constructivist". They argue that "mainstream" constructivism has abandoned many of the most important insights from the linguistic turn and social-constructionist theory in the pursuit of respectability as a "scientific" approach to international relations. Even some putatively "mainstream" constructivists, such as Jeffrey Checkel, have expressed concern that constructivists have gone too far in their efforts to build bridges with non-constructivist schools of thought.
A growing number of constructivists contend that current theories pay inadequate attention to the role of habitual and unreflective behavior in world politics, the centrality of relations and processes in constructing world politics, or both.
Advocates of the "practice turn" take inspiration from work in neuroscience, as well as that of social theorists such as Pierre Bourdieu, that stresses the significance of habit and practices in psychological and social life – essentially calling for greater attention and sensitivity towards the "everyday" and "taken for granted" activities of international politics. Some scholars have adopted the related sociological approach known as Actor-Network Theory (ANT), which extends the early focus of the practice turn on the work of Pierre Bourdieu towards that of Bruno Latour and others. Scholars have employed ANT in order to disrupt traditional world political binaries (civilised/barbarian, democratic/autocratic, etc.), consider the implications of a posthuman understanding of IR, explore the infrastructures of world politics, and consider the effects of technological agency.
Notable constructivists in international relations
Emanuel Adler
Michael Barnett
Thomas J. Biersteker
Mark Blyth
Jeffrey T. Checkel
Martha Finnemore
Ernst B. Haas
Peter M. Haas
Ian Hacking
Ted Hopf
Peter J. Katzenstein
Margaret Keck
Judith Kelley
Friedrich Kratochwil
Richard Ned Lebow
Daniel H. Nexon
Qin Yaqing
Nicholas Onuf
Erik Ringmar
Thomas Risse
John Ruggie
Chris Reus-Smit
Kathryn Sikkink
J. Ann Tickner
Ole Wæver
Alexander Wendt
Critique by emotional choice theorists
Proponents of emotional choice theory argue that constructivist approaches neglect the emotional underpinnings of social interactions, normative behavior, and decision-making in general. They point out that the constructivist paradigm is generally based on the assumption that decision-making is a conscious process based on thoughts and beliefs. It presumes that people decide on the basis of reflection and deliberation. However, cumulative research in neuroscience suggests that only a small part of the brain's activities operate at the level of conscious thinking. The vast majority of its activities consist of unconscious appraisals and emotions.
The significance of emotions in decision-making has generally been ignored by constructivist perspectives, according to these critics. Moreover, emotional choice theorists contend that the constructivist paradigm has difficulty incorporating emotions into its models, because it cannot account for the physiological dynamics of emotions. Psychologists and neurologists have shown that emotions are based on bodily processes over which individuals have only limited control. They are inextricably intertwined with people's brain functions and autonomic nervous systems, which are typically outside the scope of standard constructivist models.
Emotional choice theory seeks to capture not only the social but also the physiological and dynamic character of emotions. It posits that emotion plays a key role in normative action. Emotions endow norms and identities with meaning. If people feel strongly about norms, they are particularly likely to adhere to them. Rules that cease to resonate at an affective level, however, often come to lose their prescriptive power. Emotional choice theorists note that recent findings in neurology suggest that humans generally feel before they think. So emotions may lead them to prioritize the constructivist “logic of appropriateness” over the rationalist “logic of consequences,” or vice versa. Emotions may also infuse the logic of appropriateness and inform actors how to adjudicate between different norms.
See also
Constructivism (philosophy of science)
Constructivism (psychological school)
Emotional choice theory
English school of international relations theory
International legal theories
Logic of appropriateness
References
External links
Read an Interview with Social Constructivist Alexander Wendt
Femininity | Femininity (also called womanliness) is a set of attributes, behaviors, and roles generally associated with women and girls. Femininity can be understood as socially constructed, and there is also some evidence that some behaviors considered feminine are influenced by both cultural factors and biological factors. To what extent femininity is biologically or socially influenced is subject to debate. It is conceptually distinct from both the female biological sex and from womanhood, as all humans can exhibit feminine and masculine traits, regardless of sex and gender.
Traits traditionally cited as feminine include gracefulness, gentleness, empathy, humility, and sensitivity, though traits associated with femininity vary across societies and individuals, and are influenced by a variety of social and cultural factors.
Overview and history
Despite the terms femininity and masculinity being in common usage, there is little scientific agreement about what femininity and masculinity are. Among scholars, the concept of femininity has varying meanings.
Professor of English Tara Williams has suggested that modern notions of femininity in English-speaking society began during the medieval period at the time of the bubonic plague in the 1300s. Women in the Early Middle Ages were referred to simply within their traditional roles of maiden, wife, or widow. After the Black Death in England wiped out approximately half the population, traditional gender roles of wife and mother changed, and opportunities opened up for women in society. The words femininity and womanhood are first recorded in Chaucer around 1380.
In 1949, French intellectual Simone de Beauvoir wrote that "no biological, psychological or economic fate determines the figure that the human female presents in society" and "one is not born, but rather becomes, a woman". The idea was picked up in 1959 by Canadian-American sociologist Erving Goffman and in 1990 by American philosopher Judith Butler, who theorized that gender is not fixed or inherent but is rather a socially defined set of practices and traits that have, over time, grown to become labelled as feminine or masculine. Goffman argued that women are socialized to present themselves as "precious, ornamental and fragile, uninstructed in and ill-suited for anything requiring muscular exertion" and to project "shyness, reserve and a display of frailty, fear and incompetence".
Scientific efforts to measure femininity and masculinity were pioneered by psychologists Lewis Terman and Catherine Cox Miles in the 1930s. Their M–F model was adopted by other researchers and psychologists. The model posited that femininity and masculinity were innate and enduring qualities, not easily measured, opposite to one another, and that imbalances between them led to mental disorders.
Alongside the women's movement of the 1970s, researchers began to move away from the M–F model, developing an interest in androgyny. The Bem Sex Role Inventory and the Personal Attributes Questionnaire were developed to measure femininity and masculinity on separate scales. Using such tests, researchers found that the two dimensions varied independently of one another, casting doubt on the earlier view of femininity and masculinity as opposing qualities.
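To make the contrast with the earlier single M–F scale concrete, the sketch below illustrates the general logic of two-scale instruments such as the BSRI: each dimension is scored independently as the mean of self-ratings on its own item set, and a median-split scheme yields four categories rather than two opposing poles. The item lists, ratings, and median values here are illustrative stand-ins, not the published instrument:

```python
from statistics import mean

# Illustrative item sets; the real instruments use on the order of 20 items per scale.
FEMININE_ITEMS = ["affectionate", "gentle", "sympathetic"]
MASCULINE_ITEMS = ["assertive", "independent", "forceful"]

def subscale_scores(ratings: dict[str, int]) -> tuple[float, float]:
    """Mean self-rating (e.g., on a 1-7 scale) for each dimension, scored independently."""
    f = mean(ratings[item] for item in FEMININE_ITEMS)
    m = mean(ratings[item] for item in MASCULINE_ITEMS)
    return f, m

def classify(f: float, m: float, f_median: float = 4.9, m_median: float = 4.9) -> str:
    """Median-split classification: high on both scales is 'androgynous',
    low on both is 'undifferentiated' -- outcomes a single bipolar scale cannot express."""
    if f >= f_median and m >= m_median:
        return "androgynous"
    if f >= f_median:
        return "feminine"
    if m >= m_median:
        return "masculine"
    return "undifferentiated"

ratings = {"affectionate": 6, "gentle": 5, "sympathetic": 6,
           "assertive": 6, "independent": 7, "forceful": 5}
print(classify(*subscale_scores(ratings)))  # -> "androgynous"
```

Because the two scores vary independently, a respondent can be high (or low) on both at once, which is precisely the empirical pattern that undermined the older bipolar M–F model.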
Second-wave feminists, influenced by de Beauvoir, believed that although biological differences between females and males were innate, the concepts of femininity and masculinity had been culturally constructed, with traits such as passivity and tenderness assigned to women and aggression and intelligence assigned to men. Girls, second-wave feminists said, were then socialized with toys, games, television, and school into conforming to feminine values and behaviors. In her significant 1963 book The Feminine Mystique, American feminist Betty Friedan wrote that the key to women's subjugation lay in the social construction of femininity as childlike, passive, and dependent, and called for a "drastic reshaping of the cultural image of femininity."
Behavior and personality
Traits such as nurturance, sensitivity, sweetness, supportiveness, gentleness, warmth, passivity, cooperativeness, expressiveness, modesty, humility, empathy, affection, tenderness, and being emotional, kind, helpful, devoted, and understanding have been cited as stereotypically feminine. The defining characteristics of femininity vary between and even within societies.
The relationship between feminine socialization and heterosexual relationships has been studied by scholars, as femininity is related to women's and girls' sexual appeal to men and boys. Femininity is sometimes linked with sexual objectification. Sexual passiveness, or sexual receptivity, is sometimes considered feminine while sexual assertiveness and sexual desire are sometimes considered masculine.
Scholars have debated the extent to which gender identity and gender-specific behaviors are due to socialization versus biological factors. Social and biological influences are thought to be mutually interacting during development. Studies of prenatal androgen exposure have provided some evidence that femininity and masculinity are partly biologically determined. Other possible biological influences include evolution, genetics, epigenetics, and hormones (both during development and in adulthood).
In 1959, researchers such as John Money and Anke Ehrhardt proposed the prenatal hormone theory. Their research argues that sexual organs bathe the embryo with hormones in the womb, resulting in the birth of an individual with a distinctively male or female brain; this was suggested by some to "predict future behavioral development in a masculine or feminine direction". This theory, however, has been criticized on theoretical and empirical grounds and remains controversial. In 2005, scientific research investigating sex differences in psychology showed that gender expectations and stereotype threat affect behavior, and a person's gender identity can develop as early as three years of age. Money also argued that gender identity is formed during a child's first three years.
People who exhibit a combination of both masculine and feminine characteristics are considered androgynous, and feminist philosophers have argued that gender ambiguity may blur gender classification. Modern conceptualizations of femininity also rely not just upon social constructions, but upon the individualized choices made by women.
Philosopher Mary Vetterling-Braggin argues that all characteristics associated with femininity arose from early human sexual encounters which were mainly male-forced and female-unwilling, because of male and female anatomical differences. Others, such as Carole Pateman, Ria Kloppenborg, and Wouter J. Hanegraaff, argue that the definition of femininity is the result of how females must behave in order to maintain a patriarchal social system.
In his 1998 book Masculinity and Femininity: The Taboo Dimension of National Cultures, Dutch psychologist and researcher Geert Hofstede wrote that only behaviors directly connected with procreation can, strictly speaking, be described as feminine or masculine, and yet every society worldwide recognizes many additional behaviors as more suitable to females than males, and vice versa. He describes these as relatively arbitrary choices mediated by cultural norms and traditions, identifying "masculinity versus femininity" as one of five basic dimensions in his theory of cultural dimensions. Hofstede characterizes behaviors such as service, permissiveness, and benevolence as feminine, and describes as feminine those countries that stress equality, solidarity, quality of work life, and the resolution of conflicts by compromise and negotiation.
In Carl Jung's school of analytical psychology, the anima and animus are the two primary anthropomorphic archetypes of the unconscious mind. Jung described them as elements of his theory of the collective unconscious, a domain of the unconscious that transcends the personal psyche. In the unconscious of the male, the archetype finds expression as a feminine inner personality, the anima; equivalently, in the unconscious of the female, it is expressed as a masculine inner personality, the animus.
Clothing and appearance
In Western cultures, the ideal of feminine appearance has traditionally included long, flowing hair, clear skin, a narrow waist, and little or no body hair or facial hair. In other cultures, however, some expectations are different. For example, in many parts of the world, underarm hair is not considered unfeminine. Today, the color pink is strongly associated with femininity, whereas in the early 1900s pink was associated with boys and blue with girls.
These feminine ideals of beauty have been criticized as restrictive, unhealthy, and even racist. In particular, the prevalence of anorexia and other eating disorders in Western countries has frequently been blamed on the modern feminine ideal of thinness.
In many Muslim countries, women are required to cover their heads with a hijab (veil). It is considered a symbol of feminine modesty and morality. Some, however, see it as a symbol of objectification and oppression.
In history
Cultural standards vary on what is considered feminine. For example, in 16th century France, high heels were considered a distinctly masculine type of shoe, though they are currently considered feminine.
In Ancient Egypt, sheath and beaded net dresses were considered female clothing, while wraparound dresses, perfumes, cosmetics, and elaborate jewelry were worn by both men and women. In Ancient Persia, clothing was generally unisex, though women wore veils and headscarves. Women in Ancient Greece wore himations; and in Ancient Rome women wore the palla, a rectangular mantle, and the maphorion.
The typical feminine outfit of aristocratic women of the Renaissance was an undershirt with a gown and a high-waisted overgown, and a plucked forehead and beehive or turban-style hairdo.
Body alteration
Body alteration is the deliberate altering of the human body for aesthetic or other non-medical purposes. One such purpose has been to induce perceived feminine characteristics in women.
For centuries in Imperial China, smaller feet were considered to be a more aristocratic characteristic in women. The practice of foot binding was intended to enhance this characteristic, though it made walking difficult and painful.
In a few parts of Africa and Asia, neck rings are worn in order to elongate the neck. In these cultures, a long neck characterizes feminine beauty. The Padaung of Burma and Tutsi women of Burundi, for instance, practice this form of body modification.
Traditional roles
Femininity as a social construct relies on a binary gender system that treats men and masculinity as different from, and opposite to, women and femininity. In patriarchal societies, including Western ones, conventional attitudes to femininity contribute to the subordination of women, as women are seen as more compliant, vulnerable, and less prone to violence.
Gender stereotypes influence traditional feminine occupations, resulting in microaggression toward women who break traditional gender roles. These stereotypes include that women have a caring nature, have skill at household-related work, have greater manual dexterity than men, are more honest than men, and have a more attractive physical appearance. Occupational roles associated with these stereotypes include: midwife, teacher, accountant, data entry clerk, cashier, salesperson, receptionist, housekeeper, cook, maid, social worker, and nurse. Occupational segregation maintains gender inequality and the gender pay gap. Certain medical specializations, such as surgery and emergency medicine, are dominated by a masculine culture and have a higher salary.
Leadership is associated with masculinity in Western culture and women are perceived less favorably as potential leaders. However, some people have argued that feminine-style leadership, which is associated with leadership that focuses on help and cooperation, is advantageous over masculine leadership, which is associated with focusing on tasks and control. Female leaders are more often described by Western media using characteristics associated with femininity, such as emotion.
Explanations for occupational imbalance
Psychologist Deborah L. Best argues that primary sex characteristics of men and women, such as the ability to bear children, caused a historical sexual division of labor and that gender stereotypes evolved culturally to perpetuate this division.
The practice of bearing children tends to interrupt the continuity of employment. According to human capital theory, this detracts from female investment in higher education and employment training. Richard Anker of the International Labour Office argues that human capital theory does not explain the sexual division of labor, because many occupations tied to feminine roles, such as administrative assistance, require more knowledge, experience, and continuity of employment than low-skilled masculinized occupations, such as truck driving. Anker argues that the feminization of certain occupations limits employment options for women.
Role congruity theory
Role congruity theory proposes that people tend to view deviations from expected gender roles negatively. It supports the empirical evidence that gender discrimination exists in areas traditionally associated with one gender or the other. It is sometimes used to explain why people have a tendency to evaluate behavior that fulfills the prescriptions of a leader role less favorably when it is enacted by a woman.
Religion and politics
Asian religions
Shamanism may have originated as early as the Paleolithic period, predating all organized religions. Archeological finds have suggested that the earliest known shamans were female, and contemporary shamanic roles such as the Korean mudang continue to be filled primarily by women.
In Hindu traditions, Devi is the female aspect of the divine. Shakti is the divine feminine creative power, the sacred force that moves through the entire universe and the agent of change. She is the female counterpart without whom the male aspect, which represents consciousness or discrimination, remains impotent and void. As the female manifestation of the supreme lord, she is also called Prakriti, the basic nature of intelligence by which the Universe exists and functions. In Hinduism, the universal creative force Yoni is feminine, with inspiration being the life force of creation.
In Taoism, the concept of yin represents the primary force of the female half of yin and yang. The yin is also present, to a smaller proportion, in the male half. The yin can be characterized as slow, soft, yielding, diffuse, cold, wet, and passive.
Abrahamic theology
Although the Abrahamic God is typically described in masculine terms—such as father or king—many theologians argue that this is not meant to indicate the gender of God. According to the Catechism of the Catholic Church, God "is neither man nor woman: he is God". Several recent writers, such as feminist theologian Sallie McFague, have explored the idea of "God as mother", examining the feminine qualities attributed to God. For example, in the Book of Isaiah, God is compared to a mother comforting her child, while in the Book of Deuteronomy, God is said to have given birth to Israel.
The Book of Genesis describes the divine creation of the world out of nothing or ex nihilo. In Wisdom literature and in the wisdom tradition, wisdom is described as feminine. In many books of the Old Testament, including Wisdom and Sirach, wisdom is personified and called she. According to David Winston, because wisdom is God's "creative agent," she must be intimately identified with God.
The Wisdom of God is feminine in Hebrew: Chokmah, in Arabic: Hikmah, in Greek: Sophia, and in Latin: Sapientia. In Hebrew, both Shekhinah (the Holy Spirit and divine presence of God) and Ruach HaKodesh (divine inspiration) are feminine.
In Christian Kabbalah, Chokmah (wisdom and intuition) is the force in the creative process that God used to create the heavens and the earth. Binah (understanding and perception) is the great mother, the feminine receiver of energy and giver of form. Binah receives the intuitive insight from Chokmah and dwells on it in the same way that a mother receives the seed from the father and keeps it within her until it is time to give birth. The intuition, once received and contemplated with perception, leads to the creation of the Universe.
Communism
Communist revolutionaries initially depicted idealized womanhood as muscular, plainly dressed and strong, with good female communists shown as undertaking hard manual labour, using guns, and eschewing self-adornment. Contemporary Western journalists portrayed communist states as the enemy of traditional femininity, describing women in communist countries as "mannish" perversions. In revolutionary China in the 1950s, Western journalists described Chinese women as "drably dressed, usually in sloppy slacks and without makeup, hair waves or nail polish" and wrote that "Glamour was communism's earliest victim in China. You can stroll the cheerless streets of Peking all day, without seeing a skirt or a sign of lipstick; without thrilling to the faintest breath of perfume; without hearing the click of high heels, or catching the glint of legs sheathed in nylon." In communist Poland, changing from high heels to workers' boots symbolized women's shift from the bourgeoisie to socialism.
Later, the initial state portrayals of idealized femininity as strong and hard-working began to also include more traditional notions such as gentleness, caring and nurturing behaviour, softness, modesty and moral virtue, requiring good communist women to become "superheroes who excelled in all spheres", including working at jobs not traditionally regarded as feminine in nature.
Communist ideology explicitly rejected some aspects of traditional femininity that it viewed as bourgeois and consumerist, such as helplessness, idleness and self-adornment. In Communist countries, some women resented not having access to cosmetics and fashionable clothes. In her 1993 book of essays How We Survived Communism & Even Laughed, Croatian journalist and novelist Slavenka Drakulic wrote about "a complaint I heard repeatedly from women in Warsaw, Budapest, Prague, Sofia, East Berlin: 'Look at us – we don't even look like women. There are no deodorants, perfumes, sometimes even no soap or toothpaste. There is no fine underwear, no pantyhose, no nice lingerie[']" and "Sometimes I think the real Iron Curtain is made of silky, shiny images of pretty women dressed in wonderful clothes, of pictures from women's magazines ... The images that cross the borders in magazines, movies or videos are therefore more dangerous than any secret weapon, because they make one desire that 'otherness' badly enough to risk one's life trying to escape."
As communist countries such as Romania and the Soviet Union began to liberalize, their official media began representing women in more conventionally feminine ways compared with the "rotund farm workers and plain-Jane factory hand" depictions they had previously been publishing. As perfumes, cosmetics, fashionable clothing, and footwear became available to ordinary women in the Soviet Union, East Germany, Poland, Yugoslavia and Hungary, they began to be presented not as bourgeois frivolities but as signs of socialist modernity. In China, with the economic liberation started by Deng Xiaoping in the 1980s, the state stopped discouraging women from expressing conventional femininity, and gender stereotypes and commercialized sexualization of women which had been suppressed under communist ideology began to rise.
In men
In many cultures, men who display qualities considered feminine are often stigmatized and labeled as weak. Effeminate men are often associated with homosexuality, although femininity is not necessarily related to a man's sexual orientation. Because men are pressured to be masculine and heterosexual, feminine men are assumed to be gay or queer because of how they perform their gender. This assumption limits the way one is allowed to express one's gender and sexuality.
Cross-dressing and drag are two public performances of femininity by men that have been popularly known and understood throughout many western cultures. Men who wear clothing associated with femininity are often called cross-dressers. A drag queen is a man who wears flamboyant women's clothing and behaves in an exaggeratedly feminine manner for entertainment purposes.
Feminist views
Feminist philosophers such as Judith Butler and Simone de Beauvoir contend that femininity and masculinity are created through repeated performances of gender; these performances reproduce and define the traditional categories of sex and/or gender.
Many second-wave feminists reject what they regard as constricting standards of female beauty, created for the subordination and objectifying of women and self-perpetuated by reproductive competition and women's own aesthetics.
Others, such as lipstick feminists and some other third-wave feminists, argue that feminism should not devalue feminine culture and identity, and that symbols of feminine identity such as make-up, suggestive clothing and having a sexual allure can be valid and empowering personal choices for both sexes.
Julia Serano notes that masculine girls and women face much less social disapproval than feminine boys and men, which she attributes to sexism. Serano argues that women wanting to be like men is consistent with the idea that maleness is more valued in contemporary culture than femaleness, whereas men being willing to give up masculinity in favour of femininity directly threatens the notion of male superiority as well as the idea that men and women should be opposites. To support her thesis, Serano cites the far greater public scrutiny and disdain experienced by male-to-female cross-dressers compared with that faced by women who dress in masculine clothes, as well as research showing that parents are likelier to respond negatively to sons who like Barbie dolls and ballet or wear nail polish than they are to daughters exhibiting comparably masculine behaviours. Serano notes that some behaviors, such as frequent smiling or avoiding eye contact with strangers, are considered feminine because they are practised disproportionately by women, and likely have resulted from women's attempts to negotiate through a world which is sometimes hostile to them.
See also
Butch and femme
Effeminacy
Femboy
Feminine beauty ideal
Feminine psychology
Feminism
Feminization (sociology)
Gender expression
Gender identity
Gender role
Gender studies
Girly girl
Lipstick feminism
Lipstick lesbian
Marianismo
Masculinity
Nature versus nurture
Otokonoko
Sex–gender distinction
Social construction of gender
Sociology of gender
Transfeminine
References
External links | 0.765748 | 0.997589 | 0.763902 |