Historical fantasy

Historical fantasy is a category of fantasy and genre of historical fiction that incorporates fantastic elements (such as magic) into a more "realistic" narrative. There is much crossover with other subgenres of fantasy; those classed as Arthurian, Celtic, or Dark Ages could just as easily be placed in historical fantasy. Stories fitting this classification generally take place prior to the 20th century.
Films of this genre may be set in biblical times or classical antiquity. Their plots are often based very loosely on mythology or legends of Greco-Roman history, or on the surrounding cultures of the same era.
Overview
Historical fantasy usually takes one of four common approaches:
Magic, mythical creatures, such as dragons, or other supernatural elements, such as magic rings, co-exist invisibly with the mundane world, with the majority of people being unaware of it. In this, it has a close similarity to contemporary fantasy. This commonly overlaps with the secret history trope. Alternatively, the author's narrative shows or implies that by the present day, magic will have "retreated" from the world or been hidden to all but a few initiates so as to allow history to revert to the familiar version we know. An example of this can be found in Lord Dunsany's The Charwoman's Shadow, which takes place in Spain, but which ends with the magician in it removing himself and all creatures of romance from the world, thereby ending the Golden Age.
The story is an alternate history, in which the past or present has been significantly changed because an actual historical event turned out differently.
The story takes place in a secondary world with specific and recognizable parallels to a known place (or places) and a definite historical period, rather than taking the geographic and historical "mix and match" favoured by other works of secondary world fantasy. However, many, if not most, works by fantasy authors derive ideas and inspiration from real events, making the borders of this approach unclear.
Historical fantasy may also be set in a fictional world which resembles a period from history but is not that actual history.
All four approaches have overlapped in the subgenre of steampunk, which is commonly associated with science fiction literature. However, not all steampunk fantasy belongs to the historical fantasy subgenre.
Subgenres
Arabian fantasy
After Antoine Galland's translation of One Thousand and One Nights became popular in Europe, many writers wrote fantasy based on Galland's romantic image of the Middle East and North Africa. Early examples included the satirical tales of Anthony Hamilton and Zadig by Voltaire. English-language work in the Arabian fantasy genre includes Rasselas (1759) by Samuel Johnson, The Tales of the Genii by James Ridley (1764), Vathek by William Thomas Beckford (1786), George Meredith's The Shaving of Shagpat (1856), Khaled (1891) by F. Marion Crawford, and James Elroy Flecker's Hassan (1922).
In the late 1970s, interest in the subgenre revived with Hasan (1977) by Piers Anthony. This was followed by several other novels reworking Arabian legend: the metafictional The Arabian Nightmare (1983) by Robert Irwin, Diana Wynne Jones' children's novel Castle in the Air (1990), Tom Holt's humorous Djinn Rummy (1995) and Hilari Bell's Fall of a Kingdom.
Celtic fantasy
Celtic fantasy has links to historical fantasy and Celtic historical fiction. Celtic historical fantasy includes such works as Katharine Kerr's Deverry series and Teresa Edgerton's Green Lion trilogy, both (loosely) based on ancient Celtic cultures. The separate folklore of Ireland, Wales, and Scotland has sometimes been used indiscriminately, sometimes with great effect, as in Paul Hazel's Finnbranch trilogy: Yearwood (1980), Undersea (1982), and Winterking (1985); other writers have kept the three traditions distinct and drawn on a single source.
Notable works inspired by Irish mythology included James Stephens' The Crock of Gold (1912), Lord Dunsany's The Curse of the Wise Woman (1933), Flann O'Brien's humorous At Swim-Two-Birds (1939), Pat O'Shea's The Hounds of the Morrigan (1985) and novels by Peter Tremayne, Morgan Llywelyn and Gregory Frost.
The Welsh tradition has been particularly influential, owing to its connection to King Arthur and to its collection in a single work, the epic Mabinogion. One influential retelling of this was the fantasy work of Evangeline Walton: The Island of the Mighty, The Children of Llyr, The Song of Rhiannon, and Prince of Annwn. A substantial amount of Celtic fantasy draws on the Welsh tradition; other notable authors of Welsh Celtic fantasy include Kenneth Morris, John Cowper Powys, Vaughan Wilkins, Lloyd Alexander, Alan Garner, and Jenny Nimmo.
Scottish Celtic fantasy is less common, but James Hogg, John Francis Campbell (The Celtic Dragon Myth, 1911), Fiona MacLeod, William Sharp, George Mackay Brown and Deborah Turner Harris all wrote material based on Scottish myths and legends.
Fantasy based on the Breton folklore branch of Celtic mythology does not often appear in the English language. However, several noted writers have utilized such material; Robert W. Chambers' The Demoiselle d'Ys (from The King in Yellow, 1895) and A. Merritt in Creep, Shadow! (1934) both drew on the Breton legend of the lost city of Ys, while "The Lay of Aotrou and Itroun" (1930) by J. R. R. Tolkien is a narrative poem based on the Breton legend of the Corrigan.
Classical fantasy
Classical fantasy is a subgenre of historical fantasy based on the Greek and Roman myths. Symbolism from classical mythology is enormously influential on Western culture, but it was not until the 19th century that it was used in the context of literary fantasy. Richard Garnett (The Twilight of the Gods and Other Tales, 1888, revised 1903) and John Kendrick Bangs (Olympian Nights, 1902) used the Greek myths for satirical purposes.
20th-century writers who made extensive use of the subgenre included John Erskine, who continued the satirical tradition of classical fantasy in such works as The Private Life of Helen of Troy (1925) and Venus, the Lonely Goddess (1949). Eden Phillpotts used Greek myths to make philosophical points in such fantasies as Pan and the Twins (1922) and Circe's Island (1925). Jack Williamson's The Reign of Wizardry (Unknown Worlds, 1940) is an adventure story based on the legend of Theseus. Several of Thomas Burnett Swann's novels draw on Greek and Roman myth, including Day of the Minotaur (1966). The Firebrand (1986) by Marion Zimmer Bradley and Olympic Games (2004) by Leslie What are both classical fantasy tales with feminist undertones. Guy Gavriel Kay, who has made a career of historical fantasy, set the two novels of his Sarantine Mosaic sequence in a parallel world heavily mirroring Justinian I's Byzantium.
Fantasy of manners
Fantasy of manners, also known as "mannerpunk", is a subgenre that takes place within a strict, elaborate, and hierarchical social structure. Inspired by the social novels and comedies of manners of such authors as Jane Austen and Oscar Wilde, fantasy of manners involves class struggles among genteel characters in urban environments; while duels are permitted, witty repartee often substitutes for physical conflict. Examples of fantasy of manners include Swordspoint by Ellen Kushner and Jonathan Strange & Mr Norrell by Susanna Clarke.
Fantasy steampunk
Fantasy steampunk is another subgenre of historical fantasy, generally set in the Victorian or Edwardian eras. Steam-powered technology mixed with Victorian or Gothic-style architecture is the most widely recognized interpretation of the genre, and exposed clockwork, rusty gears, and engines are among its most popular visual signatures. The genre typically also includes elements of real-world technology such as steam power, telegraphy, and in some cases early telephones or combustion engines. Some works in this genre are alternate history.
Philip Pullman's The Golden Compass is an example of a fantasy steampunk novel, along with The Half-Made World by Felix Gilman and The Anubis Gates by Tim Powers.
Gaslamp fantasy
Gaslamp fantasy is a subgenre of both steampunk and historical fantasy that takes place in an alternative universe based on the Victorian or Edwardian eras. However, magic plays a more important role than the era's mechanical technology.
Gunpowder fantasy
Sometimes called "muskets and magic", gunpowder fantasy is generally set in a world with technology roughly equivalent to that of early modern Europe (the 16th through 18th centuries), particularly the later part of that span. It combines elements of high fantasy (magic, mythical creatures, races such as elves, epic scale) with firearms such as muskets and rifles. A relatively new subgenre, it has been gaining popularity. It differs from medieval fantasy in its inclusion of gunpowder, and from steampunk in that it avoids the fantastic inventions (airships, machines, etc.) common there. Like steampunk's other neighbors, gunpowder fantasy remains a step below its more popular cousin in prominence.
Examples of gunpowder fantasy include the Solomon Kane stories (1928–) created by Robert E. Howard, the Monster Blood Tattoo series by D. M. Cornish (2006–2010), Fullmetal Alchemist by Hiromu Arakawa (2001–2010), the Terrarch Tetralogy by William King (2011–), The Powder Mage trilogy by Brian McClellan (2013–2015), and The Shadow Campaigns series (2013–2018) by Django Wexler.
Medieval fantasy
Medieval fantasy encompasses works in which aspects of medieval history, such as legends from the Middle Ages, and aesthetics such as medievalisms overlap with fantasy. According to the Getty Museum, it contrasts with folklore, which is set in a "familiar world with stock characters and plots". Subgenres of fantasy such as Gothic fiction, sword and sorcery, fairy tales, high fantasy, and low fantasy can also overlap with medieval fantasy.
The broad genre of medieval fantasy is common among role-playing games and high fantasy literature. Notable examples of medieval fantasy games listed by the Getty Museum include the Legend of Zelda series (1986–) and Dungeons & Dragons (1974). Examples of literature listed include The Lord of the Rings trilogy (1954–1955) and A Song of Ice and Fire (1996–).
Prehistoric fantasy
Prehistoric fantasy comprises stories set in prehistoric times that depict the lives of prehistoric people. Examples include the Earth's Children series by Jean M. Auel (1980–2011) and the Chronicles of Ancient Darkness series by Michelle Paver.
Wuxia
Wǔxiá, literally meaning "martial (arts) heroes", is a subgenre of quasi-fantasy and martial arts fiction in literature, television and cinema. Wǔxiá figures prominently in the popular culture of Chinese-speaking areas, and its most important writers have devoted followings.
The wǔxiá genre is a blend of the philosophy of xiá (俠, "honor code", "an ethical person", "a hero") and China's long history in wǔshù ("kung fu" or "martial arts"). A martial artist who follows the code of xiá is called a swordsman, or xiákè (俠客/侠客, literally "chivalrous guest"). Japan's samurai bushidō traditions, England's knightly chivalric traditions, and America's gunslinger Western traditions all share some aspects with China's xiá traditions. The swordsman, however, need not serve a lord or hold any military power, nor come from an aristocratic class.
See also
Alternate history
Civilizing mission

The civilizing mission (French: mission civilisatrice; Portuguese: missão civilizadora; Spanish: misión civilizadora) is a political rationale for military intervention and for colonization purporting to facilitate the Westernization or Japanization of indigenous peoples, especially in the period from the 15th to the 20th centuries. As a principle of Western culture, the term was most prominently used in justifying French colonialism in the late-15th to mid-20th centuries. The civilizing mission was the cultural justification for the colonization of French Algeria, French West Africa, French Indochina, Portuguese Angola and Portuguese Guinea, Portuguese Mozambique and Portuguese Timor, among other colonies. It was also a popular justification for British and German colonialism. In the Russian Empire, it was associated with the Russian conquest of Central Asia and the Russification of that region. The Western colonial powers claimed that, as Christian nations, they were duty bound to disseminate Western civilization to what they perceived as heathen, primitive cultures. The rationale was also applied by the Empire of Japan, which colonized Korea.
Origins
In the eighteenth century, Europeans saw history as a linear, inevitable, and perpetual process of sociocultural evolution led by Western Europe. From the reductionist cultural perspective of Western Europe, colonialists saw non-Europeans as "backward nations", as people intrinsically incapable of socioeconomic progress. In France, the philosopher Marquis de Condorcet formally postulated the existence of a European "holy duty" to help non-European peoples "which, to civilize themselves, wait only to receive the means from us, to find brothers among Europeans, and to become their friends and disciples".
Modernization theory, the progressive transition from traditional, premodern society to modern, industrialized society, proposed that the economic self-development of a non-European people is incompatible with retaining their culture (mores, traditions, customs), and that breaking from the old culture is a prerequisite for socioeconomic progress, by way of practical revolutions in the social, cultural, and religious institutions that would change a people's collective psychology and mental attitude, philosophy and way of life. Development criticism therefore sees economic development as a continuation of the civilizing mission: because becoming civilized invariably means becoming more "like us", "civilizing a people" means that every society must become a capitalist consumer society by renouncing its native culture and becoming Westernized. Cultivation of land and people was a similarly employed concept, used instead of civilizing in German-speaking colonial contexts to press for colonization and cultural imperialism through "extensive cultivation" and "culture work".
According to Jennifer Pitts, there was considerable skepticism among French and British liberal thinkers (such as Adam Smith, Jeremy Bentham, Edmund Burke, Denis Diderot and Marquis de Condorcet) about empire in the 1780s. However, by the mid-19th century, liberal thinkers such as John Stuart Mill and Alexis de Tocqueville endorsed empire on the basis of the civilizing mission.
By state
Britain
Although the British did not invent the term, the notion of a "civilizing mission" was equally important to them as a justification of colonialism. It was used to legitimize British rule over the colonized, especially when the colonial enterprise was not very profitable.
The British used their sports as a tool to spread their values and culture among native populations, as well as a way of emphasizing their own dominance, since they owned the rules of these sports and were naturally more experienced at playing them. Test cricket, for example, was seen as a sport that inherently embodied the values of fair play and civilized conduct. In some cases, British sports served the purpose of providing exercise and integration across social boundaries for native populations. The growth of British sports led to a natural decline of the colonized peoples' own sports, creating fear among some that a loss of their native culture might hamper their ability to resist colonial rule. Over time, colonized peoples came to see British sports as a venue to prove their equality with the British, and victories against the British in sports gave momentum to nascent independence movements.
The idea that the British were bringing civilization to the uncivilized areas of the world is famously expressed in Rudyard Kipling's poem The White Man's Burden.
France
Alice Conklin has shown in her works how the French colonial empire coexisted with the apparently opposite ideal of the "Republic".
Netherlands
United States
The concept of a "civilizing mission" was also adopted by the United States during the age of New Imperialism in the late 19th and early 20th centuries. Such projects included the US annexation of the Philippines in the aftermath of the Spanish-American War of 1898. The McKinley administration declared that the US position in the Philippines was to "oversee the establishment of a civilian government" on the model of the United States. That would be done through a civilizing process entailing a "medical reformation" and other socioeconomic reforms. The Spanish health system had broken down after the 1898 war and was replaced with an American military model made up of public health institutions. The "medical reformation" was carried out with "military rigor" as part of a civilizing process in which American public health officers set out to train native Filipinos in the "correct techniques of the body". The process of "rationalized hygiene" was a technique of colonization in the Philippines, part of the American physicians' assurance that the colonized Philippines was inhabited with propriety. Other "sweeping reforms and ambitious public works projects" included the implementation of a free public school system, as well as architecture intended to foster "economic growth and civilizing influence", an important component of McKinley's "benevolent assimilation".
Similar "civilizing" tactics were also incorporated into the American colonization of Puerto Rico in 1900. They would include extensive reform such as the legalization of divorce in 1902 in an attempt to instill American social mores into the island’s populace to "legitimatize the emerging colonial order."
Purported benefits for the colonized nation included "greater exploitation of natural resources, increased production of material goods, raised living standards, expanded market profitability and sociopolitical stability".
However, the occupation of Haiti in 1915 also showed a darker side of the American "civilizing mission". The historian Mary Renda has argued that the occupation of Haiti was solely for the "purposes of economic exploitation and strategic advantage", rather than to provide Haiti with "protection, education and economic support".
Portugal
After consolidating its territory in the 13th century through the Reconquista of the Muslim states of western Iberia, the Kingdom of Portugal started to expand overseas. In 1415, Islamic Ceuta was occupied by the Portuguese during the reign of John I of Portugal. Portuguese expansion in North Africa was the beginning of a larger process eventually known as the Portuguese overseas expansion, whose goals included the expansion of Christianity into Muslim lands and the nobility's desire for epic acts of war and conquest, pursued with the support of the Pope.
As the Portuguese extended their influence around the coast to Mauritania, Senegambia (by 1445) and Guinea, they created trading posts. Rather than become direct competitors to the Muslim merchants, they used expanding market opportunities in Europe and the Mediterranean to increase trade across the Sahara. In addition, Portuguese merchants gained access to the African interior via the Senegal and Gambia rivers, which crossed long-standing trans-Saharan routes. The Portuguese brought in copper ware, cloth, tools, wine and horses. Trade goods soon also included arms and ammunition. In exchange, the Portuguese received gold (transported from mines of the Akan deposits), pepper (a trade which lasted until Vasco da Gama reached India in 1498) and ivory. It was not until they reached the Kongo coast in the 1480s that they moved beyond Muslim trading territory in Africa.
Forts and trading posts were established along the coast. Portuguese sailors, merchants, cartographers, priests and soldiers had the task of taking over the coastal areas, settling, and building churches, forts and factories, as well as exploring areas unknown to Europeans. The Company of Guinea was founded as a Portuguese governmental institution to control the trade; it was called Casa da Guiné or Casa da Guiné e Mina from 1482 to 1483, and Casa da Índia e da Guiné in 1499.
The first of the major European trading forts, Elmina, was founded on the Gold Coast in 1482 by the Portuguese. Elmina Castle (originally known as the "São Jorge da Mina Castle") was modeled on the Castelo de São Jorge, one of the earliest royal residences in Lisbon. Elmina, which means "the port", became a major trading center. By the beginning of the colonial era, there were forty such forts operating along the coast. Rather than being icons of colonial domination, the forts acted as trading posts and rarely saw military action; the fortifications were important, however, when arms and ammunition were being stored prior to trade. The 15th-century Portuguese exploration of the African coast is commonly regarded as the harbinger of European colonialism; it also marked the beginnings of the Atlantic slave trade, Christian missionary evangelization, and the first globalization processes, which were to become a major element of European colonialism until the end of the 18th century.
Although the Portuguese Empire's policy regarding native peoples in the less technologically advanced places around the world (most prominently in Brazil) had always been devoted to enculturation, including teaching and evangelization of the indigenous populations, as well as the creation of novel infrastructure to openly support these roles, it reached its largest extent after the 18th century in what was then Portuguese Africa and Portuguese Timor. New cities and towns, with their Europe-inspired infrastructure, which included administrative, military, healthcare, educational, religious, and entrepreneurial halls, were purportedly designed to accommodate Portuguese settlers.
The Portuguese explorer Paulo Dias de Novais founded Luanda in 1575 as "São Paulo de Loanda", with a hundred families of settlers and four hundred soldiers. Benguela, a Portuguese fort from 1587 which became a town in 1617, was another important early settlement founded and ruled by the Portuguese, who went on to establish several settlements, forts and trading posts along the coastal strip of Africa. On the Island of Mozambique, one of the first places where the Portuguese permanently settled in Sub-Saharan Africa, they built the Chapel of Nossa Senhora de Baluarte in 1522, now considered the oldest European building in the southern hemisphere. The hospital built there later, a majestic neo-classical building constructed in 1877 by the Portuguese, with a garden decorated with ponds and fountains, was for many years the biggest hospital south of the Sahara.
Estatuto do Indigenato
The establishment of a dual, racialized civil society was formally recognized in the Estatuto do Indigenato (Statute of the Indigenous Populations), adopted in 1929, and was based on the subjective concept of civilization versus tribalism. Portugal's colonial authorities were totally committed to developing a fully multiethnic "civilized" society in its African colonies, but that goal, the "civilizing mission", would only be achieved after a period of Europeanization or enculturation of the native black tribes and ethnocultural groups. It was a policy which had already been stimulated in the former Portuguese colony of Brazil. Under Portugal's Estado Novo regime, headed by António de Oliveira Salazar, the Estatuto established a distinction between the "colonial citizens", subject to Portuguese law and entitled to citizenship rights and duties effective in the "metropole", and the indigenas (natives), subject to both colonial legislation and their customary, tribal laws.
Between the two groups, there was a third small group, the assimilados, comprising native blacks, mulatos, Asians, and mixed-race people, who had at least some formal education, were not subjected to paid forced labor, were entitled to some citizenship rights, and held a special identification card that differed from the one imposed on the immense mass of the African population (the indigenas), a card that the colonial authorities conceived of as a means of controlling the movements of forced labor (CEA 1998). The indigenas were subject to the traditional authorities, who were gradually integrated into the colonial administration and charged with solving disputes, managing the access to land, and guaranteeing the flows of workforce and the payment of taxes. As several authors have pointed out (Mamdani 1996; Gentili 1999; O'Laughlin 2000), the Indigenato regime was the political system that subordinated the immense majority of native Africans to local authorities entrusted with governing, in collaboration with the lowest echelon of the colonial administration, the "native" communities described as tribes and assumed to have a common ancestry, language, and culture.
After World War II, as communist and anti-colonial ideologies spread across Africa, many clandestine political movements were established in support of independence. Whether their claims were exaggerated anti-Portuguese/anti-colonial propaganda, an accurate account of a dominant tendency in Portuguese Africa, or a mix of both, these movements asserted that since policies and development plans were primarily designed by the ruling authorities for the benefit of the territories' ethnic Portuguese population, little attention was paid to local tribal integration and the development of native communities. According to the official guerrilla statements, this affected a majority of the indigenous population, who suffered both state-sponsored discrimination and enormous social pressure. Many felt they had received too little opportunity or too few resources to upgrade their skills and improve their economic and social situation to a degree comparable to that of the Europeans. Statistically, Portuguese Africa's white Portuguese were indeed wealthier and more skilled than the black indigenous majority, but the late 1950s, the 1960s, and especially the early 1970s saw a gradual change, grounded in new socioeconomic developments and egalitarian policies for all.
Colonial wars
The Portuguese Colonial War began in Portuguese Angola on 4 February 1961, in an area called the Zona Sublevada do Norte (ZSN or the Rebel Zone of the North), consisting of the provinces of Zaire, Uíge and Cuanza Norte. The U.S.-backed UPA wanted national self-determination, while for the Portuguese, who had settled in Africa and ruled considerable territory since the 15th century, their belief in a multi-racial, assimilated overseas empire justified going to war to prevent its breakup and protect its populations. Portuguese leaders, including António de Oliveira Salazar, defended the policy of multiracialism, or Lusotropicalism, as a way of integrating Portuguese colonies, and their peoples, more closely with Portugal itself. For the Portuguese ruling regime, the overseas empire was a matter of national interest. In Portuguese Africa, trained Portuguese black Africans were allowed to occupy positions in several occupations including specialized military, administration, teaching, health, and other posts in the civil service and private businesses, as long as they had the right technical and human qualities. In addition, intermarriage of black women with white Portuguese men was a common practice since the earlier contacts with the Europeans. The access to basic, secondary, and technical education was being expanded and its availability was being increasingly opened to both the indigenous and European Portuguese of the territories.
Examples of this policy include several black Portuguese Africans who became prominent individuals during the war or in the post-independence period, and who had studied during Portuguese rule in local schools or even in Portuguese schools and universities in the mainland (the metropole): Samora Machel, Mário Pinto de Andrade, Marcelino dos Santos, Eduardo Mondlane, Agostinho Neto, Amílcar Cabral, Joaquim Chissano, and Graça Machel are just a few examples. Two large state-run universities were founded in Portuguese Africa in the early 1960s (the Universidade de Luanda in Angola and the Universidade de Lourenço Marques in Mozambique, awarding a wide range of degrees from engineering to medicine), at a time when only four public universities were in operation in mainland Portugal, two of them in Lisbon (compared with the 14 Portuguese public universities of today). Several figures in Portuguese society, including one of the most idolized sports stars in Portuguese football history, Eusébio, a black footballer from Portuguese East Africa, were further examples of assimilation and multiracialism.
Since 1961, with the beginning of the colonial wars in its overseas territories, Portugal had begun to incorporate black Portuguese Africans in the war effort in Angola, Portuguese Guinea, and Portuguese Mozambique, based on concepts of multi-racialism and preservation of the empire. African participation on the Portuguese side of the conflict ranged from marginal roles as laborers and informers to participation in highly trained operational combat units, including as platoon commanders. As the war progressed, the use of African counterinsurgency troops increased; on the eve of the military coup of 25 April 1974, Africans accounted for more than 50 percent of Portuguese forces fighting the war. Owing to the technological gap between the two civilizations and the centuries-long colonial era, Portugal had been a driving force in the development and shaping of all Portuguese Africa since the 15th century.
In the 1960s and early 1970s, in order to counter the increasing insurgency of the nationalist guerrillas and to show the Portuguese people and the world that the overseas territories were totally under control, the Portuguese government accelerated its major development programs to expand and upgrade the infrastructure of the overseas territories in Africa, creating new roads, railways, bridges, dams, irrigation systems, schools and hospitals to stimulate an even higher level of economic growth and support from the populace. As part of this redevelopment program, construction of the Cahora Bassa Dam began in 1969 in the Overseas Province of Mozambique (the official designation of Portuguese Mozambique by then). This particular project became intrinsically linked with Portugal's concerns over security in the overseas colonies. The Portuguese government viewed the construction of the dam as a testimony to Portugal's "civilizing mission" and intended for the dam to reaffirm Mozambican belief in the strength and security of the Portuguese colonial government.
Brazil
When the Portuguese explorers arrived in 1500, the Amerindians were mostly semi-nomadic tribes, with the largest populations living on the coast and along the banks of major rivers. Unlike Christopher Columbus, who thought he had reached India, the Portuguese sailor Vasco da Gama had already reached the actual India by sailing around Africa, two years before Pedro Álvares Cabral reached Brazil. Nevertheless, the word índios ("Indians") was by then established to designate the peoples of the New World, and it remains so in the Portuguese language to this day, the people of India being called indianos.
Initially, the Europeans saw the natives as noble savages, and miscegenation began straight away. Tribal warfare and cannibalism convinced the Portuguese that they should "civilize" the Amerindians; indeed, one of the four groups of Aché people in Paraguay practiced cannibalism regularly until the 1960s. When the Kingdom of Portugal's explorers discovered Brazil in 1500 and started to colonize its new possessions in the New World, the territory was inhabited by various indigenous peoples and tribes that had developed neither a writing system nor school education.
The Society of Jesus (the Jesuits) had been a missionary order since its founding in 1540. Evangelization was one of its primary goals; however, the Jesuits were also committed to education, both in Europe and overseas. Their missionary activities, in the cities and in the countryside alike, were complemented by a strong commitment to education, which took the form of opening schools for young boys, first in Europe, but soon extending to both America and Asia. The foundation of Catholic missions, schools, and seminaries was another consequence of the Jesuit involvement in education. As the spaces and cultures where the Jesuits were present varied considerably, their evangelizing methods diverged by location. However, the Society's engagement in trade, architecture, science, literature, languages, arts, music, and religious debate corresponded, in fact, to the common and foremost purpose of Christianization.
By the middle of the 16th century, the Jesuits were present in West Africa, South America, Ethiopia, India, China, and Japan. In a period of history when much of the world's population was illiterate, the Portuguese Empire was home to one of the first universities founded in Europe, the University of Coimbra, which remains one of the oldest universities in operation. Throughout the centuries of Portuguese rule, Brazilian students, mostly graduates of the Jesuit missions and seminaries, were allowed and even encouraged to enroll in higher education in mainland Portugal. By 1700, reflecting a larger transformation of the Portuguese Empire, the Jesuits had decisively shifted their activity from the East Indies to Brazil. In the late 18th century, the Marquis of Pombal, the Portuguese minister of the kingdom, attacked the power of the privileged nobility and the church and expelled the Jesuits from Portugal and its overseas possessions. Pombal seized the Jesuit schools and introduced educational reforms all over the empire.
In 1772, even before the establishment of the Science Academy of Lisbon (1779), one of the first learned societies of both Brazil and the Portuguese Empire, the Sociedade Scientifica, was founded in Rio de Janeiro. Furthermore, in 1797, the first botanic institute was founded in Salvador, Bahia. In 1792, the Escola Politécnica of Rio de Janeiro (then the Real Academia de Artilharia, Fortificação e Desenho) was created through a decree issued by the Portuguese authorities as a higher-education school for the teaching of the sciences and engineering; it belongs today to the Universidade Federal do Rio de Janeiro and is the oldest engineering school in Brazil and one of the oldest in Latin America. A royal letter of November 20, 1800 by King John VI of Portugal established in Rio de Janeiro the Aula Prática de Desenho e Figura, the first institution in Brazil dedicated to teaching the arts. During colonial times, the arts were mainly religious or utilitarian and were learned in a system of apprenticeship. A decree of August 12, 1816, created the Escola Real de Ciências, Artes e Ofícios (Royal School of Sciences, Arts and Crafts), which established official education in the fine arts and was the foundation of the current Escola Nacional de Belas Artes.
In the 19th century, the Portuguese royal family, headed by João VI, arrived in Rio de Janeiro fleeing Napoleon's invasion of Portugal in 1807. João VI gave impetus to the expansion of European civilization in Brazil. In a short period between 1808 and 1810, the Portuguese government founded the Royal Naval Academy and the Royal Military Academy, the Biblioteca Nacional, the Rio de Janeiro Botanical Garden, the Medico-Chirurgical School of Bahia (currently known as the Faculdade de Medicina under the purview of the Universidade Federal da Bahia), and the Medico-Chirurgical School of Rio de Janeiro (the modern-day Faculdade de Medicina of the Universidade Federal do Rio de Janeiro).
Chile
Nineteenth-century elites of South American republics also used civilizing-mission rhetoric to justify armed actions against indigenous groups. On January 1, 1883, Chile refounded the old city of Villarrica, thus formally completing the occupation of the indigenous lands of Araucanía. Six months later, on June 1, President Domingo Santa María declared:
The country has with satisfaction seen the problem of the reduction of the whole Araucanía solved. This event, so important to our social and political life, and so significant for the future of the republic, has ended, happily and with costly and painful sacrifices. Today the whole Araucanía is subjugated, more than to the material forces, to the moral and civilizing force of the republic ...
Modern day
Pinkwashing, the strategy of promoting LGBT rights protections as evidence of liberalism and democracy, has been described as a continuation of the civilizing mission used to justify colonialism, this time on the basis of LGBT rights in Western countries.
Proto-Indo-European language

Proto-Indo-European (PIE) is the reconstructed common ancestor of the Indo-European language family. No direct record of Proto-Indo-European exists; its proposed features have been derived by linguistic reconstruction from documented Indo-European languages.
Far more work has gone into reconstructing PIE than any other proto-language, and it is the best understood of all proto-languages of its age. The majority of linguistic work during the 19th century was devoted to the reconstruction of PIE or its daughter languages, and many of the modern techniques of linguistic reconstruction (such as the comparative method) were developed as a result.
PIE is hypothesized to have been spoken as a single language from approximately 4500 BCE to 2500 BCE during the Late Neolithic to Early Bronze Age, though estimates vary by more than a thousand years. According to the prevailing Kurgan hypothesis, the original homeland of the Proto-Indo-Europeans may have been in the Pontic–Caspian steppe of eastern Europe. The linguistic reconstruction of PIE has provided insight into the pastoral culture and patriarchal religion of its speakers.
As speakers of Proto-Indo-European became isolated from each other through the Indo-European migrations, the regional dialects of Proto-Indo-European spoken by the various groups diverged, as each dialect underwent shifts in pronunciation (the Indo-European sound laws), morphology, and vocabulary. Over many centuries, these dialects transformed into the known ancient Indo-European languages. From there, further linguistic divergence led to the evolution of their current descendants, the modern Indo-European languages.
PIE is believed to have had an elaborate system of morphology that included inflectional suffixes (analogous to English child, child's, children, children's) as well as ablaut (vowel alternations, as preserved in English sing, sang, sung, song) and accent. PIE nominals and pronouns had a complex system of declension, and verbs similarly had a complex system of conjugation. The PIE phonology, particles, numerals, and copula are also well-reconstructed.
Asterisks are used by linguists as a conventional mark of reconstructed words, such as *wódr̥, *ḱwṓ, or *tréyes; these forms are the reconstructed ancestors of the modern English words water, hound, and three, respectively.
Development of the hypothesis
No direct evidence of PIE exists; scholars have reconstructed PIE from its present-day descendants using the comparative method. For example, compare the pairs of words in Italian and English: piede and foot, padre and father, pesce and fish. Since there is a consistent correspondence of the initial consonants (p and f) that emerges far too frequently to be coincidental, one can infer that these languages stem from a common parent language. Detailed analysis suggests a system of sound laws to describe the phonetic and phonological changes from the hypothetical ancestral words to the modern ones. These laws have become so detailed and reliable as to support the Neogrammarian rule: the Indo-European sound laws apply without exception.
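The mechanical core of this reasoning can be sketched in a few lines of Python: collect candidate cognate pairs and tally recurring sound correspondences. This is a toy illustration under stated assumptions (a hand-picked word list, initial consonants only), not a real linguistic workflow:

```python
from collections import Counter

# Hand-picked Italian/English cognate candidates from the example above.
cognate_pairs = [("piede", "foot"), ("padre", "father"), ("pesce", "fish")]

# Tally the correspondence between the initial consonants of each pair.
correspondences = Counter(
    (italian[0], english[0]) for italian, english in cognate_pairs
)

# ('p', 'f') recurs in every pair -- exactly the kind of regularity
# that a sound law must explain.
print(correspondences)  # Counter({('p', 'f'): 3})
```

A genuine application would of course control for borrowings and chance resemblances across much larger word lists.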
William Jones, an Anglo-Welsh philologist and puisne judge in Bengal, caused an academic sensation when in 1786 he postulated the common ancestry of Sanskrit, Greek, Latin, Gothic, the Celtic languages, and Old Persian, but he was not the first to state such a hypothesis. In the 16th century, European visitors to the Indian subcontinent became aware of similarities between Indo-Iranian languages and European languages, and as early as 1653, Marcus Zuerius van Boxhorn had published a proposal for a proto-language ("Scythian") for the following language families: Germanic, Romance, Greek, Baltic, Slavic, Celtic, and Iranian. In a memoir sent to the Académie des inscriptions et belles-lettres in 1767, Gaston-Laurent Cœurdoux, a French Jesuit who spent most of his life in India, had specifically demonstrated the analogy between Sanskrit and European languages. According to current academic consensus, Jones's famous work of 1786 was less accurate than his predecessors', as he erroneously included Egyptian, Japanese and Chinese in the Indo-European languages, while omitting Hindi.
In 1818, Danish linguist Rasmus Christian Rask elaborated the set of correspondences in his prize essay Undersøgelse om det gamle nordiske eller islandske sprogs oprindelse ('Investigation of the Origin of the Old Norse or Icelandic Language'), where he argued that Old Norse was related to the Germanic languages, and had even suggested a relation to the Baltic, Slavic, Greek, Latin and Romance languages. In 1816, Franz Bopp published On the System of Conjugation in Sanskrit, in which he investigated the common origin of Sanskrit, Persian, Greek, Latin, and German. In 1833, he began publishing the Comparative Grammar of Sanskrit, Zend, Greek, Latin, Lithuanian, Old Slavic, Gothic, and German.
In 1822, Jacob Grimm formulated what became known as Grimm's law as a general rule in his Deutsche Grammatik. Grimm showed correlations between the Germanic and other Indo-European languages and demonstrated that sound change systematically transforms all words of a language. From the 1870s, the Neogrammarians proposed that sound laws have no exceptions, as illustrated by Verner's law, published in 1876, which resolved apparent exceptions to Grimm's law by exploring the role of accent (stress) in language change.
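Because the Neogrammarian rule makes sound change exceptionless, a law like Grimm's can be written as a plain substitution table over segments. The sketch below is a simplification for illustration only: it ignores Verner's law, the merger of palatal and plain velars in Germanic, and later developments of the fricatives:

```python
# A minimal sketch of Grimm's law as a substitution on PIE stops.
GRIMM = {
    "p": "f", "t": "þ", "k": "h", "kʷ": "hw",      # voiceless stops -> fricatives
    "b": "p", "d": "t", "g": "k", "gʷ": "kw",      # voiced stops -> voiceless stops
    "bʰ": "b", "dʰ": "d", "gʰ": "g", "gʷʰ": "gw",  # breathy voiced -> voiced stops
}

def apply_grimm(segments):
    """Map each PIE segment to its Germanic reflex, leaving the rest unchanged."""
    return [GRIMM.get(seg, seg) for seg in segments]

# PIE *tréyes 'three': *t shifts to *þ, cf. English "three".
print(apply_grimm(["t", "r", "é", "y", "e", "s"]))  # ['þ', 'r', 'é', 'y', 'e', 's']
```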
August Schleicher's A Compendium of the Comparative Grammar of the Indo-European, Sanskrit, Greek and Latin Languages (1874–77) represented an early attempt to reconstruct the proto-Indo-European language.
By the early 1900s, Indo-Europeanists had developed well-defined descriptions of PIE which scholars still accept today. Later, the discovery of the Anatolian and Tocharian languages added to the corpus of descendant languages. A subtle new principle won wide acceptance: the laryngeal theory, which explained irregularities in the reconstruction of Proto-Indo-European phonology as the effects of hypothetical sounds that had vanished from all languages documented before the excavation of Hittite cuneiform tablets in Anatolia. This theory was first proposed by Ferdinand de Saussure in 1879 on the basis of internal reconstruction alone, and progressively won general acceptance after Jerzy Kuryłowicz's discovery of consonantal reflexes of these reconstructed sounds in Hittite.
Julius Pokorny's Indogermanisches etymologisches Wörterbuch ('Indo-European Etymological Dictionary', 1959) gave a detailed, though conservative, overview of the lexical knowledge accumulated by 1959. Jerzy Kuryłowicz's 1956 Apophonie gave a better understanding of Indo-European ablaut. From the 1960s, knowledge of Anatolian became robust enough to establish its relationship to PIE.
Historical and geographical setting
Scholars have proposed multiple hypotheses about when, where, and by whom PIE was spoken. The Kurgan hypothesis, first put forward in 1956 by Marija Gimbutas, has become the most popular. It proposes that the original speakers of PIE were the Yamnaya culture associated with the kurgans (burial mounds) on the Pontic–Caspian steppe north of the Black Sea. According to the theory, they were nomadic pastoralists who domesticated the horse, which allowed them to migrate across Europe and Asia in wagons and chariots. By the early 3rd millennium BCE, they had expanded throughout the Pontic–Caspian steppe and into eastern Europe.
Other theories include the Anatolian hypothesis, which posits that PIE spread out from Anatolia with agriculture beginning 7500–6000 BCE, the Armenian hypothesis, the Paleolithic continuity paradigm, and the indigenous Aryans theory. The last two of these theories are not regarded as credible within academia. Out of all the theories for a PIE homeland, the Kurgan and Anatolian hypotheses are the ones most widely accepted, and also the ones most debated against each other. Following the publication of several studies on ancient DNA in 2015, Colin Renfrew, the original author and proponent of the Anatolian hypothesis, has accepted the reality of migrations of populations speaking one or several Indo-European languages from the Pontic steppe towards Northwestern Europe.
Descendants
The main Indo-European language families, comprising the languages descended from Proto-Indo-European, are Anatolian, Tocharian, Italic, Celtic, Germanic, Balto-Slavic, Indo-Iranian, Armenian, Greek, and Albanian.
Commonly proposed subgroups of Indo-European languages include Italo-Celtic, Graeco-Aryan, Graeco-Armenian, Graeco-Phrygian, Daco-Thracian, and Thraco-Illyrian.
There are numerous lexical similarities between the Proto-Indo-European and Proto-Kartvelian languages due to early language contact, as well as some morphological similarities—notably the Indo-European ablaut, which is remarkably similar to the root ablaut system reconstructible for Proto-Kartvelian.
Marginally attested languages
The Lusitanian language was a marginally attested language spoken in areas near the border between present-day Portugal and Spain.
The Venetic and Liburnian languages known from the North Adriatic region are sometimes classified as Italic.
Albanian and Greek are the only surviving Indo-European descendants of a Paleo-Balkan language area, named for their occurrence in or in the vicinity of the Balkan peninsula. Most of the other languages of this area—including Illyrian, Thracian, and Dacian—do not appear to be members of any other subfamilies of PIE, but are so poorly attested that proper classification of them is not possible. Forming an exception, Phrygian is sufficiently well-attested to allow proposals of a particularly close affiliation with Greek, and a Graeco-Phrygian branch of Indo-European is becoming increasingly accepted.
Phonology
Proto-Indo-European phonology has been reconstructed in some detail. Notable features of the most widely accepted (but not uncontroversial) reconstruction include:
three series of stop consonants reconstructed as voiceless, voiced, and breathy voiced;
sonorant consonants that could be used syllabically;
three so-called laryngeal consonants, whose exact pronunciation is not well-established but which are believed to have existed in part based on their detectable effects on adjacent sounds;
the fricative *s;
a vowel system in which *e and *o were the most frequently occurring vowels. The existence of *a as a separate phoneme is debated.
Notation
Vowels
The vowels in commonly used notation are the short vowels *e and *o and the long vowels *ē and *ō.
Consonants
The corresponding consonants in commonly used notation are the voiceless stops *p, *t, *ḱ, *k, *kʷ; the voiced stops *b, *d, *ǵ, *g, *gʷ; the breathy-voiced stops *bʰ, *dʰ, *ǵʰ, *gʰ, *gʷʰ; the fricative *s; the laryngeals *h₁, *h₂, *h₃; the nasals *m, *n; the liquids *r, *l; and the semivowels *y, *w.
All sonorants (i.e. nasals, liquids and semivowels) can appear in syllabic position. The syllabic allophones of *y and *w are realized as the surface vowels *i and *u respectively.
Accent
The Proto-Indo-European accent is reconstructed today as having had variable lexical stress, which could appear on any syllable and whose position often varied among different members of a paradigm (e.g. between singular and plural of a verbal paradigm). Stressed syllables received a higher pitch; therefore it is often said that PIE had a pitch accent. The location of the stress is associated with ablaut variations, especially between full-grade vowels (*e and *o) and zero-grade (i.e. lack of a vowel), but not entirely predictable from it.
The accent is best preserved in Vedic Sanskrit and (in the case of nouns) Ancient Greek, and indirectly attested in a number of phenomena in other IE languages, such as Verner's Law in the Germanic branch. Sources for Indo-European accentuation are also the Balto-Slavic accentual system and plene spelling in Hittite cuneiform. To account for mismatches between the accent of Vedic Sanskrit and Ancient Greek, as well as a few other phenomena, a few historical linguists prefer to reconstruct PIE as a tone language where each morpheme had an inherent tone; the sequence of tones in a word then evolved, according to that hypothesis, into the placement of lexical stress in different ways in different IE branches.
Morphology
Root
Proto-Indo-European roots were affix-lacking morphemes that carried the core lexical meaning of a word and were used to derive related words (cf. the English root "-friend-", from which are derived related words such as friendship, friendly, befriend, and newly coined words such as unfriend). Proto-Indo-European was probably a fusional language, in which inflectional morphemes signaled the grammatical relationships between words. This dependence on inflectional morphemes means that roots in PIE, unlike those in English, were rarely used without affixes. A root plus a suffix formed a word stem, and a word stem plus a desinence (usually an ending, see inflectional suffixes) formed a word.
Ablaut
Many morphemes in Proto-Indo-European had short e as their inherent vowel; the Indo-European ablaut is the change of this short e to short o, long e (ē), long o (ō), or no vowel. The forms are referred to as the "ablaut grades" of the morpheme—the e-grade, o-grade, zero-grade (no vowel), etc. This variation in vowels occurred both within inflectional morphology (e.g., different grammatical forms of a noun or verb may have different vowels) and derivational morphology (e.g., a verb and an associated abstract verbal noun may have different vowels).
Categories that PIE distinguished through ablaut were often also identifiable by contrasting endings, but the loss of these endings in some later Indo-European languages has led them to use ablaut alone to identify grammatical categories, as in the Modern English words sing, sang, sung.
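Since each grade is defined by which vowel, if any, occupies the root's vowel slot, the full set of grades can be generated mechanically from a root skeleton. A minimal sketch, using the textbook root *bʰer- "carry" as an illustrative choice (it does not appear in the text above); which grade surfaces where was conditioned by morphology and accent, which this toy function does not model:

```python
def ablaut_grades(onset, coda):
    """Return the five classical ablaut grades of a root of shape C(e)C."""
    return {
        "e-grade": f"{onset}e{coda}",
        "o-grade": f"{onset}o{coda}",
        "zero-grade": f"{onset}{coda}",
        "lengthened e-grade": f"{onset}ē{coda}",
        "lengthened o-grade": f"{onset}ō{coda}",
    }

for grade, form in ablaut_grades("bʰ", "r").items():
    print(f"{grade}: *{form}-")
# e-grade: *bʰer-, o-grade: *bʰor-, zero-grade: *bʰr-, ...
```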
Noun
Proto-Indo-European nouns were probably declined for eight or nine cases:
nominative: marks the subject of a verb. Words that follow a linking verb (copulative verb) and restate the subject of that verb also use the nominative case. The nominative is the dictionary form of the noun.
accusative: used for the direct object of a transitive verb.
genitive: marks a noun as modifying another noun.
dative: used to indicate the indirect object of a transitive verb, such as Jacob in Maria gave Jacob a drink.
instrumental: marks the instrument or means by, or with, which the subject achieves or accomplishes an action. It may be either a physical object or an abstract concept.
ablative: used to express motion away from something.
locative: expresses location, corresponding vaguely to the English prepositions in, on, at, and by.
vocative: used for a word that identifies an addressee. A vocative expression is one of direct address where the identity of the party spoken to is set forth expressly within a sentence. For example, in the sentence, "I don't know, John", John is a vocative expression that indicates the party being addressed.
allative: used as a type of locative case that expresses movement towards something. It was preserved in Anatolian (particularly Old Hittite), and fossilized traces of it have been found in Greek. It is also present in Tocharian. Its PIE shape is uncertain, with candidates including *-h₂(e), *-(e)h₂, or *-a.
Late Proto-Indo-European had three grammatical genders:
masculine
feminine
neuter
This system is probably derived from an older two-gender system, attested in Anatolian languages: common (or animate) and neuter (or inanimate) gender. The feminine gender only arose in the later period of the language. Neuter nouns collapsed the nominative, vocative and accusative into a single form, the plural of which used a special collective suffix *-h₂ (manifested in most descendants as -a). This same collective suffix in the extended forms *-eh₂ and *-ih₂ (respectively on thematic and athematic nouns, becoming -ā and -ī in the early daughter languages) became used to form feminine nouns from masculines.
All nominals distinguished three numbers:
singular
dual
plural
These numbers were also distinguished in verbs (see below), requiring agreement with their subject nominal.
Pronoun
Proto-Indo-European pronouns are difficult to reconstruct, owing to their variety in later languages. PIE had personal pronouns in the first and second grammatical person, but not the third person, where demonstrative pronouns were used instead. The personal pronouns had their own unique forms and endings, and some had two distinct stems; this is most obvious in the first person singular where the two stems are still preserved in English I and me. There were also two varieties for the accusative, genitive and dative cases, a stressed and an enclitic form.
Verb
Proto-Indo-European verbs, like the nouns, exhibited an ablaut system.
The most basic categorisation for the reconstructed Indo-European verb is grammatical aspect. Verbs are classed as:
stative: verbs that depict a state of being
imperfective: verbs depicting ongoing, habitual or repeated action
perfective: verbs depicting a completed action or actions viewed as an entire process.
Verbs have at least four grammatical moods:
indicative: indicates that something is a statement of fact; in other words, to express what the speaker considers to be a known state of affairs, as in declarative sentences.
imperative: forms commands or requests, including the giving of prohibition or permission, or any other kind of advice or exhortation.
subjunctive: used to express various states of unreality such as wish, emotion, possibility, judgment, opinion, obligation, or action that has not yet occurred
optative: indicates a wish or hope. It is similar to the cohortative mood and is closely related to the subjunctive mood.
Verbs had two grammatical voices:
active: used in a clause whose subject expresses the main verb's agent.
mediopassive: for the middle voice and the passive voice.
Verbs had three grammatical persons: first, second and third.
Verbs had three grammatical numbers:
singular
dual: referring to precisely two of the entities (objects or persons) identified by the noun or pronoun.
plural: a number other than singular or dual.
Verbs were probably marked by a highly developed system of participles, one for each combination of tense and voice, and an assorted array of verbal nouns and adjectival formations.
The PIE verb endings reconstructed by Sihler largely represent the current consensus among Indo-Europeanists. In the active voice, for example, athematic verbs are reconstructed with the primary (present) singular endings *-mi, *-si, *-ti and third-person plural *-nti, beside the secondary (past) endings *-m, *-s, *-t and *-nt; thematic verbs show first-person singular primary *-oh₂.
Numbers
Proto-Indo-European numerals are generally reconstructed as follows: *óynos/*sem "one", *dwóh₁ "two", *tréyes "three", *kʷetwóres "four", *pénkʷe "five", *swéḱs "six", *septḿ̥ "seven", *oḱtṓw "eight", *h₁néwn̥ "nine", and *déḱm̥ "ten".
Rather than specifically 100, *ḱm̥tóm "hundred" may originally have meant "a large number".
Particle
Proto-Indo-European particles were probably used both as adverbs and as postpositions. These postpositions became prepositions in most daughter languages.
Reconstructed particles include, for example, *upó "under, below"; the negators *ne and *mē; the conjunctions *kʷe "and" and *wē "or", among others; and an interjection, *wai, expressing woe or agony.
Derivational morphology
Proto-Indo-European employed various means of deriving words from other words, or directly from verb roots.
Internal derivation
Internal derivation was a process that derived new words through changes in accent and ablaut alone. It was not as productive as external (affixing) derivation, but is firmly established by the evidence of various later languages.
Possessive adjectives
Possessive or associated adjectives were probably created from nouns through internal derivation. Such words could be used directly as adjectives, or they could be turned back into a noun without any change in morphology, indicating someone or something characterised by the adjective. They were probably also used as the second elements in compounds. If the first element was a noun, this created an adjective that resembled a present participle in meaning, e.g. "having much rice" or "cutting trees". When turned back into nouns, such compounds were Bahuvrihis or semantically resembled agent nouns.
In thematic stems, creating a possessive adjective seems to have involved shifting the accent one syllable to the right, for example:
*tómh₁-o-s "slice" (Greek tómos) > *tomh₁-ó-s "cutting" (i.e. "making slices"; Greek tomós) > *dr-u-tomh₁-ó-s "cutting trees" (Greek drutómos "woodcutter" with irregular accent).
*wólh₁-o-s "wish" (Sanskrit vára-) > *wolh₁-ó-s "having wishes" (Sanskrit vará- "suitor").
In athematic stems, there was a change in the accent/ablaut class. The reconstructed four classes followed an ordering in which a derivation would shift the class one to the right:
acrostatic → proterokinetic → hysterokinetic → amphikinetic
The reason for this particular ordering of the classes in derivation is not known. Some examples:
Acrostatic *krót-u-s ~ *krét-u-s "strength" (Sanskrit krátu-) > proterokinetic *krét-u-s ~ *kr̥t-éw-s "having strength, strong" (Greek kratús).
Hysterokinetic *ph₂-tḗr ~ *ph₂-tr-és "father" (Greek patḗr) > amphikinetic *h₁su-péh₂-tōr ~ *h₁su-ph₂-tr-és "having a good father" (Greek εὐπάτωρ, eupátōr).
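The one-step-to-the-right behaviour of these derivations can be written down as a small rule. The following Python sketch is our own illustration of the ordering as described above; the function name and error handling are invented for the example:

```python
# The four athematic accent/ablaut classes in their attested derivational order.
CLASSES = ["acrostatic", "proterokinetic", "hysterokinetic", "amphikinetic"]

def derive_possessive_class(base_class: str) -> str:
    """Internal derivation shifts the accent/ablaut class one step right."""
    i = CLASSES.index(base_class)
    if i + 1 >= len(CLASSES):
        raise ValueError("no class lies to the right of amphikinetic")
    return CLASSES[i + 1]

# Matches the examples in the text:
print(derive_possessive_class("acrostatic"))      # proterokinetic (*krétus type)
print(derive_possessive_class("hysterokinetic"))  # amphikinetic (*ph₂tḗr type)
```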
Vrddhi
A vrddhi derivation, named after the Sanskrit grammatical term, produced derivatives signifying "of, belonging to, descended from". It was characterised by "upgrading" the root grade, from zero to full (e) or from full to lengthened (ē). When upgrading from zero to full grade, the vowel could sometimes be inserted in the "wrong" place, creating a different stem from the original full grade.
Examples:
full grade *swéḱuro-s "father-in-law" (Vedic Sanskrit śváśura-) > lengthened grade *swēḱuró-s "relating to one's father-in-law" (Vedic śvāśura-, Old High German swāgur "brother-in-law").
full grade *dyḗw-s > zero grade *diw-és "sky" > new full grade *deyw-o-s "god, sky god" (Vedic devá-, Latin deus, etc.). Note the difference in vowel placement: *dyew- in the full-grade stem of the original noun, but *deyw- in the vrddhi derivative.
Nominalization
Adjectives with accent on the thematic vowel could be turned into nouns by moving the accent back onto the root. A zero grade root could remain so, or be "upgraded" to full grade like in a vrddhi derivative. Some examples:
PIE *ǵn̥h₁-tó-s "born" (Vedic jātá-) > *ǵénh₁-to- "thing that is born" (German Kind).
Greek leukós "white" > leũkos "a kind of fish", literally "white one".
Vedic kṛṣṇá- "dark" > kṛ́ṣṇa- "dark one", also "antelope".
This kind of derivation is likely related to the possessive adjectives, and can be seen as essentially the reverse of it.
Affixal derivation
Syntax
The syntax of the older Indo-European languages has been studied in earnest since at least the late nineteenth century, by such scholars as Hermann Hirt and Berthold Delbrück. In the second half of the twentieth century, interest in the topic increased and led to reconstructions of Proto-Indo-European syntax.
Since all the early attested IE languages were inflectional, PIE is thought to have relied primarily on morphological markers, rather than word order, to signal syntactic relationships within sentences. Still, a default (unmarked) word order is thought to have existed in PIE. In 1892, Jacob Wackernagel reconstructed PIE's word order as subject–verb–object (SVO), based on evidence in Vedic Sanskrit.
Winfred P. Lehmann (1974), on the other hand, reconstructs PIE as a subject–object–verb (SOV) language. He posits that the presence of person marking in PIE verbs motivated a shift from OV to VO order in later dialects. Many of the descendant languages have VO order: modern Greek, Romance and Albanian prefer SVO, Insular Celtic has VSO as the default order, and even the Anatolian languages show some signs of this word order shift. Tocharian and Indo-Iranian, meanwhile, retained the conservative OV order. Lehmann attributes the context-dependent order preferences in Baltic, Slavic and Germanic to outside influences. Donald Ringe (2006), however, attributes these to internal developments instead.
Paul Friedrich (1975) disagrees with Lehmann's analysis. He reconstructs PIE with the following syntax:
basic SVO word order
adjectives before nouns
head nouns before genitives
prepositions rather than postpositions
no dominant order in comparative constructions
main clauses before relative clauses
Friedrich notes that even among those Indo-European languages with basic OV word order, none of them are rigidly OV. He also notes that these non-rigid OV languages mainly occur in parts of the IE area that overlap with OV languages from other families (such as Uralic and Dravidian), whereas VO is predominant in the central parts of the IE area. For these reasons, among others, he argues for a VO common ancestor.
Hans Henrich Hock (2015) reports that the SVO hypothesis still has some adherents, but the "broad consensus" among PIE scholars is that PIE would have been an SOV language. The SOV default word order with other orders used to express emphasis (e.g., verb–subject–object to emphasise the verb) is attested in Old Indo-Aryan, Old Iranian, Old Latin and Hittite, while traces of it can be found in the enclitic personal pronouns of the Tocharian languages.
See also
Indo-European vocabulary
Proto-Indo-European verbs
Proto-Indo-European pronouns
List of Indo-European languages
Indo-European sound laws
List of proto-languages
Notes
References
Bibliography
External links
At the University of Texas Linguistic Research Center: List of online books, Indo-European Lexicon
Proto-Indo-European Lexicon at the University of Helsinki, Department of Modern Languages, Department of World Cultures, Indo-European Studies
Indo-European Lexical Cognacy Database
glottothèque – Ancient Indo-European Grammars online, an online collection of video lectures on Ancient Indo-European languages
Bronze Age
Indo-European languages
Language
Indo-European
Ice age
An ice age is a long period of reduction in the temperature of Earth's surface and atmosphere, resulting in the presence or expansion of continental and polar ice sheets and alpine glaciers. Earth's climate alternates between ice ages and greenhouse periods, during which there are no glaciers on the planet. Earth is currently in the ice age called the Quaternary glaciation. Individual pulses of cold climate within an ice age are termed glacial periods (glacials, glaciations, glacial stages, stadials, stades, or colloquially, ice ages), and intermittent warm periods within an ice age are called interglacials or interstadials.
In glaciology, the term ice age is defined by the presence of extensive ice sheets in the northern and southern hemispheres. By this definition, the current Holocene period is an interglacial period of an ice age. The accumulation of anthropogenic greenhouse gases is projected to delay the next glacial period.
History of research
In 1742, Pierre Martel (1706–1767), an engineer and geographer living in Geneva, visited the valley of Chamonix in the Alps of Savoy. Two years later he published an account of his journey. He reported that the inhabitants of that valley attributed the dispersal of erratic boulders to the glaciers, saying that they had once extended much farther. Later similar explanations were reported from other regions of the Alps. In 1815 the carpenter and chamois hunter Jean-Pierre Perraudin (1767–1858) explained erratic boulders in the Val de Bagnes in the Swiss canton of Valais as being due to glaciers previously extending further. An unknown woodcutter from Meiringen in the Bernese Oberland advocated a similar idea in a discussion with the Swiss-German geologist Jean de Charpentier (1786–1855) in 1834. Comparable explanations are also known from the Val de Ferret in the Valais and the Seeland in western Switzerland and in Goethe's scientific work. Such explanations could also be found in other parts of the world. When the Bavarian naturalist Ernst von Bibra (1806–1878) visited the Chilean Andes in 1849–1850, the natives attributed fossil moraines to the former action of glaciers.
Meanwhile, European scholars had begun to wonder what had caused the dispersal of erratic material. From the middle of the 18th century, some discussed ice as a means of transport. The Swedish mining expert Daniel Tilas (1712–1772) was, in 1742, the first person to suggest drifting sea ice was a cause of the presence of erratic boulders in the Scandinavian and Baltic regions. In 1795, the Scottish philosopher and gentleman naturalist, James Hutton (1726–1797), explained erratic boulders in the Alps by the action of glaciers. Two decades later, in 1818, the Swedish botanist Göran Wahlenberg (1780–1851) published his theory of a glaciation of the Scandinavian peninsula. He regarded glaciation as a regional phenomenon.
Only a few years later, the Danish-Norwegian geologist Jens Esmark (1762–1839) argued for a sequence of worldwide ice ages. In a paper published in 1824, Esmark proposed changes in climate as the cause of those glaciations. He attempted to show that they originated from changes in Earth's orbit. Esmark discovered the similarity between moraines near Haukalivatnet lake near sea level in Rogaland and moraines at branches of Jostedalsbreen. Esmark's discoveries were later attributed to or appropriated by Theodor Kjerulf and Louis Agassiz.
During the following years, Esmark's ideas were discussed and taken over in parts by Swedish, Scottish and German scientists. At the University of Edinburgh Robert Jameson (1774–1854) seemed to be relatively open to Esmark's ideas, as reviewed by Norwegian professor of glaciology Bjørn G. Andersen (1992). Jameson's remarks about ancient glaciers in Scotland were most probably prompted by Esmark. In Germany, Albrecht Reinhard Bernhardi (1797–1849), a geologist and professor of forestry at an academy in Dreissigacker (since incorporated in the southern Thuringian city of Meiningen), adopted Esmark's theory. In a paper published in 1832, Bernhardi speculated about the polar ice caps once reaching as far as the temperate zones of the globe.
In Val de Bagnes, a valley in the Swiss Alps, there was a long-held local belief that the valley had once been covered deep in ice, and in 1815 a local chamois hunter called Jean-Pierre Perraudin attempted to convert the geologist Jean de Charpentier to the idea, pointing to deep striations in the rocks and giant erratic boulders as evidence. Charpentier held the general view that these signs were caused by vast floods, and he rejected Perraudin's theory as absurd. In 1818 the engineer Ignatz Venetz joined Perraudin and Charpentier to examine a proglacial lake above the valley created by an ice dam as a result of the 1815 eruption of Mount Tambora, which threatened to cause a catastrophic flood when the dam broke. Perraudin attempted unsuccessfully to convert his companions to his theory, but when the dam finally broke, there were only minor erratics and no striations, and Venetz concluded that Perraudin was right and that only ice could have caused such major results. In 1821 he read a prize-winning paper on the theory to the Swiss Society, but it was not published until Charpentier, who had also become converted, published it with his own more widely read paper in 1834.
In the meantime, the German botanist Karl Friedrich Schimper (1803–1867) was studying mosses which were growing on erratic boulders in the alpine upland of Bavaria. He began to wonder where such masses of stone had come from. During the summer of 1835 he made some excursions to the Bavarian Alps. Schimper came to the conclusion that ice must have been the means of transport for the boulders in the alpine upland. In the winter of 1835–36 he held some lectures in Munich. Schimper then assumed that there must have been global times of obliteration ("Verödungszeiten") with a cold climate and frozen water. Schimper spent the summer months of 1836 at Devens, near Bex, in the Swiss Alps with his former university friend Louis Agassiz (1801–1873) and Jean de Charpentier. Schimper, Charpentier and possibly Venetz convinced Agassiz that there had been a time of glaciation. During the winter of 1836–37, Agassiz and Schimper developed the theory of a sequence of glaciations. They mainly drew upon the preceding works of Venetz, Charpentier and on their own fieldwork. Agassiz appears to have been already familiar with Bernhardi's paper at that time. At the beginning of 1837, Schimper coined the term "ice age" ("Eiszeit") for the period of the glaciers. In July 1837 Agassiz presented their synthesis before the annual meeting of the Swiss Society for Natural Research at Neuchâtel. The audience was very critical, and some were opposed to the new theory because it contradicted the established opinions on climatic history. Most contemporary scientists thought that Earth had been gradually cooling down since its birth as a molten globe.
In order to persuade the skeptics, Agassiz embarked on geological fieldwork. He published his book Study on Glaciers ("Études sur les glaciers") in 1840. Charpentier was put out by this, as he had also been preparing a book about the glaciation of the Alps. Charpentier felt that Agassiz should have given him precedence as it was he who had introduced Agassiz to in-depth glacial research. As a result of personal quarrels, Agassiz had also omitted any mention of Schimper in his book.
It took several decades before the ice age theory was fully accepted by scientists. This happened on an international scale in the second half of the 1870s, following the work of James Croll, including the publication of Climate and Time, in Their Geological Relations in 1875, which provided a credible explanation for the causes of ice ages.
Evidence
There are three main types of evidence for ice ages: geological, chemical, and paleontological.
Geological evidence for ice ages comes in various forms, including rock scouring and scratching, glacial moraines, drumlins, valley cutting, and the deposition of till or tillites and glacial erratics. Successive glaciations tend to distort and erase the geological evidence for earlier glaciations, making it difficult to interpret. Furthermore, this evidence was difficult to date exactly; early theories assumed that the glacials were short compared to the long interglacials. The advent of sediment and ice cores revealed the true situation: glacials are long, interglacials short. It took some time for the current theory to be worked out.
The chemical evidence mainly consists of variations in the ratios of isotopes in fossils present in sediments and sedimentary rocks and ocean sediment cores. For the most recent glacial periods, ice cores provide climate proxies, both from the ice itself and from atmospheric samples provided by included bubbles of air. Because water containing lighter isotopes has a lower heat of evaporation, its proportion decreases with warmer conditions. This allows a temperature record to be constructed. This evidence can be confounded, however, by other factors recorded by isotope ratios.
The paleontological evidence consists of changes in the geographical distribution of fossils. During a glacial period, cold-adapted organisms spread into lower latitudes, and organisms that prefer warmer conditions become extinct or retreat into lower latitudes. This evidence is also difficult to interpret because it requires:
sequences of sediments covering a long period of time, over a wide range of latitudes and which are easily correlated;
ancient organisms which survive for several million years without change and whose temperature preferences are easily diagnosed; and
the finding of the relevant fossils.
Despite the difficulties, analysis of ice core and ocean sediment cores has provided a credible record of glacials and interglacials over the past few million years. These also confirm the linkage between ice ages and continental crust phenomena such as glacial moraines, drumlins, and glacial erratics. Hence the continental crust phenomena are accepted as good evidence of earlier ice ages when they are found in layers created much earlier than the time range for which ice cores and ocean sediment cores are available.
Major ice ages
There have been at least five major ice ages in Earth's history (the Huronian, Cryogenian, Andean-Saharan, late Paleozoic, and the latest Quaternary Ice Age). Outside these ages, Earth was previously thought to have been ice-free even in high latitudes; such periods are known as greenhouse periods. However, other studies dispute this, finding evidence of occasional glaciations at high latitudes even during apparent greenhouse periods.
Rocks from the earliest well-established ice age, called the Huronian, have been dated to around 2.4 to 2.1 billion years ago during the early Proterozoic Eon. The Huronian Supergroup is exposed over several hundred kilometers north of the north shore of Lake Huron, extending from near Sault Ste. Marie to Sudbury, northeast of Lake Huron, with giant layers of now-lithified till beds, dropstones, varves, outwash, and scoured basement rocks. Correlative Huronian deposits have been found near Marquette, Michigan, and correlation has been made with Paleoproterozoic glacial deposits from Western Australia. The Huronian ice age was caused by the elimination of atmospheric methane, a greenhouse gas, during the Great Oxygenation Event.
The next well-documented ice age, and probably the most severe of the last billion years, occurred from 720 to 630 million years ago (the Cryogenian period) and may have produced a Snowball Earth in which glacial ice sheets reached the equator, possibly being ended by the accumulation of greenhouse gases such as CO2 produced by volcanoes. "The presence of ice on the continents and pack ice on the oceans would inhibit both silicate weathering and photosynthesis, which are the two major sinks for CO2 at present." It has been suggested that the end of this ice age was responsible for the subsequent Ediacaran and Cambrian explosion, though this model is recent and controversial.
The Andean-Saharan occurred from 460 to 420 million years ago, during the Late Ordovician and the Silurian period.
The evolution of land plants at the onset of the Devonian period caused a long-term increase in planetary oxygen levels and a reduction of CO2 levels, which resulted in the late Paleozoic icehouse. Its former name, the Karoo glaciation, was named after the glacial tills found in the Karoo region of South Africa. There were extensive polar ice caps at intervals from 360 to 260 million years ago in South Africa during the Carboniferous and early Permian periods. Correlatives are known from Argentina, also in the center of the ancient supercontinent Gondwanaland.
Although the Mesozoic Era retained a greenhouse climate over its timespan and was previously assumed to have been entirely glaciation-free, more recent studies suggest that brief periods of glaciation occurred in both hemispheres during the Early Cretaceous. Geologic and palaeoclimatological records suggest the existence of glacial periods during the Valanginian, Hauterivian, and Aptian stages of the Early Cretaceous. Ice-rafted glacial dropstones indicate that in the Northern Hemisphere, ice sheets may have extended as far south as the Iberian Peninsula during the Hauterivian and Aptian. Although ice sheets largely disappeared from Earth for the rest of the period (potential reports from the Turonian, otherwise the warmest period of the Phanerozoic, are disputed), ice sheets and associated sea ice appear to have briefly returned to Antarctica near the very end of the Maastrichtian just prior to the Cretaceous-Paleogene extinction event.
The Quaternary Glaciation / Quaternary Ice Age started about 2.58 million years ago at the beginning of the Quaternary Period when the spread of ice sheets in the Northern Hemisphere began. Since then, the world has seen cycles of glaciation with ice sheets advancing and retreating on 40,000- and 100,000-year time scales called glacial periods, glacials or glacial advances, and interglacial periods, interglacials or glacial retreats. Earth is currently in an interglacial, and the last glacial period ended about 11,700 years ago. All that remains of the continental ice sheets are the Greenland and Antarctic ice sheets and smaller glaciers such as on Baffin Island.
The definition of the Quaternary as beginning 2.58 Ma is based on the formation of the Arctic ice cap. The Antarctic ice sheet began to form earlier, at about 34 Ma, in the mid-Cenozoic (Eocene-Oligocene Boundary). The term Late Cenozoic Ice Age is used to include this early phase.
Ice ages can be further divided by location and time; for example, the names Riss (180,000–130,000 years bp) and Würm (70,000–10,000 years bp) refer specifically to glaciation in the Alpine region. The maximum extent of the ice is not maintained for the full interval. The scouring action of each glaciation tends to remove most of the evidence of prior ice sheets almost completely, except in regions where the later sheet does not achieve full coverage.
Glacials and interglacials
Within the current glaciation, more temperate and more severe periods have occurred. The colder periods are called glacial periods, the warmer periods interglacials, such as the Eemian Stage. There is evidence that similar glacial cycles occurred in previous glaciations, including the Andean-Saharan and the late Paleozoic ice house. The glacial cycles of the late Paleozoic ice house are likely responsible for the deposition of cyclothems.
Glacials are characterized by cooler and drier climates over most of Earth and large land and sea ice masses extending outward from the poles. Mountain glaciers in otherwise unglaciated areas extend to lower elevations due to a lower snow line. Sea levels drop due to the removal of large volumes of water above sea level in the icecaps. There is evidence that ocean circulation patterns are disrupted by glaciations. The glacials and interglacials coincide with changes in orbital forcing of climate due to Milankovitch cycles, which are periodic changes in Earth's orbit and the tilt of Earth's rotational axis.
Earth has been in an interglacial period known as the Holocene for around 11,700 years, and an article in Nature in 2004 argues that it might be most analogous to a previous interglacial that lasted 28,000 years. Predicted changes in orbital forcing suggest that the next glacial period would begin at least 50,000 years from now. Moreover, anthropogenic forcing from increased greenhouse gases is estimated to potentially outweigh the orbital forcing of the Milankovitch cycles for hundreds of thousands of years.
Feedback processes
Each glacial period is subject to positive feedback which makes it more severe, and negative feedback which mitigates and (in all cases so far) eventually ends it.
Positive
An important form of feedback is provided by Earth's albedo, which is how much of the sun's energy is reflected rather than absorbed by Earth. Ice and snow increase Earth's albedo, while forests reduce its albedo. When the air temperature decreases, ice and snow fields grow, and they reduce forest cover. This continues until competition with a negative feedback mechanism forces the system to an equilibrium.
One theory is that when glaciers form, two things happen: the ice grinds rocks into dust, and the land becomes dry and arid. This allows winds to transport iron-rich dust into the open ocean, where it acts as a fertilizer that causes massive algal blooms that pull large amounts of CO2 out of the atmosphere. This in turn makes it even colder and causes the glaciers to grow more.
In 1956, Ewing and Donn hypothesized that an ice-free Arctic Ocean leads to increased snowfall at high latitudes. When low-temperature ice covers the Arctic Ocean there is little evaporation or sublimation and the polar regions are quite dry in terms of precipitation, comparable to the amount found in mid-latitude deserts. This low precipitation allows high-latitude snowfalls to melt during the summer. An ice-free Arctic Ocean absorbs solar radiation during the long summer days, and evaporates more water into the Arctic atmosphere. With higher precipitation, portions of this snow may not melt during the summer and so glacial ice can form at lower altitudes and more southerly latitudes, reducing the temperatures over land by increased albedo as noted above. Furthermore, under this hypothesis the lack of oceanic pack ice allows increased exchange of waters between the Arctic and the North Atlantic Oceans, warming the Arctic and cooling the North Atlantic. (Current projected consequences of global warming include a brief ice-free Arctic Ocean period by 2050.) Additional fresh water flowing into the North Atlantic during a warming cycle may also reduce the global ocean water circulation. Such a reduction (by reducing the effects of the Gulf Stream) would have a cooling effect on northern Europe, which in turn would lead to increased low-latitude snow retention during the summer. It has also been suggested that during an extensive glacial, glaciers may move through the Gulf of Saint Lawrence, extending into the North Atlantic Ocean far enough to block the Gulf Stream.
Negative
Ice sheets that form during glaciations erode the land beneath them. This can reduce the land area above sea level and thus diminish the amount of space on which ice sheets can form. This mitigates the albedo feedback, as does the rise in sea level that accompanies the reduced area of ice sheets, since open ocean has a lower albedo than land.
Another negative feedback mechanism is the increased aridity occurring with glacial maxima, which reduces the precipitation available to maintain glaciation. The glacial retreat induced by this or any other process can be amplified by similar inverse positive feedbacks as for glacial advances.
According to research published in Nature Geoscience, human emissions of carbon dioxide (CO2) will defer the next glacial period. Researchers used data on Earth's orbit to find the historical warm interglacial period that looks most like the current one, and from this predicted that the next glacial period would ordinarily begin within about 1,500 years. They conclude, however, that emissions have been so high that it will not.
Causes
The causes of ice ages are not fully understood for either the large-scale ice age periods or the smaller ebb and flow of glacial–interglacial periods within an ice age. The consensus is that several factors are important: atmospheric composition, such as the concentrations of carbon dioxide and methane (the specific levels of these gases can now be measured in ice core samples from the European Project for Ice Coring in Antarctica (EPICA) Dome C, covering the past 800,000 years); changes in Earth's orbit around the Sun known as Milankovitch cycles; the motion of tectonic plates, resulting in changes in the relative location and amount of continental and oceanic crust on Earth's surface, which affect wind and ocean currents; variations in solar output; the orbital dynamics of the Earth–Moon system; and the impact of relatively large meteorites and volcanism, including eruptions of supervolcanoes.
Some of these factors influence each other. For example, changes in Earth's atmospheric composition (especially the concentrations of greenhouse gases) may alter the climate, while climate change itself can change the atmospheric composition (for example by changing the rate at which weathering removes CO2).
Maureen Raymo, William Ruddiman and others propose that the Tibetan and Colorado Plateaus are immense "scrubbers" with a capacity to remove enough CO2 from the global atmosphere to be a significant causal factor of the 40-million-year Cenozoic cooling trend. They further claim that approximately half of their uplift (and "scrubbing" capacity) occurred in the past 10 million years.
Changes in Earth's atmosphere
There is evidence that greenhouse gas levels fell at the start of ice ages and rose during the retreat of the ice sheets, but it is difficult to establish cause and effect (see the notes above on the role of weathering). Greenhouse gas levels may also have been affected by other factors which have been proposed as causes of ice ages, such as the movement of continents and volcanism.
The Snowball Earth hypothesis maintains that the severe freezing in the late Proterozoic was ended by an increase in CO2 levels in the atmosphere, mainly from volcanoes, and some supporters of Snowball Earth argue that it was caused in the first place by a reduction in atmospheric CO2. The hypothesis also warns of future Snowball Earths.
In 2009, further evidence was provided that changes in solar insolation provide the initial trigger for Earth to warm after an Ice Age, with secondary factors like increases in greenhouse gases accounting for the magnitude of the change.
Position of the continents
The geological record appears to show that ice ages start when the continents are in positions which block or reduce the flow of warm water from the equator to the poles and thus allow ice sheets to form. The ice sheets increase Earth's reflectivity and thus reduce the absorption of solar radiation. With less radiation absorbed the atmosphere cools; the cooling allows the ice sheets to grow, which further increases reflectivity in a positive feedback loop. The ice age continues until the reduction in weathering causes an increase in the greenhouse effect.
There are three main contributors from the layout of the continents that obstruct the movement of warm water to the poles:
A continent sits on top of a pole, as Antarctica does today.
A polar sea is almost land-locked, as the Arctic Ocean is today.
A supercontinent covers most of the equator, as Rodinia did during the Cryogenian period.
Since today's Earth has a continent over the South Pole and an almost land-locked ocean over the North Pole, geologists believe that Earth will continue to experience glacial periods in the geologically near future.
Some scientists believe that the Himalayas are a major factor in the current ice age, because these mountains have increased Earth's total rainfall and therefore the rate at which carbon dioxide is washed out of the atmosphere, decreasing the greenhouse effect. The Himalayas' formation started about 70 million years ago when the Indo-Australian Plate collided with the Eurasian Plate, and the Himalayas are still rising by about 5 mm per year because the Indo-Australian plate is still moving at 67 mm/year. The history of the Himalayas broadly fits the long-term decrease in Earth's average temperature since the mid-Eocene, 40 million years ago.
Fluctuations in ocean currents
Another important contribution to ancient climate regimes is the variation of ocean currents, which are modified by continent position, sea levels and salinity, as well as other factors. They have the ability to cool (e.g. aiding the creation of Antarctic ice) and the ability to warm (e.g. giving the British Isles a temperate as opposed to a boreal climate). The closing of the Isthmus of Panama about 3 million years ago may have ushered in the present period of strong glaciation over North America by ending the exchange of water between the tropical Atlantic and Pacific Oceans.
Analyses suggest that ocean current fluctuations can adequately account for recent glacial oscillations. During the last glacial period the sea-level fluctuated 20–30 m as water was sequestered, primarily in the Northern Hemisphere ice sheets. When ice collected and the sea level dropped sufficiently, flow through the Bering Strait (the narrow strait between Siberia and Alaska is about 50 m deep today) was reduced, resulting in increased flow from the North Atlantic. This realigned the thermohaline circulation in the Atlantic, increasing heat transport into the Arctic, which melted the polar ice accumulation and reduced other continental ice sheets. The release of water raised sea levels again, restoring the ingress of colder water from the Pacific with an accompanying shift to northern hemisphere ice accumulation.
According to a study published in Nature in 2021, all glacial periods of ice ages over the last 1.5 million years were associated with northward shifts of melting Antarctic icebergs which changed ocean circulation patterns, leading to more CO2 being pulled out of the atmosphere. The authors suggest that this process may be disrupted in the future as the Southern Ocean will become too warm for the icebergs to travel far enough to trigger these changes.
Uplift of the Tibetan plateau
Matthias Kuhle's geological theory of Ice Age development was suggested by the existence of an ice sheet covering the Tibetan Plateau during the Ice Ages (Last Glacial Maximum?). According to Kuhle, the plate-tectonic uplift of Tibet past the snow-line has led to a surface of c. 2,400,000 square kilometres (930,000 sq mi) changing from bare land to ice with a 70% greater albedo. The reflection of energy into space resulted in a global cooling, triggering the Pleistocene Ice Age. Because this highland is at a subtropical latitude, with four to five times the insolation of high-latitude areas, what would be Earth's strongest heating surface has turned into a cooling surface.
Kuhle explains the interglacial periods by the 100,000-year cycle of radiation changes due to variations in Earth's orbit. This comparatively insignificant warming, when combined with the lowering of the Nordic inland ice areas and Tibet due to the weight of the superimposed ice-load, has led to the repeated complete thawing of the inland ice areas.
Variations in Earth's orbit
The Milankovitch cycles are a set of cyclic variations in characteristics of Earth's orbit around the Sun. Each cycle has a different length, so at some times their effects reinforce each other and at other times they (partially) cancel each other.
There is strong evidence that the Milankovitch cycles affect the occurrence of glacial and interglacial periods within an ice age. The present ice age is the most studied and best understood, particularly the last 400,000 years, since this is the period covered by ice cores that record atmospheric composition and proxies for temperature and ice volume. Within this period, the match of glacial/interglacial frequencies to the Milankovitch orbital forcing periods is so close that orbital forcing is generally accepted. The combined effects of the changing distance to the Sun, the precession of Earth's axis, and the changing tilt of Earth's axis redistribute the sunlight received by Earth. Of particular importance are changes in the tilt of Earth's axis, which affect the intensity of seasons. For example, the amount of solar influx in July at 65 degrees north latitude varies by as much as 22% (from 450 W/m2 to 550 W/m2). It is widely believed that ice sheets advance when summers become too cool to melt all of the accumulated snowfall from the previous winter. Some believe that the strength of the orbital forcing is too small to trigger glaciations, but feedback mechanisms like CO2 may explain this mismatch.
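To illustrate how cycles of different lengths sometimes reinforce and sometimes partially cancel, and to check the quoted insolation range, here is a minimal Python sketch. It is our own toy construction using equal-weight unit sinusoids with the cycle periods named in this article; it is not a climate model:

```python
import math

# Approximate Milankovitch periods mentioned in the text (years).
PERIODS = {"eccentricity": 100_000, "obliquity": 41_000, "precession": 26_000}

def toy_forcing(t_years: float) -> float:
    """Equal-weight sum of sinusoids; only the interference pattern matters."""
    return sum(math.sin(2 * math.pi * t_years / p) for p in PERIODS.values())

for t in range(0, 200_001, 25_000):
    print(f"{t:>7} yr: {toy_forcing(t):+.2f}")  # swings as cycles align or cancel

# Sanity check of the quoted July insolation range at 65 degrees north:
low, high = 450.0, 550.0                       # W/m2, figures from the text
print(f"variation: {(high - low) / low:.0%}")  # ~22%
```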
While Milankovitch forcing predicts that cyclic changes in Earth's orbital elements can be expressed in the glaciation record, additional explanations are necessary to explain which cycles are observed to be most important in the timing of glacial–interglacial periods. In particular, during the last 800,000 years, the dominant period of glacial–interglacial oscillation has been 100,000 years, which corresponds to changes in Earth's orbital eccentricity and orbital inclination. Yet this is by far the weakest of the three frequencies predicted by Milankovitch. During the period 3.0–0.8 million years ago, the dominant pattern of glaciation corresponded to the 41,000-year period of changes in Earth's obliquity (tilt of the axis). The reasons for dominance of one frequency versus another are poorly understood and an active area of current research, but the answer probably relates to some form of resonance in Earth's climate system. Recent work suggests that the 100,000-year cycle dominates because increased southern-pole sea ice increases total solar reflectivity.
The "traditional" Milankovitch explanation struggles to explain the dominance of the 100,000-year cycle over the last 8 cycles. Richard A. Muller, Gordon J. F. MacDonald, and others have pointed out that those calculations are for a two-dimensional orbit of Earth but the three-dimensional orbit also has a 100,000-year cycle of orbital inclination. They proposed that these variations in orbital inclination lead to variations in insolation, as Earth moves in and out of known dust bands in the solar system. Although this is a different mechanism to the traditional view, the "predicted" periods over the last 400,000 years are nearly the same. The Muller and MacDonald theory, in turn, has been challenged by Jose Antonio Rial.
William Ruddiman has suggested a model that explains the 100,000-year cycle by the modulating effect of eccentricity (weak 100,000-year cycle) on precession (26,000-year cycle) combined with greenhouse gas feedbacks in the 41,000- and 26,000-year cycles. Yet another theory has been advanced by Peter Huybers, who argued that the 41,000-year cycle has always been dominant, but that Earth has entered a mode of climate behavior where only the second or third cycle triggers an ice age. This would imply that the 100,000-year periodicity is really an illusion created by averaging together cycles lasting 80,000 and 120,000 years. This theory is consistent with a simple empirical multi-state model proposed by Didier Paillard. Paillard suggests that the late Pleistocene glacial cycles can be seen as jumps between three quasi-stable climate states. The jumps are induced by the orbital forcing, while in the early Pleistocene the 41,000-year glacial cycles resulted from jumps between only two climate states. A dynamical model explaining this behavior was proposed by Peter Ditlevsen. This supports the suggestion that the late Pleistocene glacial cycles are not due to the weak 100,000-year eccentricity cycle, but are a non-linear response to mainly the 41,000-year obliquity cycle.
Variations in the Sun's energy output
There are at least two types of variation in the Sun's energy output:
In the very long term, astrophysicists believe that the Sun's output increases by about 7% every one billion years.
Shorter-term variations such as sunspot cycles, and longer episodes such as the Maunder Minimum, which occurred during the coldest part of the Little Ice Age.
The long-term increase in the Sun's output cannot be a cause of ice ages.
Volcanism
Volcanic eruptions may have contributed to the inception and/or the end of ice age periods. At times during the paleoclimate, carbon dioxide levels were two or three times greater than today. Volcanoes and movements in continental plates contributed to high amounts of CO2 in the atmosphere. Carbon dioxide from volcanoes probably contributed to periods with highest overall temperatures. One suggested explanation of the Paleocene–Eocene Thermal Maximum is that undersea volcanoes released methane from clathrates and thus caused a large and rapid increase in the greenhouse effect. There appears to be no geological evidence for such eruptions at the right time, but this does not prove they did not happen.
Recent glacial and interglacial phases
The current geological period, the Quaternary, which began about 2.6 million years ago and extends into the present, is marked by warm and cold episodes, cold phases called glacials (Quaternary ice age) lasting about 100,000 years, and warm phases called interglacials lasting 10,000–15,000 years. The last cold episode of the Last Glacial Period ended about 10,000 years ago. Earth is currently in an interglacial period of the Quaternary, called the Holocene.
Glacial stages in North America
The major glacial stages of the current ice age in North America are the Illinoian, Eemian, and Wisconsin glaciations. The use of the Nebraskan, Afton, Kansan, and Yarmouthian stages to subdivide the ice age in North America has been discontinued by Quaternary geologists and geomorphologists; these stages were all merged into the Pre-Illinoian in the 1980s.
During the most recent North American glaciation, during the latter part of the Last Glacial Maximum (26,000 to 13,300 years ago), ice sheets extended to about the 45th parallel north. These sheets were up to several kilometers thick.
This Wisconsin glaciation left widespread impacts on the North American landscape. The Great Lakes and the Finger Lakes were carved by ice deepening old valleys. Most of the lakes in Minnesota and Wisconsin were gouged out by glaciers and later filled with glacial meltwaters. The old Teays River drainage system was radically altered and largely reshaped into the Ohio River drainage system. Other rivers were dammed and diverted into new channels; the Niagara River, for example, formed a dramatic waterfall and gorge when its flow encountered a limestone escarpment. Another similar waterfall, at the present Clark Reservation State Park near Syracuse, New York, is now dry.
The area from Long Island to Nantucket, Massachusetts was formed from glacial till, and the plethora of lakes on the Canadian Shield in northern Canada can be almost entirely attributed to the action of the ice. As the ice retreated and the rock dust dried, winds carried the material hundreds of miles, forming beds of loess many dozens of feet thick in the Missouri Valley. Post-glacial rebound continues to reshape the Great Lakes and other areas formerly under the weight of the ice sheets.
The Driftless Area, a portion of western and southwestern Wisconsin along with parts of adjacent Minnesota, Iowa, and Illinois, was not covered by glaciers.
Last Glacial Period in the semiarid Andes around Aconcagua and Tupungato
A particularly interesting climatic change during glacial times took place in the semi-arid Andes. In addition to the expected cooling relative to the current climate, a significant change in precipitation occurred here. Research in the presently semiarid subtropical Aconcagua massif (6,962 m) has revealed an unexpectedly extensive glaciation of the "ice stream network" type. Connected valley glaciers exceeding 100 km in length flowed down the eastern side of this section of the Andes, at 32–34°S and 69–71°W, to an elevation of 2,060 m, and on the western windward side to clearly lower elevations. Whereas current glaciers scarcely reach 10 km in length and the snowline (equilibrium line altitude, ELA) runs at a height of 4,600 m, at that time the snowline was lowered to 3,200 m asl, i.e. by about 1,400 m. It follows that, besides an annual temperature depression of about 8.4 °C, there was an increase in precipitation here. Accordingly, during glacial times the humid climatic belt that today lies several degrees of latitude further south was shifted much further north.
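The quoted snowline depression and temperature depression are mutually consistent under a standard lapse-rate assumption. The sketch below is our own back-of-envelope check; the lapse rate of 0.6 °C per 100 m is an assumed typical value, not a figure from this article:

```python
# Figures from the text: snowline lowered from 4,600 m to 3,200 m asl.
snowline_drop_m = 4600 - 3200      # = 1,400 m
lapse_rate_c_per_m = 0.6 / 100     # assumed environmental lapse rate

delta_t = snowline_drop_m * lapse_rate_c_per_m
print(f"{snowline_drop_m} m snowline drop -> ~{delta_t:.1f} °C cooling")  # 8.4 °C
# Any snowline lowering beyond what cooling alone explains points to the
# increase in precipitation inferred in the text.
```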
Effects of glaciation
Although the last glacial period ended more than 8,000 years ago, its effects can still be felt today. For example, the moving ice carved out the landscape in Canada (See Canadian Arctic Archipelago), Greenland, northern Eurasia and Antarctica. The erratic boulders, till, drumlins, eskers, fjords, kettle lakes, moraines, cirques, horns, etc., are typical features left behind by the glaciers. The weight of the ice sheets was so great that they deformed Earth's crust and mantle. After the ice sheets melted, the ice-covered land rebounded. Due to the high viscosity of Earth's mantle, the flow of mantle rocks which controls the rebound process is very slow—at a rate of about 1 cm/year near the center of rebound area today.
During glaciation, water was taken from the oceans to form the ice at high latitudes, thus global sea level dropped by about 110 meters, exposing the continental shelves and forming land bridges between land masses for animals to migrate. During deglaciation, the melted ice water returned to the oceans, causing sea level to rise. This process can cause sudden shifts in coastlines and hydrological systems, resulting in newly submerged lands, emerging lands, collapsed ice dams resulting in the salination of lakes, new ice dams creating vast areas of freshwater, and a general alteration in regional weather patterns on a large but temporary scale. It can even cause temporary reglaciation. This type of chaotic pattern of rapidly changing land, ice, saltwater and freshwater has been proposed as the likely model for the Baltic and Scandinavian regions, as well as much of central North America, at the end of the last glacial maximum, with the present-day coastlines only being achieved in the last few millennia of prehistory. Also, the effect of elevation changes on Scandinavia submerged a vast continental plain that had existed under much of what is now the North Sea, connecting the British Isles to Continental Europe.
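As a rough consistency check on the quoted ~110 m sea-level drop, one can estimate the volume of extra grounded ice it implies. The sketch below is our own estimate with assumed round constants; it ignores complications such as the changing ocean area and isostatic adjustment:

```python
# Assumed round values, not from the text.
OCEAN_AREA_M2 = 3.6e14    # global ocean surface area, m2
RHO_ICE = 917.0           # density of glacier ice, kg/m3
RHO_SEAWATER = 1027.0     # density of seawater, kg/m3

def ice_volume_for_drop(drop_m: float) -> float:
    """Ice volume (m3) whose mass equals the seawater layer removed."""
    seawater_volume = drop_m * OCEAN_AREA_M2          # m3 of seawater removed
    return seawater_volume * RHO_SEAWATER / RHO_ICE   # same mass, as ice

v = ice_volume_for_drop(110.0)
print(f"~{v / 1e15:.0f} million km3 of extra land ice")  # roughly 44 million km3
```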
The redistribution of ice-water on the surface of Earth and the flow of mantle rocks causes changes in the gravitational field as well as changes to the distribution of the moment of inertia of Earth. These changes to the moment of inertia result in a change in the angular velocity, axis, and wobble of Earth's rotation.
The weight of the redistributed surface mass loaded the lithosphere, caused it to flex and also induced stress within Earth. The presence of the glaciers generally suppressed the movement of faults below. During deglaciation, the faults experience accelerated slip triggering earthquakes. Earthquakes triggered near the ice margin may in turn accelerate ice calving and may account for the Heinrich events. As more ice is removed near the ice margin, more intraplate earthquakes are induced and this positive feedback may explain the fast collapse of ice sheets.
In Europe, glacial erosion and isostatic sinking from the weight of ice made the Baltic Sea, which before the Ice Age was all land drained by the Eridanos River.
Future ice ages
A 2015 report by the Past Global Changes Project says simulations show that a new glaciation is unlikely to happen within the next approximately 50,000 years, before the next strong drop in Northern Hemisphere summer insolation occurs, "if either atmospheric CO2 concentration remains above 300 ppm or cumulative carbon emissions exceed 1000 Pg C" (i.e. 1,000 gigatonnes carbon). "Only for an atmospheric CO2 content below the preindustrial level may a glaciation occur within the next 10 ka. ... Given the continued anthropogenic CO2 emissions, glacial inception is very unlikely to occur in the next 50 ka, because the timescale for CO2 and temperature reduction toward unperturbed values in the absence of active removal is very long [IPCC, 2013], and only weak precessional forcing occurs in the next two precessional cycles." (A precessional cycle is around 21,000 years, the time it takes for the perihelion to move all the way around the tropical year.)
Ice ages go through cycles of about 100,000 years, but the next one may well be avoided due to our carbon dioxide emissions.
See also
List of Ice Age species preserved as permafrost mummies
References
Works cited
Historical Simulation
External links
Cracking the Ice Age from PBS
Eduard Y. Osipov, Oleg M. Khlystov. Glaciers and meltwater flux to Lake Baikal during the Last Glacial Maximum.
Geological history of the Great Lakes
Glaciology
History of climate variability and change
History of science
The Decline of the West
The Decline of the West (German: Der Untergang des Abendlandes; more literally, The Downfall of the Occident) is a two-volume work by Oswald Spengler. The first volume, subtitled Form and Actuality, was published in the summer of 1918. The second volume, subtitled Perspectives of World History, was published in 1922. The definitive edition of both volumes was published in 1923.
Spengler introduced his book as a "Copernican overturning"—a specific metaphor of societal collapse—involving the rejection of the Eurocentric view of history, especially the division of history into the linear "ancient-medieval-modern" rubric. According to Spengler, the meaningful units for history are not epochs but whole cultures which evolve as organisms. In his framework, the terms "culture" and "civilization" were given non-standard definitions and cultures are described as having lifespans of about a thousand years of flourishing, and a thousand years of decline. To Spengler, the natural lifespan of these groupings was to start as a "race"; become a "culture" as it flourished and produced new insights; and then become a "civilization". Spengler differed from others in not seeing the final civilization stage as necessarily "better" than the earlier stages; rather, the military expansion and self-assured confidence that accompanied the beginning of such a phase was a sign that the civilization had arrogantly decided it had already understood the world and would stop creating bold new ideas, which would eventually lead to a decline. For example, to Spengler, the Classical world's culture stage was in Greek and early Roman thought; the expansion of the Roman Empire was its civilization phase; and the collapse of the Roman and Byzantine Empires their decline. He believed that the West was in its "evening", similar to the late Roman Empire, and approaching its eventual decline despite its seeming power.
Spengler recognized at least eight high cultures: Babylonian, Egyptian, Chinese, Indian, Mesoamerican (Mayan/Aztec), Classical (Greek/Roman, "Apollonian"), the non-Babylonian Middle East ("Magian"), and Western or European ("Faustian"). Spengler combined a number of groups under the "Magian" label; "Semitic", Arabian, Persian, and the Abrahamic religions in general as originating from them (Judaism, Christianity, Islam). Similarly, he combined various Mediterranean cultures of antiquity including both Ancient Greece and Ancient Rome as "Apollonian", and modern Westerners as "Faustian". According to Spengler, the Western world was ending and the final season, the "winter" of Faustian Civilization, was being witnessed. In Spengler's depiction, Western Man was a proud but tragic figure because, while he strives and creates, he secretly knows the actual goal will never be reached.
Creation
Spengler said that he conceived the book sometime in 1911 and spent three years finishing the first draft. At the start of World War I, he began revising it and completed the first volume in 1917. It was published the following year, when Spengler was 38, and was his first work apart from his doctoral thesis on Heraclitus. The second volume was published in 1922. The first volume is subtitled Form and Actuality; the second volume is Perspectives of World-history. Spengler's own view of the aims and intentions of the work was described in the prefaces and occasionally elsewhere, such as in the preface to Man and Technics.
Overview
Spengler's world-historical outlook was informed by many philosophers, including Goethe and to some degree Nietzsche. He described the significance of these two German philosophers and their influence on his worldview in his lecture Nietzsche and His Century. He called his analytical approach "Analogy. By these means we are enabled to distinguish polarity and periodicity in the world."
Morphology was a key part of Spengler's philosophy of history, using a methodology which approached history and historical comparisons on the basis of civilizational forms and structure, without regard to function.
In a footnote, Spengler described the essential core of his philosophical approach toward history, culture, and civilization:
Plato and Goethe stand for the philosophy of Becoming, Aristotle and Kant the philosophy of Being... Goethe's notes and verse... must be regarded as the expression of a perfectly definite metaphysical doctrine. I would not have a single word changed of this: "The Godhead is effective in the living and not in the dead, in the becoming and the changing, not in the become and the set-fast; and therefore, similarly, the reason is concerned only to strive towards the divine through the becoming and the living, and the understanding only to make use of the become and the set-fast. (Letter to Eckermann)" This sentence comprises my entire philosophy.
Scholars now agree that the word "decline" more accurately renders the intended meaning of Spengler's original German word "Untergang" (often translated as the more emphatic "downfall"; "Unter" being "under" and "gang" being "going", it is also accurately rendered in English as the "going under" of the West). Spengler said that he did not mean to describe a catastrophic occurrence, but rather a protracted fall—a "twilight" or "sunset" (Sonnenuntergang is German for sunset, and Abendland, the German word for the West or the Occident, literally means the "evening land"). In 1921, Spengler wrote that he might have used in his title the word Vollendung (which means 'fulfillment' or 'consummation') and saved a great deal of misunderstanding.
Spenglerian terms
Spengler invented certain terms with unusual meanings not commonly encountered in everyday discourse.
Culture/Civilization
Spengler used the two terms in a specific manner, loading them with particular values. For him, Civilization is what a Culture becomes once its creative impulses wane and become overwhelmed by critical impulses. Culture is the becoming, Civilization is the thing which a culture becomes. Rousseau, Socrates, and Buddha each mark the point where their Cultures transformed into Civilization. They each buried centuries of spiritual depth by presenting the world in rational terms—the intellect comes to rule once the soul has abdicated.
Apollonian/Magian/Faustian
These are Spengler's terms for Classical, Arabian and Western Cultures respectively.
Apollonian Culture and Civilization is focused around Ancient Greece and Rome. Spengler saw its world view as being characterized by appreciation for the beauty of the human body, and a preference for the local and the present moment. The Apollonian world sense was described as ahistorical, citing Thucydides' claim in his Histories that nothing of importance had happened before him. Spengler said that the Classical Culture did not feel the same anxiety as the Faustian when confronted with an undocumented event.
Magian Culture and Civilization includes the Jews from about 400 BC, early Christians and various Arabian religions up to and including Islam. He described it as having a world feeling that revolved around the concept of world as cavern, epitomized by the domed Mosque, and a preoccupation with essence. Spengler saw the development of this Culture as being distorted by a too-influential presence of older Civilizations, the initial vigorous expansionary impulses of Islam being in part a reaction against this.
Faustian Culture and Civilization, according to Spengler, began in Western Europe around the 10th century, and had such expansionary power that by the 20th century it was covering the entire earth, with only a few regions where Islam provided an alternative world view. He described it as having a world feeling inspired by the concept of infinitely wide and profound space, the yearning towards distance and infinity. The term "Faustian" is a reference to Goethe's Faust (Johann Wolfgang von Goethe had a massive effect on Spengler), in which a dissatisfied intellectual is willing to make a pact with the Devil in return for unlimited knowledge. Spengler believed that this represented the Western Man's limitless metaphysic, unrestricted thirst for knowledge, and constant confrontation with the Infinite.
Pseudomorphosis
The concept of pseudomorphosis is one that Spengler borrows from mineralogy, and it is introduced as a way of explaining what he calls half-developed or only partially manifested Cultures. Specifically, pseudomorphosis refers to an older Culture or Civilization being so deeply ingrained that a young Culture cannot find its own form and full expression of itself. In Spengler's words, the young soul is cast in the old molds, young feelings stiffen in senile practices, and instead of expanding creatively, the young Culture fosters hate toward the older one.
Spengler believed that a Magian pseudomorphosis began with the Battle of Actium, in which the gestating Arabian Culture was represented by Mark Antony and lost to the Classical Civilization. The battle was different from the conflict between Rome and Greece, which had been fought out at Cannae and Zama, with Hannibal being the representative of Hellenism. He said that Antony should have won at Actium, and his victory would have freed the Magian Culture, but his defeat imposed Roman Civilization on it.
In Russia, Spengler saw a young, undeveloped Culture in a pseudomorphosis under the Faustian (Petrine) form. He said that Peter the Great distorted the tsarism of Russia to the dynastic form of Western Europe. The burning of Moscow, as Napoleon was set to invade, he described as a primitive expression of hatred toward the foreigner. In the following entry of Alexander I into Paris, the Holy Alliance and the Concert of Europe, he said that Russia was forced into an artificial history before its culture was ready or capable of understanding its burden. This would result in a hatred toward Europe, which Spengler said poisoned the womb of an emerging new Culture in Russia. While he does not name the Culture, he said that Tolstoy is its past and Dostoyevsky is its future.
Becoming/Being
For Spengler, becoming is the basic element and being is static and secondary, not the other way around. He said that his philosophy in a nutshell is contained in these lines from Goethe: "the God-head is effective in the living and not in the dead, in the becoming and the changing, not in the become and the set-fast; and therefore, similarly the intuition is concerned only to strive towards the divine through the becoming and the living, and logic only to make use of the become and the set-fast".
Blood/Race
Spengler described blood as the only power strong enough to overthrow money, which he saw as the dominant power of his age. Blood is commonly understood to mean race-feeling, an interpretation that is partially true but misleading. Spengler's concept of race had nothing to do with ethnic identity, and in that sense he was hostile toward racists. The book states that a population becomes a race when it is united in outlook, regardless of its ethnic origins. Spengler also states that the final struggle with money will be a battle between capitalism and socialism, but again, it will be socialism in a specific sense: "the will to call into life a mighty politico-economic order that transcends all class interests, a system of lofty thoughtfulness and duty sense." He also writes: "A power can be overthrown only by another power, not by a principle, and only one power that can confront money is left. Money is overthrown and abolished by blood. Life is alpha and omega ... It is the fact of facts ... Before the irresistible rhythm of the generation-sequence, everything built up by the waking-consciousness in its intellectual world vanishes at the last." Therefore, if "blood" had to be replaced by a single word, "life-force" would be more accurate than "race-feeling".
Spengler's cultures
Spengler said that eight Hochkulturen or high cultures have existed:
Babylonian
Egyptiac
Indic
Sinic
Mesoamerican (Mayan/Aztec)
Apollonian or Classical (Greek/Roman)
Magian or Arabian
Faustian or Western (European)
The "Decline" is largely concerned with the Classical and Western (and to some degree Magian) Cultures, but some examples are taken from the Chinese and Egyptian. He said that each Culture arises within a specific geographical area and is defined by its internal coherence of style in terms of art, religious behavior and psychological perspective. In addition, each Culture is described as having a conception of space which is expressed by an "Ursymbol". Spengler said that his idea of Culture is justifiable through the existence of recurrent patterns of development and decline across the thousand years of each Culture's active lifetime.
Spengler did not classify the Southeast Asian and Peruvian (Incan, etc.) cultures as Hochkulturen. He thought that Russia was still defining itself, but was bringing into being a Hochkultur. The Indus Valley civilization had not been discovered at the time he was writing, and its relationship with later Indian civilization remained unclear for some time.
Themes
Meaning of history
Spengler distinguished between ahistorical peoples and peoples caught up in world history. While he recognized that all people are a part of history, he said that only certain Cultures have a wider sense of historical involvement, meaning that some people see themselves as part of a grand historical design or tradition, while others view themselves in a self-contained manner and have no world-historical consciousness.
For Spengler, a world-historical view is about the meaning of history itself, breaking the historian or observer out of a crude, culturally parochial classification of history. By learning about the different courses taken by other civilizations, people can better understand their own culture and identity. He said that those who still maintain a historical view of the world are the ones who continue to "make" history. He denied that life and humankind as a whole have an ultimate aim. However, he maintained a distinction between world-historical peoples and ahistorical peoples: the former will have a historical destiny as part of a High Culture, while the latter will have a merely zoological fate. He said that world-historical man's destiny is self-fulfillment as a part of his Culture. Further, Spengler said that not only is pre-cultural man without history, but a people loses its historical weight as its Culture becomes exhausted and turns into an ever more defined Civilization.
For example, Spengler classifies Classical and Indian civilizations as ahistorical, comparing them to the Egyptian and Western civilizations which developed conceptions of historical time. He sees all Cultures as equal in the study of world-historical development. This leads to a kind of historical relativism or dispensationalism. Historical data, in Spengler's mind, are an expression of their historical time, contingent upon and relative to that context. Thus, the insights of one era are not unshakable or valid in another time or Culture—"there are no eternal truths," and each individual has a duty to look beyond one's own Culture to see what individuals of other Cultures have with equal certainty created for themselves. He said that what is significant is not whether the past thinkers' insights are relevant today, but whether they were exceptionally relevant to the great facts of their own time.
Culture and civilization
Spengler's conception of Culture was organic: primitive Culture is simply the sum of its constituent and incoherent parts (individuals, tribes, clans, etc.). Higher Culture, in its maturity and coherence, becomes an organism in its own right, according to Spengler. A Culture is described as sublimating the various customs, myths, techniques, arts, peoples, and classes into a single strong undiffused historical tendency.
Spengler divided the concepts of Culture and Civilization, the former focused inward and growing, the latter outward and merely expanding. However, he sees Civilization as the destiny of every Culture. The transition is not a matter of choice—it is not the conscious will of individuals, classes, or peoples that decides. He said that while Cultures are "things-becoming", Civilizations are the "thing-become", with the distinction being that Civilizations are what Cultures become when they are no longer creative and growing. As the conclusion of a Culture's arc of growth, Civilizations are described as outwardly focused, and in that sense artificial or insincere. As an example, Spengler used the Greeks and Romans, saying that the imaginative Greek Culture declined into wholly practical Roman Civilization.
Spengler also compared the "world-city" and the province (urban and rural) as concepts analogous to Civilization and Culture respectively, with the city drawing upon and collecting the life of broad surrounding regions. He said there is a "true-type" rural-born person, in contrast to city-dwellers who are allegedly nomadic, traditionless, irreligious, matter-of-fact, clever, unfruitful, and contemptuous of the countryman. In his view, the cities contain only a "mob", not a people, and are hostile to the traditions that represent Culture (in Spengler's view these traditions are: nobility, the Christian Church, privileges, dynasties, convention in art, and limits on scientific knowledge). He said that city-dwellers possess a cold intelligence that confounds peasant wisdom, a naturalism in attitudes towards sex that marks a return to primitive instincts, and a reduced inner religiousness. Further, Spengler saw urban wage disputes and large entertainment expenditures as the final aspects that signal the closing of Culture and the rise of Civilization.
Spengler had a low opinion of Civilizations, even those that engaged in significant expansion, because he said that expansion was not actual growth. One of his principal examples was Roman "world domination". In his view, the Romans faced no significant resistance to their expansion, so it was not an achievement: they did not so much conquer their empire as simply take possession of what lay open to everyone. Spengler said this contrasts with the displays of Cultural energy the Romans had shown during the Punic Wars. He believed that after the Battle of Zama, the Romans never waged, nor were even capable of waging, a war against a competing great military power.
Races, peoples, and cultures
According to Spengler, a race has "roots", like a plant, which connect it to a landscape. "If, in that home, the race cannot be found, this means the race has ceased to exist. A race does not migrate. Men migrate, and their successive generations are born in ever-changing landscapes; but the landscape exercises a secret force upon the extinction of the old and the appearance of the new one." In this instance, he uses the word "race" in the tribal and cultural rather than the biological sense, a 19th-century use of the word still common when Spengler wrote.
For this reason, he said, a race is not exactly like a plant.
He distinguished his conception of race from the sort of pseudo-anthropological notions commonly held when the book was written, dismissing the idea of "an Aryan skull and a Semitic skull". Nor did he believe language alone is sufficient to create races, arguing that "the mother tongue" signifies "deep ethical forces" in Late Civilizations rather than in Early Cultures, when a race is still developing the language that fits its "race-ideal".
Closely connected to race, Spengler defined a "people" as a unit of the soul, saying, "The great events of history were not really achieved by peoples; they themselves created the peoples. Every act alters the soul of the doer." He described such events as including migrations and wars, saying that the American people did not migrate from Europe, but were formed by events such as the American Revolution and the American Civil War. "Neither unity of speech nor physical descent is decisive." He said that what distinguishes a people from a population is "the inwardly lived experience of 'we'", and that this exists so long as a people's soul lasts: "The name Roman in Hannibal's day meant a people, in Trajan's time nothing more than a population." In Spengler's view, "Peoples are neither linguistic nor political nor zoological, but spiritual units."
Spengler disliked the contemporary trend of using a biological definition for race, saying, "Of course, it is quite often justifiable to align peoples with races, but 'race' in this connexion must not be interpreted in the present-day Darwinian sense of the word. It cannot be accepted, surely, that a people were ever held together by the mere unity of physical origin, or, if it were, could maintain that unity for ten generations. It cannot be too often reiterated that this physiological provenance has no existence except for science—never for folk-consciousness—and that no people was ever stirred to enthusiasm by this ideal of blood purity. In race (Rasse haben) there is nothing material but something cosmic and directional, the felt harmony of a Destiny, the single cadence of the march of historical Being. It is the incoordination of this (wholly metaphysical) beat which produces race hatred... and it is resonance on this beat that makes the true love—so akin to hate—between man and wife."
To Spengler, peoples are formed from early prototypes during the Early phase of a Culture. In his view, "Out of the people-shapes of the Carolingian Empire—the Saxons, Swabians, Franks, Visigoths, Lombards—arise suddenly the Germans, the French, the Spaniards, the Italians." He describes these peoples as products of the spiritual "race" of the great Cultures, and "people under a spell of a Culture are its products and not its authors. These shapes in which humanity is seized and moulded possess style and style-history no less than kinds of art or mode of thought. The people of Athens is a symbol not less than the Doric temple, the Englishman not less than modern physics. There are peoples of Apollonian, Magian, and Faustian cast ... World history is the history of the great Cultures, and peoples are but the symbolic forms and vessels in which the men of these Cultures fulfill their Destinies."
In saying that race and culture are tied together, Spengler echoes ideas similar to those of Friedrich Ratzel and Rudolf Kjellén. These ideas, which figure prominently in the second volume of the book, were common throughout German culture at the time.
In his later works, such as Man and Technics (1931) and The Hour of Decision (1933), Spengler expanded upon his "spiritual" theory of race and tied it to his metaphysical notion of eternal war and his belief that "Man is a beast of prey". The Nazi authorities, however, banned The Hour of Decision.
Religion and secularity
Spengler differentiated between the manifestations of religion that appear within a Culture's developmental cycle. He saw each Culture as having an initial religious identity, which arises out of the Culture's fundamental principle and follows a trajectory correlating with that of the Culture itself. The religion eventually passes through a reformation-like period after the Culture-Ideal has reached its peak and fulfillment. Spengler viewed a reformation as a mark of decline: it is followed by a period of rationalism and then by a period of Second Religiousness that accompanies the final descent. He said that the intellectual creativity of a Culture's Late period begins after the reformation, usually ushering in new freedoms in science.
According to Spengler, the scientific stage associated with post-reformation Puritanism contains the fundamentals of Rationalism, and eventually rationalism spreads throughout the Culture and becomes the dominant school of thought. To Spengler, Culture is synonymous with religious creativeness, and every great Culture begins with a religious trend that arises in the countryside, is carried through to the cultural cities, and ends in materialism in the world-cities.
Spengler believed that Enlightenment rationalism undermines and destroys itself, describing a process that passes from unlimited optimism to unqualified skepticism. He said that Cartesian self-centered rationalism leads to schools of thought that do not cognize outside of their own constructed worlds, ignoring actual everyday life experience, and that apply criticism to their own artificial world until it exhausts itself in meaninglessness. In his view, the masses give rise to the Second Religiousness in reaction to the educated elites, and this manifests as deep suspicion of academia and science.
Spengler said that the Second Religiousness is a harbinger of the decline of mature Civilization into an ahistorical state and occurs concurrently with Caesarism, the final political constitution of Late Civilization. He describes Caesarism as the rise of an authoritarian ruler, a new 'emperor' akin to Caesar or Augustus, taking the reins in reaction to a decline in creativity, ideology and energy after a Culture has reached its high point and become a Civilization. He said that the Second Religiousness and Caesarism demonstrate a lack of youthful strength or creativity, and the Second Religiousness is simply a rehashing of the original religious trend of the Culture.
Democracy, media, and money
Spengler said that democracy is the political weapon of "money", and the media are the means through which money operates a democratic political system. The penetration of money's power throughout a society is described as another marker of the shift from Culture to Civilization.
Democracy and plutocracy are equivalent in Spengler's argument, and he said the "tragic comedy of the world-improvers and freedom-teachers" is that they are simply assisting money to be more effective. He believed that the principles of equality, natural rights, universal suffrage, and freedom of the press are all disguises for the class war of the bourgeoisie against the aristocracy. Freedom, to Spengler, is a negative concept, entailing only the repudiation of any tradition. He said that freedom of the press requires money and entails ownership, meaning that it serves money. Similarly, since suffrage involves electoral campaigns, which involve donations, elections serve money as well. Spengler said that the ideologies espoused by candidates, whether Socialism or Liberalism, are set in motion by, and ultimately serve, only money.
Spengler said that in his era money had already won, in the form of democracy. However, he said that in destroying the old elements of the Culture, money prepares the way for the rise of a new and overpowering figure, whom he calls the Caesar. Before such a leader, money collapses, and in the Imperial Age the politics of money fades away.
Spengler said that the use of one's constitutional rights requires money, and that voting can only work as designed in the absence of organized leadership working on the election process. He said that if the election process is organized by political leaders, to the extent that money allows, the vote ceases to be truly significant. In his view, it is no more than a recorded opinion of the masses on the organizations of government over which they possess no positive influence. He said that the greater the concentration of wealth in individuals, the more the fight for political power revolves around questions of money. He believed that this was the necessary end of mature democratic systems, rather than being corruption or degeneracy.
On the subject of the press, Spengler said that instead of conversations between men, the press and the "electrical news-service keep the waking-consciousness of whole peoples and continents under a deafening drum-fire of theses, catchwords, standpoints, scenes, feelings, day by day and year by year." He said that money uses the media to turn itself into force: the more spent, the more intense its influence. In addition, a functioning press requires universal education, and he said schooling leads to a demand for the shepherding of the masses, which then becomes an object of party politics. To Spengler, people who believe in the ideal of education prepare the way for the power of the press, and eventually for the rise of the Caesar. He also said there is no longer a need for leaders to impose military service, because the press will stir the public into a frenzy and force their leaders into a conflict.
Spengler believed that the only force which can counter money is blood. He said that Marx's critique of capitalism was put forth in the same language and on the same assumptions as capitalism itself, making it more an acknowledgment of capitalism's validity than a refutation. He said the only aim of Marxism is to "confer upon objects the advantage of being subjects."
Future
The formation of the "battling society of nations" marks the beginning of every Civilization. In the following phase, the size of armies and the scale of warfare increase. For the West, the time of Warring States began with Napoleon, who introduced an idea of military world domination different from that of the preceding European maritime empires. The trend continued with the American Civil War and the "explosion" of the First World War (the book was published before the Second World War). The coming century will be one of actually Warring States. "Within two generations" (from 1922) will begin the contest "for the heritage of the whole world", with continents at stake. The destinies of small states are "without importance to the great march of things." There are ages of "gigantic conflicts", like the Warring States period in China and the wars of the contemporary Roman world. We find ourselves in one such age today, and it is accelerated by modern military technology.
"The way from Alexander to Caesar is unambiguous and unavoidable, and the strongest nation of any and every culture, consciously or unconsciously, willing or unwilling, has had to tread it. From the rigor of these facts there is no refuge." The Hague Conference of 1907 was the prelude of World War, the Washington Conference of 1921 will have been that of other wars. "The alternatives now are to stand fast or go under—there is no middle course. It falls to us to live in the most trying times known to history of a great culture." The strongest race will win and seize the management of the world.
Synchronously with the acceleration of warfare and the rise of the strongest race to world management, there occurs an "accelerating demolition of ancient forms that leaves the path clear to Caesarism." This phase began in China c. 600 BC, in the Mediterranean c. 450 BC and in the modern world c. 1700. Comparing these three ages, Spengler states that "Caesarism" is an inevitable product of such an age and that it "suddenly outlines itself on the horizon." In China the culmination came with the First Emperor, in the Mediterranean with Sulla and Pompey, and in our world it is forthcoming. Spengler selected the Chinese and Roman Empires as the most relevant models for the future and argued that the modern world is undergoing the same evolution towards "Caesarism", but now on a world-wide scale. The present is the last century of the pre-Imperial age of world history, to be followed by the "Imperial Age" with the rise of the Caesar. The transition from "Napoleonism to Caesarism" is an evolutionary stage universal to every Culture and takes two centuries. Hence, modern "Caesarism" is expected within "one century" (that is, by around 2022).
Caesarism grows on the soil of democracy, which is itself a dictatorship of money-economics. The mighty ones of the future may possess the Earth as their private property, but they would bear the task of caring for this world, and this task conflicts with the interests of the democratic money-power age. Hence there now sets in the final battle between democracy and "Caesarism", in which the latter is destined to prevail. "The coming of Caesarism breaks the dictature of money and its political weapon democracy."
Despite Spengler's negative view of democracy, he is not positive about "Caesarism" either. Once the "Imperial Age" of world history has arrived, there are no more great politics; people simply manage with the situation as it is. In the period of Warring States, "torrents of blood had reddened the pavements of all world cities for the winning of rights without which life seemed not worth the living. A hundred years into the Imperial Age, and even the historians will no longer understand the old controversies." "Caesarism" means a "kind of government which, irrespective of any constitutional formulation that it may have, is in its inward self a return to thorough formlessness." It does not matter that the Caesars in history disguised their position under antique forms (such as the Senate and the Roman People in the Roman Imperial Age). The spirit of these forms was dead, and so all institutions, however carefully maintained, were thenceforth destitute of all meaning and weight; real importance centered in the wholly personal power exercised by the Caesar. A form-fulfilled world degenerates into primitivism, and historical periods are replaced by biological stretches of time. Wars between states end, replaced by private feuds between Caesars. With the accomplished state of "Caesarism", "high history lays itself down weary to sleep. Man becomes a plant again, dumb and enduring."
Reception
The Decline of the West was widely read by German intellectuals. It has been suggested that it intensified a sense of crisis in Germany following the end of World War I. George Steiner suggested that the work can be seen as one of several books that resulted from the crisis of German culture following Germany's defeat in World War I, comparable in this respect to the philosopher Ernst Bloch's The Spirit of Utopia (1918), the theologian Franz Rosenzweig's The Star of Redemption (1921), the theologian Karl Barth's The Epistle to the Romans (1922), Nazi Party leader Adolf Hitler's Mein Kampf (1925), and the philosopher Martin Heidegger's Being and Time (1927).
The book received unfavorable reviews from most scholars even before the release of the second volume, and the stream of criticisms continued for decades. Nevertheless, in Germany the book enjoyed popular success: by 1926 some 100,000 copies were sold.
A 1928 Time review of the second volume of Decline described the immense influence and controversy Spengler's ideas enjoyed in the 1920s: "When the first volume of The Decline of the West appeared in Germany a few years ago, thousands of copies were sold. Cultivated European discourse quickly became Spengler-saturated. Spenglerism spurted from the pens of countless disciples. It was imperative to read Spengler, to sympathize or revolt. It still remains so."
Critique
In 1950, the philosopher Theodor W. Adorno published an essay entitled "Spengler after the Downfall" (German: "Spengler nach dem Untergang") to commemorate what would have been Spengler's 70th birthday. Adorno reassessed Spengler's thesis three decades after it had been put forth, in light of the catastrophic destruction of Nazi Germany (although Spengler had not meant "Untergang" in a cataclysmic sense, this was how most authors after World War II interpreted it). As a member of the Frankfurt School of Marxist critical theory, Adorno said he wanted to "turn [Spengler's] reactionary ideas toward progressive ends." He believed that Spengler's insights were often more profound than those of his more liberal contemporaries, and his predictions more far-reaching. Adorno saw the rise of the Nazis as confirmation of Spengler's ideas about "Caesarism" and the triumph of force-politics over the market. Adorno also drew parallels between Spengler's description of the Enlightenment and his own analysis. However, Adorno criticized Spengler for an overly deterministic view of history, one which ignored the unpredictable role that human initiative plays at all times. He quoted the Austrian poet Georg Trakl (1887–1914), "How sickly seems everything that grows" (from the poem "Heiterer Frühling"), to illustrate that decay contains new opportunities for renewal. He also criticized Spengler's use of language, which he called overly reliant on fetishistic terms like "Soul", "Blood" and "Destiny". Pope Benedict XVI disagreed with Spengler's "biologistic" thesis, citing the arguments of Arnold J. Toynbee, who distinguished between technological-material progress and spiritual progress in Western civilizations.
György Lukács criticized The Decline of the West in his 1953 book The Destruction of Reason, in a chapter focusing on Oswald Spengler. Dismissing Spengler as a "dilettantish amateur" on the factual level, and as of a lesser "philosophical level" than the German vitalist (Lebensphilosophie) and irrationalist thinkers before him, Lukács saw The Decline of the West as a "victory of extreme historical relativism". Describing the work as amateurish, pseudo-historical and exemplary of irrationalist thought, Lukács attacked Spengler for "rejecting causality and laws, recognizing them as the only historical phenomena of given epochs and denying them any competence for scientific and philosophical methodology" and for substituting analogy for causality, making "(often shallow) similarities his canon of investigation". Lukács argued that the work was primarily Spengler's attempt to make "all fields of human knowledge subservient to his philosophy of history, no matter whether he personally had truly mastered them or whether they, in themselves, had already yielded unequivocal, philosophically applicable results".
The German philosopher and sociologist Max Horkheimer also saw The Decline of the West and Spengler in a negative light, describing the work as a "superficial synthesis of poorly understood material from a wide variety of fields" and condemning Spengler as a Lebensphilosophie populist of the "worst sort".
Influence
Chechen warlord Shamil Basayev was given the Decline as a gift by a Russian radio journalist. He reportedly read it in one night and settled on his plan for organizing life in the Chechen Republic of Ichkeria.
Samuel Huntington seems to have been heavily influenced by Spengler's The Decline of the West in his "Clash of Civilizations" theory.
Joseph Campbell, an American professor, writer and orator who is best known for his work in the fields of comparative mythology and comparative religion, claimed that Decline of the West was the biggest influence on him.
Northrop Frye, reviewing the Decline of the West, said that "If... nothing else, it would still be one of the world's great Romantic poems".
Oswald Mosley identified the book as a critical influence on his political conversion from far-left to far-right politics and his subsequent foundation of the British Union of Fascists.
Ludwig Wittgenstein named Spengler as one of his philosophical influences.
Camille Paglia has listed The Decline of the West as one of the influences on her 1990 work of literary criticism Sexual Personae.
William S. Burroughs referred repeatedly to Decline as a pivotal influence on his thoughts and work.
Martin Heidegger was deeply affected by Spengler's work, and referred to him often in his early lecture courses.
James Blish used many of Spengler's ideas in his books Cities in Flight.
Francis Parker Yockey wrote Imperium: The Philosophy of History and Politics, published under the pen name Ulick Varange in 1948. In its introduction, this book is described as a "sequel" to The Decline of the West.
Whittaker Chambers often refers to "Crisis", a concept influenced by Spengler, in Witness (where it appears on more than 50 pages, including a dozen mentions on the first page), in Cold Friday (1964, more than 30 pages), and in other pre-Hiss Case writings. ("His central feeling, repeated in hundreds of statements and similes, is that the West is going into its Spenglerian twilight, a breaking down in which Communism is more a symptom than an agent.")
The title of Pat Buchanan's book The Death of the West is a reference to The Decline of the West.
The title of Evelyn Waugh's novel Decline and Fall alludes to both The Decline of the West and Edward Gibbon's The Decline and Fall of the Roman Empire.
H. P. Lovecraft was heavily influenced by the book.
William Gaddis was heavily influenced by the book.
Bibliography
Spengler, Oswald. The Decline of the West. Ed. Arthur Helps and Helmut Werner. Trans. Charles F. Atkinson. Preface by H. Stuart Hughes. New York: Oxford UP, 1991.
Editions
In 2021, unabridged versions of both volumes of The Decline of the West (Form & Actuality and its follow-up Perspectives of World-History) were reissued by Arktos Media, which retains the right to publish the original English translations by Charles Francis Atkinson.
See also
All That Is Solid Melts into Air (epilogue: "The Faustian and Pseudo-Faustian Age")
Conservative revolution
Degeneration theory
Historic recurrence
On the Plurality of Civilisations
Social cycle theory
Further reading
William H. McNeill, The Rise of the West: A History of the Human Community [With a Retrospective Essay], University of Chicago Press, 1963 [1991].
Scruton, Roger, "Spengler's Decline of the West" in The Philosopher on Dover Beach, Manchester: Carcanet Press, 1990.
External links
Spengler, Oswald, The Decline of the West v. 1 (©1926) and v. 2 (©1928), Alfred A. Knopf
Unabridged text
Archaeological culture
An archaeological culture is a recurring assemblage of types of artifacts, buildings and monuments from a specific period and region that may constitute the material culture remains of a particular past human society. The connection between these types is an empirical observation; their interpretation in terms of ethnic or political groups rests on archaeologists' judgment and is often the subject of long-unresolved debate. The concept of the archaeological culture is fundamental to culture-historical archaeology.
Concept
Different cultural groups have material culture items that differ both functionally and aesthetically due to varying cultural and social practices. This notion is observably true on the broadest scales. For example, the equipment associated with the brewing of tea varies greatly across the world. Social relations to material culture often include notions of identity and status.
Advocates of culture-historical archaeology use the notion to argue that sets of material culture can be used to trace ancient groups of people that were either self-identifying societies or ethnic groups. Archaeological culture is a classifying device to order archaeological data, focused on artifacts as an expression of culture rather than on people. The classic definition of this idea comes from Gordon Childe, who defined a culture as a complex of regularly associated types of material remains that constantly recur together.
The concept of an archaeological culture was crucial to linking the typological analysis of archaeological evidence to mechanisms that attempted to explain why they change through time. The key explanations favoured by culture-historians were the diffusion of forms from one group to another or the migration of the peoples themselves. A simplistic example of the process might be that if one pottery-type had handles very similar to those of a neighbouring type but decoration similar to a different neighbour, the idea for the two features might have diffused from the neighbours. Conversely, if one pottery-type suddenly replaces a great diversity of pottery types in an entire region, that might be interpreted as a new group migrating in with this new style.
This idea of culture is known as normative culture. It relies on the assumption, found in the view of archaeological culture, that the artifacts found are "an expression of cultural norms," and that these norms define culture. The view is also polythetic: multiple artifact types must be found together for a site to be classified under a specific archaeological culture. One trait alone does not constitute a culture; rather, a combination of traits is required.
This view of culture gives life to the artifacts themselves. "Once 'cultures' are regarded as things, it is possible to attribute behavior to them, and to talk about them as if they were living organisms."
Archaeological cultures were generally equated with separate 'peoples' (ethnic groups or races) leading in some cases to distinct nationalist archaeologies.
Most archaeological cultures are named after either the type artifact or type site that defines the culture. For example, cultures may be named after pottery types such as Linear Pottery culture or Funnelbeaker culture. More frequently, they are named after the site at which the culture was first defined such as the Hallstatt culture or Clovis culture.
Since the term "culture" has many different meanings, scholars have also coined the more specific term "paleoculture" as a designation for prehistoric cultures. Critics argue that there is no strong consensus on the epistemological aims of cultural taxonomy.
Development
The use of the term "culture" entered archaeology through 19th-century German ethnography, where the Kultur of tribal groups and rural peasants was distinguished from the Zivilisation of urbanised peoples. In contrast to the broader use of the word that was introduced to English-language anthropology by Edward Burnett Tylor, Kultur was used by German ethnologists to describe the distinctive ways of life of a particular people or Volk, in this sense equivalent to the French civilisation. Works of Kulturgeschichte (culture history) were produced by a number of German scholars, particularly Gustav Klemm, from 1780 onwards, reflecting a growing interest in ethnicity in 19th-century Europe.
The first use of "culture" in an archaeological context was in Christian Thomsen's 1836 work Ledetraad til Nordisk Oldkyndighed. In the later half of the 19th century archaeologists in Scandinavia and central Europe increasingly made use of the German concept of culture to describe the different groups they distinguished in the archaeological record of particular sites and regions, often alongside and as a synonym of "civilisation". It was not until the 20th century and the works of German prehistorian and fervent nationalist Gustaf Kossinna that the idea of archaeological cultures became central to the discipline. Kossinna saw the archaeological record as a mosaic of clearly defined cultures (or Kultur-Gruppen, culture groups) that were strongly associated with race. He was particularly interested in reconstructing the movements of what he saw as the direct prehistoric ancestors of Germans, Slavs, Celts and other major Indo-European ethnic groups in order to trace the Aryan race to its homeland or Urheimat.
The strongly racist character of Kossinna's work meant it had little direct influence outside of Germany at the time (though the Nazi Party later enthusiastically embraced his theories), or at all after World War II. However, the more general "culture history" approach to archaeology that he began did replace social evolutionism as the dominant paradigm for much of the 20th century. Kossinna's basic concept of the archaeological culture, stripped of its racial aspects, was adopted by Vere Gordon Childe and Franz Boas, at the time the most influential archaeologists in Britain and America respectively. Childe, in particular, was responsible for formulating the definition of archaeological culture that still largely applies today. He defined an archaeological culture as artifacts and remains that consistently occur together. This introduced a "new and discrete usage of the term which was significantly different from current anthropological usage." His definition was purely a classifying device to order the archaeological data.
Though he was sceptical about identifying particular ethnicities in the archaeological record and inclined much more to diffusionism than to migrationism to explain culture change, Childe and later culture-historical archaeologists, like Kossinna, still equated separate archaeological cultures with separate "peoples". Later archaeologists have questioned this straightforward relationship between material culture and human societies. The definition of archaeological cultures and their relationship to past people has become less clear; in some cases, what was believed to be a monolithic culture is shown by further study to comprise several discrete societies. For example, the Windmill Hill culture now serves as a general label for several different groups that occupied southern Great Britain during the Neolithic. Others have argued that some supposedly distinctive cultures are manifestations of a wider culture showing local differences based on environmental factors, such as those related to Clactonian man. Conversely, archaeologists may draw a distinction between material cultures that actually belonged to a single cultural group. It has been highlighted, for example, that village-dwelling and nomadic Bedouin Arabs have radically different material cultures even though in other respects they are very similar. In the past, such synchronous findings were often interpreted as representing intrusion by other groups.
Criticism
The concept of archaeological cultures is itself a divisive subject within the archaeological field. When first developed, the archaeological culture was viewed as a reflection of actual human culture.
This view of culture would be "entirely satisfactory if the aim of archaeology was solely the definition and description of these entities." However, as the 1960s arrived and archaeology sought to become more scientific, archaeologists wanted to do more than merely describe the artifacts and archaeological cultures they found.
Critics charged that the concept of archaeological culture was "idealist", as it assumes that norms and ideas are "important in the definition of cultural identity." It stresses the particularity of cultures: why and how they differ from adjacent groups. Processualists and other subsequent critics of culture-historical archaeology argued that archaeological culture treated culture as "just a rag-tag assemblage of ideas."
Archaeological culture remains useful today for sorting and assembling artifacts, especially in European archaeology, which often leans towards culture-historical approaches.
See also
Relative dating (archaeology) – Determination of the relative order of archaeological layers and artifacts
Sequence (archaeology) – Stratigraphy of the archaeological record, used as part of the 'seriation' method of relative dating
External links
What is an archeological culture? – Academia.edu
Neolithic
The Neolithic or New Stone Age (from Greek neos 'new' and lithos 'stone') is an archaeological period, the final division of the Stone Age in Europe, Asia, Mesopotamia and Africa (c. 10,000 BC to c. 2,000 BC). It saw the Neolithic Revolution, a wide-ranging set of developments that appear to have arisen independently in several parts of the world. This "Neolithic package" included the introduction of farming, the domestication of animals, and a change from a hunter-gatherer lifestyle to one of settlement. The term "Neolithic" was coined by Sir John Lubbock in 1865 as a refinement of the three-age system.
The Neolithic began about 12,000 years ago, when farming appeared in the Epipalaeolithic Near East and Mesopotamia, and later in other parts of the world. It lasted in the Near East until the transitional period of the Chalcolithic (Copper Age) from about 6,500 years ago (4500 BC), marked by the development of metallurgy, leading up to the Bronze Age and Iron Age.
In other places, the Neolithic followed the Mesolithic (Middle Stone Age) and lasted until later. In Ancient Egypt, the Neolithic lasted until the Protodynastic period, c. 3150 BC. In China it lasted until c. 2000 BC, with the rise of the pre-Shang Erlitou culture, and in Scandinavia it likewise ended around 2000 BC.
Origin
Following the ASPRO chronology, the Neolithic started in around 10,200 BC in the Levant, arising from the Natufian culture, when pioneering use of wild cereals evolved into early farming. The Natufian period or "proto-Neolithic" lasted from 12,500 to 9,500 BC, and is taken to overlap with the Pre-Pottery Neolithic (PPNA) of 10,200–8800 BC. As the Natufians had become dependent on wild cereals in their diet, and a sedentary way of life had begun among them, the climatic changes associated with the Younger Dryas (about 10,000 BC) are thought to have forced people to develop farming.
The founder crops of the Fertile Crescent were wheat, lentil, pea, chickpeas, bitter vetch, and flax. Other major crops, domesticated elsewhere, were rice, millet, maize (corn), and potatoes. Crops were usually domesticated in a single location, and their ancestral wild species are still found.
Early Neolithic farming was limited to a narrow range of plants, both wild and domesticated, including einkorn wheat, millet and spelt, together with the keeping of dogs. By about 8000 BC, it included domesticated sheep and goats, cattle and pigs.
Not all of these cultural elements characteristic of the Neolithic appeared everywhere in the same order: the earliest farming societies in the Near East did not use pottery. In other parts of the world, such as Africa, South Asia and Southeast Asia, independent domestication events led to their own regionally distinctive Neolithic cultures, which arose completely independently of those in Europe and Southwest Asia. Early Japanese societies and other East Asian cultures used pottery before developing agriculture.
Periods by region
Southwest Asia
In the Middle East, cultures identified as Neolithic began appearing in the 10th millennium BC. Early development occurred in the Levant (e.g. Pre-Pottery Neolithic A and Pre-Pottery Neolithic B) and from there spread eastwards and westwards. Neolithic cultures are also attested in southeastern Anatolia and northern Mesopotamia by around 8000 BC.
Anatolian Neolithic farmers derived a significant portion of their ancestry from the Anatolian hunter-gatherers (AHG), suggesting that agriculture was adopted in situ by these hunter-gatherers and did not spread by demic diffusion into the region.
Pre-Pottery Neolithic A
The Neolithic 1 (PPNA) period began around 10,000 BC in the Levant. A temple area in southeastern Turkey at Göbekli Tepe, dated to around 9500 BC, may be regarded as the beginning of the period. This site was developed by nomadic hunter-gatherer tribes, as evidenced by the lack of permanent housing in the vicinity, and may be the oldest known human-made place of worship. At least seven stone circles contain limestone pillars carved with animals, insects, and birds. Stone tools were used by perhaps as many as hundreds of people to create the pillars, which might have supported roofs. Other early PPNA sites dating to around 9500–9000 BC have been found at Tell es-Sultan (ancient Jericho) in the West Bank; in Israel (notably Ain Mallaha, Nahal Oren, and Kfar HaHoresh); at Gilgal in the Jordan Valley; and at Byblos, Lebanon. The start of Neolithic 1 overlaps the Tahunian and Heavy Neolithic periods to some degree.
The major advance of Neolithic 1 was true farming. In the proto-Neolithic Natufian cultures, wild cereals were harvested, and perhaps early seed selection and re-seeding occurred. The grain was ground into flour. Emmer wheat was domesticated, and animals were herded and domesticated (animal husbandry and selective breeding).
In 2006, remains of figs were discovered in a house in Jericho dated to 9400 BC. The figs are of a mutant variety that cannot be pollinated by insects, and therefore the trees can only reproduce from cuttings. This evidence suggests that figs were the first cultivated crop and mark the invention of the technology of farming. This occurred centuries before the first cultivation of grains.
Settlements became more permanent, with circular houses, much like those of the Natufians, with single rooms. However, these houses were for the first time made of mudbrick. The settlement had a surrounding stone wall and perhaps a stone tower (as in Jericho). The wall served as protection from nearby groups, as protection from floods, or to keep animals penned. Some of the enclosures also suggest grain and meat storage.
Pre-Pottery Neolithic B
The Neolithic 2 (PPNB) began around 8800 BC according to the ASPRO chronology in the Levant (Jericho, West Bank). As with the PPNA dates, there are two versions of the chronology from the same laboratories. This system of terminology, however, is not convenient for southeast Anatolia and settlements of the middle Anatolia basin. A settlement of 3,000 inhabitants called 'Ain Ghazal was found in the outskirts of Amman, Jordan. Considered to be one of the largest prehistoric settlements in the Near East, it was continuously inhabited from approximately 7250 BC to approximately 5000 BC.
Settlements have rectangular mud-brick houses where the family lived together in single or multiple rooms. Burial findings suggest an ancestor cult where people preserved skulls of the dead, which were plastered with mud to make facial features. The rest of the corpse could have been left outside the settlement to decay until only the bones were left, then the bones were buried inside the settlement underneath the floor or between houses.
Pre-Pottery Neolithic C
Work at the site of 'Ain Ghazal in Jordan has indicated a later Pre-Pottery Neolithic C period. Juris Zarins has proposed that a Circum Arabian Nomadic Pastoral Complex developed in the period from the climatic crisis of 6200 BC, partly as a result of an increasing emphasis in PPNB cultures upon domesticated animals, and a fusion with Harifian hunter-gatherers in the Southern Levant, with affiliate connections with the cultures of Fayyum and the Eastern Desert of Egypt. Cultures practicing this lifestyle spread down the Red Sea shoreline and moved east from Syria into southern Iraq.
Late Neolithic
The Late Neolithic began around 6400 BC in the Fertile Crescent. By then, distinctive cultures had emerged, with pottery like the Halafian (Turkey, Syria, Northern Mesopotamia) and Ubaid (Southern Mesopotamia). This period has been further divided into PNA (Pottery Neolithic A) and PNB (Pottery Neolithic B) at some sites.
The Chalcolithic (Copper Age) period began about 4500 BC; the Bronze Age began about 3500 BC, replacing the Neolithic cultures.
Fertile Crescent
Around 10,000 BC the first fully developed Neolithic cultures belonging to the phase Pre-Pottery Neolithic A (PPNA) appeared in the Fertile Crescent. Around 10,700–9400 BC a settlement was established in Tell Qaramel, north of Aleppo. The settlement included two temples dating to 9650 BC. Around 9000 BC during the PPNA, one of the world's first towns, Jericho, appeared in the Levant. It was surrounded by a stone wall, may have contained a population of up to 2,000–3,000 people, and contained a massive stone tower. Around 6400 BC the Halaf culture appeared in Syria and Northern Mesopotamia.
In 1981, a team of researchers from the Maison de l'Orient et de la Méditerranée, including Jacques Cauvin and Oliver Aurenche, divided Near East Neolithic chronology into ten periods (0 to 9) based on social, economic and cultural characteristics. In 2002, Danielle Stordeur and Frédéric Abbès advanced this system with a division into five periods.
Natufian between 12,000 and 10,200 BC,
Khiamian between 10,200 and 8800 BC, PPNA: Sultanian (Jericho), Mureybetian,
Early PPNB (PPNB ancien) between 8800 and 7600 BC,
Middle PPNB (PPNB moyen) between 7600 and 6900 BC,
Late PPNB (PPNB récent) between 7500 and 7000 BC,
A PPNB (sometimes called PPNC) transitional stage (PPNB final), in which Halaf and dark-faced burnished ware begin to emerge, between 6900 and 6400 BC.
They also advanced the idea of a transitional stage between the PPNA and PPNB between 8800 and 8600 BC at sites like Jerf el Ahmar and Tell Aswad.
Southern Mesopotamia
The alluvial plains of southern Mesopotamia (Sumer/Elam) receive low rainfall, making irrigation systems necessary. The Ubaid culture developed there from 6900 BC.
Northeastern Africa
The earliest evidence of Neolithic culture in northeast Africa was found in the archaeological sites of Bir Kiseiba and Nabta Playa in what is now southwest Egypt. Domestication of sheep and goats reached Egypt from the Near East possibly as early as 6000 BC. Graeme Barker states "The first indisputable evidence for domestic plants and animals in the Nile valley is not until the early fifth millennium BC in northern Egypt and a thousand years later further south, in both cases as part of strategies that still relied heavily on fishing, hunting, and the gathering of wild plants" and suggests that these subsistence changes were not due to farmers migrating from the Near East but was an indigenous development, with cereals either indigenous or obtained through exchange. Other scholars argue that the primary stimulus for agriculture and domesticated animals (as well as mud-brick architecture and other Neolithic cultural features) in Egypt was from the Middle East.
Northwestern Africa
The neolithization of Northwestern Africa was initiated by Iberian, Levantine (and perhaps Sicilian) migrants around 5500-5300 BC. During the Early Neolithic period, farming was introduced by Europeans and was subsequently adopted by the locals. During the Middle Neolithic period, an influx of ancestry from the Levant appeared in Northwestern Africa, coinciding with the arrival of pastoralism in the region. The earliest evidence for pottery, domestic cereals and animal husbandry is found in Morocco, specifically at Kaf el-Ghar.
Sub-Saharan Africa
The Pastoral Neolithic was a period in Africa's prehistory marking the beginning of food production on the continent following the Later Stone Age. In contrast to the Neolithic in other parts of the world, which saw the development of farming societies, the first form of African food production was mobile pastoralism, or ways of life centered on the herding and management of livestock. The term "Pastoral Neolithic" is used most often by archaeologists to describe early pastoralist periods in the Sahara, as well as in eastern Africa.
The Savanna Pastoral Neolithic or SPN (formerly known as the Stone Bowl Culture) is a collection of ancient societies that appeared in the Rift Valley of East Africa and surrounding areas during a time period known as the Pastoral Neolithic. They were South Cushitic speaking pastoralists, who tended to bury their dead in cairns whilst their toolkit was characterized by stone bowls, pestles, grindstones and earthenware pots. Through archaeology, historical linguistics and archaeogenetics, they conventionally have been identified with the area's first Afroasiatic-speaking settlers. Archaeological dating of livestock bones and burial cairns has also established the cultural complex as the earliest center of pastoralism and stone construction in the region.
Europe
In southeast Europe agrarian societies first appeared in the 7th millennium BC, attested by one of the earliest farming sites of Europe, discovered in Vashtëmi, southeastern Albania and dating back to 6500 BC. In most of Western Europe the Neolithic followed over the next two thousand years, but in some parts of Northwest Europe it began much later, lasting just under 3,000 years, from c. 4500 BC to c. 1700 BC. Recent advances in archaeogenetics have confirmed that the spread of agriculture from the Middle East to Europe was strongly correlated with the migration of early farmers from Anatolia about 9,000 years ago, and was not just a cultural exchange.
Anthropomorphic figurines have been found in the Balkans from 6000 BC, and in Central Europe by around 5800 BC (La Hoguette). Among the earliest cultural complexes of this area are the Sesklo culture in Thessaly, which later expanded in the Balkans giving rise to Starčevo-Körös (Cris), Linearbandkeramik, and Vinča. Through a combination of cultural diffusion and migration of peoples, the Neolithic traditions spread west and northwards to reach northwestern Europe by around 4500 BC. The Vinča culture may have created the earliest system of writing, the Vinča signs, though archaeologist Shan Winn believes they most likely represented pictograms and ideograms rather than a truly developed form of writing.
The Cucuteni-Trypillian culture built enormous settlements in Romania, Moldova and Ukraine from 5300 to 2300 BC. The megalithic temple complexes of Ġgantija on the Mediterranean island of Gozo (in the Maltese archipelago) and of Mnajdra (Malta) are notable for their gigantic Neolithic structures, the oldest of which date back to around 3600 BC. The Hypogeum of Ħal-Saflieni, Paola, Malta, is a subterranean structure excavated around 2500 BC; originally a sanctuary, it became a necropolis, the only prehistoric underground temple in the world, and shows a degree of artistry in stone sculpture unique in prehistory to the Maltese islands. After 2500 BC, these islands were depopulated for several decades until the arrival of a new influx of Bronze Age immigrants, a culture that cremated its dead and introduced smaller megalithic structures called dolmens to Malta. In most cases there are small chambers here, with the cover made of a large slab placed on upright stones. They are claimed to belong to a population different from that which built the previous megalithic temples. It is presumed the population arrived from Sicily because of the similarity of Maltese dolmens to some small constructions found there.
With some exceptions, population levels rose rapidly at the beginning of the Neolithic until they reached the carrying capacity. This was followed by a population crash of "enormous magnitude" after 5000 BC, with levels remaining low during the next 1,500 years. Populations began to rise after 3500 BC, with further dips and rises occurring between 3000 and 2500 BC but varying in date between regions. Around this time is the Neolithic decline, when populations collapsed across most of Europe, possibly caused by climatic conditions, plague, or mass migration.
South and East Asia
Settled life, encompassing the transition from foraging to farming and pastoralism, began in South Asia in the region of Balochistan, Pakistan, around 7000 BC. At the site of Mehrgarh, Balochistan, the domestication of wheat and barley can be documented, rapidly followed by that of goats, sheep, and cattle. In April 2006, it was announced in the scientific journal Nature that the oldest (and first Early Neolithic) evidence for the drilling of teeth in vivo (using bow drills and flint tips) was found in Mehrgarh.
In South India, the Neolithic began by 6500 BC and lasted until around 1400 BC, when the Megalithic transition period began. The South Indian Neolithic is characterized by ash mounds from 2500 BC in the Karnataka region, a tradition that later expanded into Tamil Nadu.
In East Asia, the earliest sites include the Nanzhuangtou culture around 9500–9000 BC, Pengtoushan culture around 7500–6100 BC, and Peiligang culture around 7000–5000 BC. The prehistoric Beifudi site near Yixian in Hebei Province, China, contains relics of a culture contemporaneous with the Cishan and Xinglongwa cultures of about 6000–5000 BC, Neolithic cultures east of the Taihang Mountains, filling in an archaeological gap between the two Northern Chinese cultures. A large area has been excavated there, and the collection of Neolithic findings at the site encompasses two phases. Between 3000 and 1900 BC, the Longshan culture existed in the middle and lower Yellow River valley areas of northern China. Towards the end of the 3rd millennium BC, the population decreased sharply in most of the region and many of the larger centres were abandoned, possibly due to environmental change linked to the end of the Holocene Climatic Optimum.
The 'Neolithic' (defined in this paragraph as using polished stone implements) remains a living tradition in small and extremely remote and inaccessible pockets of West Papua. Polished stone adzes and axes are used in the present day in areas where the availability of metal implements is limited. This is likely to cease altogether in the next few years as the older generation dies off and steel blades and chainsaws prevail.
In 2012, news was released about a new farming site discovered in Munam-ri, Goseong, Gangwon Province, South Korea, which may be the earliest farmland known to date in east Asia. "No remains of an agricultural field from the Neolithic period have been found in any East Asian country before, the institute said, adding that the discovery reveals that the history of agricultural cultivation at least began during the period on the Korean Peninsula". The farm was dated between 3600 and 3000 BC. Pottery, stone projectile points, and possible houses were also found. "In 2002, researchers discovered prehistoric earthenware, jade earrings, among other items in the area". The research team will perform accelerator mass spectrometry (AMS) dating to retrieve a more precise date for the site.
The Americas
In Mesoamerica, a similar set of events (i.e., crop domestication and sedentary lifestyles) occurred by around 4500 BC, but possibly as early as 11,000–10,000 BC in South America. These cultures are usually not referred to as belonging to the Neolithic; in the Americas different terms are used, such as Formative stage instead of mid-late Neolithic, Archaic Era instead of Early Neolithic, and Paleo-Indian for the preceding period.
The Formative stage is equivalent to the Neolithic Revolution period in Europe, Asia, and Africa. In the southwestern United States it occurred from 500 to 1200 AD, when there was a dramatic increase in population and the development of large villages supported by agriculture based on dryland farming of maize, and later of beans, squash, and domesticated turkeys. During this period the bow and arrow and ceramic pottery were also introduced. In later periods, cities of considerable size developed, and some metallurgy had emerged by 700 BC.
Australia
Australia, in contrast to New Guinea, has generally been held not to have had a Neolithic period, with a hunter-gatherer lifestyle continuing until the arrival of Europeans. This view can be challenged in terms of the definition of agriculture, but "Neolithic" remains a rarely used and not very useful concept in discussing Australian prehistory.
Cultural characteristics
Social organization
During most of the Neolithic age of Eurasia, people lived in small tribes composed of multiple bands or lineages. There is little scientific evidence of developed social stratification in most Neolithic societies; social stratification is more associated with the later Bronze Age. Although some late Eurasian Neolithic societies formed complex stratified chiefdoms or even states, generally states evolved in Eurasia only with the rise of metallurgy, and most Neolithic societies on the whole were relatively simple and egalitarian. Beyond Eurasia, however, states were formed during the local Neolithic in three areas, namely in the Preceramic Andes with the Caral-Supe Civilization, Formative Mesoamerica and Ancient Hawaiʻi. However, most Neolithic societies were noticeably more hierarchical than the Upper Paleolithic cultures that preceded them and hunter-gatherer cultures in general.
The domestication of large animals (c. 8000 BC) resulted in a dramatic increase in social inequality in most of the areas where it occurred, with New Guinea being a notable exception. Possession of livestock allowed competition between households and resulted in inherited inequalities of wealth. Neolithic pastoralists who controlled large herds gradually acquired more livestock, and this made economic inequalities more pronounced. However, evidence of social inequality is still disputed, as settlements such as Çatalhöyük reveal a lack of difference in the size of homes and burial sites, suggesting a more egalitarian society with no evidence of the concept of capital, although some homes do appear slightly larger or more elaborately decorated than others.
Families and households were still largely independent economically, and the household was probably the center of life. However, excavations in Central Europe have revealed that early Neolithic Linear Ceramic cultures ("Linearbandkeramik") were building large arrangements of circular ditches between 4800 and 4600 BC. These structures (and their later counterparts such as causewayed enclosures, burial mounds, and henges) required considerable time and labour to construct, which suggests that some influential individuals were able to organise and direct human labour – though non-hierarchical and voluntary work remain possibilities.
There is a large body of evidence for fortified settlements at Linearbandkeramik sites along the Rhine, as at least some villages were fortified for some time with a palisade and an outer ditch. Settlements with palisades and weapon-traumatized bones, such as those found at the Talheim Death Pit, have been discovered and demonstrate that "...systematic violence between groups" and warfare was probably much more common during the Neolithic than in the preceding Paleolithic period. This supplanted an earlier view of the Linear Pottery Culture as living a "peaceful, unfortified lifestyle".
Control of labour and inter-group conflict is characteristic of tribal groups with social rank that are headed by a charismatic individual – either a 'big man' or a proto-chief – functioning as a lineage-group head. Whether a non-hierarchical system of organization existed is debatable, and there is no evidence that explicitly suggests that Neolithic societies functioned under any dominating class or individual, as was the case in the chiefdoms of the European Early Bronze Age. Possible exceptions to this include Iraq during the Ubaid period and England beginning in the Early Neolithic (4100–3000 BC). Theories to explain the apparent implied egalitarianism of Neolithic (and Paleolithic) societies have arisen, notably the Marxist concept of primitive communism.
Shelter and sedentism
The shelter of early people changed dramatically from the Upper Paleolithic to the Neolithic era. In the Paleolithic, people did not normally live in permanent constructions. In the Neolithic, mud brick houses started appearing that were coated with plaster. The growth of agriculture made permanent houses far more common. At Çatalhöyük 9,000 years ago, doorways were made on the roof, with ladders positioned both on the inside and outside of the houses. Stilt-house settlements were common in the Alpine and Pianura Padana (Terramare) region. Remains have been found in the Ljubljana Marsh in Slovenia and at the Mondsee and Attersee lakes in Upper Austria, for example.
Agriculture
A significant and far-reaching shift in human subsistence and lifestyle was to be brought about in areas where crop farming and cultivation were first developed: the previous reliance on an essentially nomadic hunter-gatherer subsistence technique or pastoral transhumance was at first supplemented, and then increasingly replaced by, a reliance upon the foods produced from cultivated lands. These developments are also believed to have greatly encouraged the growth of settlements, since it may be supposed that the increased need to spend more time and labor in tending crop fields required more localized dwellings. This trend would continue into the Bronze Age, eventually giving rise to permanently settled farming towns, and later cities and states whose larger populations could be sustained by the increased productivity from cultivated lands.
The profound differences in human interactions and subsistence methods associated with the onset of early agricultural practices in the Neolithic have been called the Neolithic Revolution, a term coined in the 1920s by the Australian archaeologist Vere Gordon Childe.
One potential benefit of the development and increasing sophistication of farming technology was the possibility of producing surplus crop yields, in other words, food supplies in excess of the immediate needs of the community. Surpluses could be stored for later use, or possibly traded for other necessities or luxuries. Agricultural life afforded securities that nomadic life could not, and sedentary farming populations grew faster than nomadic ones.
However, early farmers were also adversely affected in times of famine, such as may be caused by drought or pests. In instances where agriculture had become the predominant way of life, the sensitivity to these shortages could be particularly acute, affecting agrarian populations to an extent that otherwise may not have been routinely experienced by prior hunter-gatherer communities. Nevertheless, agrarian communities generally proved successful, and their growth and the expansion of territory under cultivation continued.
Another significant change undergone by many of these newly agrarian communities was one of diet. Pre-agrarian diets varied by region, season, available local plant and animal resources, and the degree of pastoralism and hunting. The post-agrarian diet was restricted to a limited package of successfully cultivated cereal grains and plants and, to a variable extent, domesticated animals and animal products. Supplementation of the diet by hunting and gathering was precluded to varying degrees by the increase in population above the carrying capacity of the land and by the high concentration of a sedentary local population. In some cultures, there would have been a significant shift toward increased starch and plant protein. The relative nutritional benefits and drawbacks of these dietary changes and their overall impact on early societal development are still debated.
In addition, increased population density, decreased population mobility, increased continuous proximity to domesticated animals, and continuous occupation of comparatively population-dense sites would have altered sanitation needs and patterns of disease.
Lithic technology
The identifying characteristic of Neolithic technology is the use of polished or ground stone tools, in contrast to the flaked stone tools used during the Paleolithic era.
Neolithic people were skilled farmers, manufacturing a range of tools necessary for the tending, harvesting and processing of crops (such as sickle blades and grinding stones) and food production (e.g. pottery, bone implements). They were also skilled manufacturers of a range of other types of stone tools and ornaments, including projectile points, beads, and statuettes. But it was the polished stone axe, above all other tools, that allowed forest clearance on a large scale. Together with the adze, used for fashioning wood for shelters, structures and canoes, it enabled them to exploit the newly developed farmland.
Neolithic peoples in the Levant, Anatolia, Syria, northern Mesopotamia and Central Asia were also accomplished builders, utilizing mud-brick to construct houses and villages. At Çatalhöyük, houses were plastered and painted with elaborate scenes of humans and animals. In Europe, long houses built from wattle and daub were constructed. Elaborate tombs were built for the dead. These tombs are particularly numerous in Ireland, where many thousands still exist. Neolithic people in the British Isles built long barrows and chamber tombs for their dead, as well as causewayed camps, henges, flint mines and cursus monuments. It was also important to devise ways of preserving food for future months, such as fashioning relatively airtight containers and using substances like salt as preservatives.
The peoples of the Americas and the Pacific mostly retained the Neolithic level of tool technology until the time of European contact. Exceptions include copper hatchets and spearheads in the Great Lakes region.
Clothing
Most clothing appears to have been made of animal skins, as indicated by finds of large numbers of bone and antler pins that are ideal for fastening leather. Wool cloth and linen might have become available during the later Neolithic, as suggested by finds of perforated stones that (depending on size) may have served as spindle whorls or loom weights.
List of early settlements
The world's oldest known engineered roadway, the Post Track in England, dates from 3838 BC, and the world's oldest freestanding structure is the Neolithic temple of Ġgantija in Gozo, Malta.
List of cultures and sites
Note: Dates are very approximate, and are only given for a rough estimate; consult each culture for specific time periods.
Early Neolithic
Periodization: The Levant: 9500–8000 BC; Europe: 7000–4000 BC; Elsewhere: varies greatly, depending on region.
Pre-Pottery Neolithic A (Levant, 9500–8000 BC)
Nanzhuangtou (China, 8500 BC)
Franchthi Cave (Greece, 7000 BC)
Cishan culture (China, 6500–5000 BC)
Sesklo village (Greece, c. 6300 BC)
Starčevo-Criş culture (Starčevo-Körös-Criş culture) (Balkans, 5800–4500 BC)
Katundas Cavern (Albania, 6th millennium BC)
Dudeşti culture (Romania, 6th millennium BC)
Beixin culture (China, 5300–4100 BC)
Tamil Nadu culture (India, 3000–2800 BC)
Mentesh Tepe and Kamiltepe (Azerbaijan, 7000–3000 BC)
Middle Neolithic
Periodization: The Levant: 8000–6500 BC; Europe: 5500–3500 BC; Elsewhere: varies greatly, depending on region.
Later Neolithic
Periodization: 6500–4500 BC; Europe: 5000–3000 BC; Elsewhere: varies greatly, depending on region.
Pottery Neolithic (Fertile Crescent, 6400–4500 BC)
Halaf culture (Mesopotamia, 6100–5100 BC)
Halaf-Ubaid Transitional period (Mesopotamia, 5500–5000 BC)
Ubaid 1/2 (Mesopotamia, 5400–4500 BC)
Funnelbeaker culture (North/Eastern Europe, 4300–2800 BC)
Chalcolithic
Periodization: Near East: 6000–3500 BC; Europe: 5000–2000 BC; Elsewhere: varies greatly, depending on region. In the Americas, the Chalcolithic ended as late as the 19th century AD for some peoples.
Ubaid 3/4 (Mesopotamia, 4500–4000 BC)
early Uruk period (Mesopotamia, 4000–3800 BC)
middle Uruk period (Mesopotamia, 3800–3400 BC)
late Trypillian (Eastern Europe, 3000–2750 BC)
Gaudo Culture (Italy, 3150–2950 BC)
Corded Ware culture (North/Eastern Europe, 2900–2350 BC)
Beaker culture (Central/Western Europe, 2900–1800 BC)
Comparative chronology
See also
References
Citations
Sources
Further reading
External links
Romeo, Nick (February 2015). "Embracing Stone Age Couple Found in Greek Cave". National Geographic Society. "Rare double burials discovered at one of the largest Neolithic burial sites in Europe."
1860s neologisms
Historical eras
Holocene
Postpositivism

Postpositivism or postempiricism is a metatheoretical stance that critiques and amends positivism and has impacted theories and practices across philosophy, social sciences, and various models of scientific inquiry. While positivists emphasize independence between the researcher and the researched person (or object), postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed. Postpositivists pursue objectivity by recognizing the possible effects of biases. While positivists emphasize quantitative methods, postpositivists consider both quantitative and qualitative methods to be valid approaches.
Philosophy
Epistemology
Postpositivists believe that human knowledge is based not on a priori assessments from an objective individual, but rather upon human conjectures. As human knowledge is thus unavoidably conjectural, the assertion of these conjectures is warranted, or more specifically justified, by a set of warrants, which can be modified or withdrawn in the light of further investigation. However, postpositivism is not a form of relativism, and generally retains the idea of objective truth.
Ontology
Postpositivists believe that a reality exists, but, unlike positivists, they believe reality can be known only imperfectly. Postpositivists also draw from social constructionism in forming their understanding and definition of reality.
Axiology
While positivists believe that research is or can be value-free or value-neutral, postpositivists take the position that bias is undesired but inevitable, and therefore the investigator must work to detect and try to correct it. Postpositivists work to understand how their axiology (i.e. values and beliefs) may have influenced their research, including through their choice of measures, populations, questions, and definitions, as well as through their interpretation and analysis of their work.
History
Historians identify two types of positivism: classical positivism, an empirical tradition first described by Henri de Saint-Simon and Auguste Comte in the first half of the 19th century, and logical positivism, which is most strongly associated with the Vienna Circle, which met in Vienna, Austria, in the 1920s and 1930s. Postpositivism is the name D.C. Phillips gave to a group of critiques and amendments which apply to both forms of positivism.
One of the first thinkers to criticize logical positivism was Karl Popper. He advanced falsification in lieu of the logical positivist idea of verificationism. Falsificationism argues that it is impossible to verify that beliefs about universals or unobservables are true, though it is possible to reject false beliefs if they are phrased in a way amenable to falsification.
In 1965, Karl Popper and Thomas Kuhn held a debate, as Kuhn's theory did not incorporate this idea of falsification; the exchange has influenced contemporary research methodologies.
Thomas Kuhn is credited with having popularized and at least in part originated the post-empiricist philosophy of science. Kuhn's idea of paradigm shifts offers a broader critique of logical positivism, arguing that it is not simply individual theories but whole worldviews that must occasionally shift in response to evidence.
Postpositivism is not a rejection of the scientific method, but rather a reformation of positivism to meet these critiques. It reintroduces the basic assumptions of positivism: the possibility and desirability of objective truth, and the use of experimental methodology. The work of philosophers Nancy Cartwright and Ian Hacking is representative of these ideas. Postpositivism of this type is described in social science guides to research methods.
Structure of a postpositivist theory
Robert Dubin describes the basic components of a postpositivist theory as being composed of basic "units" or ideas and topics of interest, "laws of interactions" among the units, and a description of the "boundaries" for the theory. A postpositivist theory also includes "empirical indicators" to connect the theory to observable phenomena, and hypotheses that are testable using the scientific method.
According to Thomas Kuhn, a postpositivist theory can be assessed on the basis of whether it is "accurate", "consistent", "parsimonious", and "fruitful", and whether it "has broad scope".
Main publications
Karl Popper (1934) Logik der Forschung, rewritten in English as The Logic of Scientific Discovery (1959)
Thomas Kuhn (1962) The Structure of Scientific Revolutions
Karl Popper (1963) Conjectures and Refutations
Ian Hacking (1983) Representing and Intervening
Andrew Pickering (1984) Constructing Quarks
Peter Galison (1987) How Experiments End
Nancy Cartwright (1989) Nature's Capacities and Their Measurement
See also
Antipositivism
Philosophy of science
Scientism
Sociology of scientific knowledge
Notes
References
Alexander, J.C. (1995), Fin De Siecle Social Theory: Relativism, Reductionism and The Problem of Reason, London: Verso.
Phillips, D.C. & Nicholas C. Burbules (2000): Postpositivism and Educational Research. Lanham & Boulder: Rowman & Littlefield Publishers.
Zammito, John H. (2004): A Nice Derangement of Epistemes. Post-positivism in the study of Science from Quine to Latour. Chicago & London: The University of Chicago Press.
Popper, K. (1963), Conjectures and Refutations: The Growth of Scientific Knowledge, London: Routledge.
Moore, R. (2009), Towards the Sociology of Truth, London: Continuum.
External links
Positivism and Post-positivism
Positivism
Metatheory of science
Epistemological theories
Types of nationalism

Among scholars of nationalism, a number of types of nationalism have been presented. Nationalism may manifest itself as part of official state ideology or as a popular non-state movement and may be expressed along civic, ethnic, language, religious or ideological lines. These self-definitions of the nation are used to classify types of nationalism, but such categories are not mutually exclusive and many nationalist movements combine some or all of these elements to varying degrees. Nationalist movements can also be classified by other criteria, such as scale and location.
Some political theorists, like Umut Özkirimli, make the case that any distinction between forms of nationalism is false. In all forms of nationalism, the populations believe that they share some kind of common culture. Arguably, all types of nationalism merely refer to different ways academics throughout the years have tried to define nationalism. Similarly, Yael Tamir has argued that the differences between the oft-dichotomized ethnic and civic nationalism are blurred.
Ethnic nationalism
Ethnic nationalism, also known as ethnonationalism, is a form of nationalism wherein the nation and nationality are defined in terms of ethnicity, with emphasis on an ethnocentric (and in some cases an ethnocratic) approach to various political issues related to national affirmation of a particular ethnic group.
The central tenet of ethnic nationalists is that "nations are defined by a shared heritage, which usually includes a common language, a common faith, and a common ethnic ancestry". Those of other ethnicities may be classified as second-class citizens.
Ethnic nationalism was traditionally the determinant type of nationalism in Eastern Europe.
Expansionist nationalism
Expansionist nationalism is an aggressive radical form of nationalism or ethnic nationalism (ethnonationalism) that incorporates autonomous, heightened ethnic consciousness and patriotic sentiments with atavistic fears and hatreds focused on "other" or foreign peoples, framing a belief in expansion or recovery of formerly owned territories through militaristic means.
Romantic nationalism
Romantic nationalism, also known as organic nationalism and identity nationalism, is the form of ethnic nationalism in which the state derives political legitimacy as a natural ("organic") consequence and expression of the nation, race, or ethnicity. It reflected the ideals of Romanticism and was opposed to Enlightenment rationalism. Romantic nationalism emphasized a historical ethnic culture which meets the Romantic Ideal; folklore developed as a Romantic nationalist concept. The Brothers Grimm were inspired by Herder's writings to create an idealized collection of tales which they labeled as ethnically German. Historian Jules Michelet exemplifies French romantic-nationalist history.
Liberal ethnonationalism
Generally, "liberal nationalism" is used in a similar sense to "civic nationalism"; liberal nationalism is a kind of nationalism defended recently by political philosophers who believe that there can be a non-xenophobic form of nationalism compatible with liberal values of freedom, tolerance, equality, and individual rights. However, not all "liberal nationalism" is always "civic nationalism"; there are also liberals who advocate moderate nationalism that affirm ethnic identity, it is also called "liberal ethno-nationalism".
Xenophobic movements in long-established Western European states indeed often took a 'civic national' form, rejecting a given group's ability to assimilate with the nation due to its belonging to a cross-border community (Irish Catholics in Britain, Ashkenazic Jews in France). On the other hand, while liberal subnational separatist movements were commonly associated with ethnic nationalism, such nationalists as the Corsican Republic, United Irishmen, Breton Federalist League or Catalan Republican Party could combine a rejection of the unitary civic-national state with a belief in liberal universalism.
During Taiwan's KMT one-party dictatorship, the Kuomintang (KMT) defended Chinese state nationalism, in opposition to which liberal/progressives, including the Democratic Progressive Party (DPP), defended Taiwanese-based "liberal [ethnic] nationalism" (自由民族主義). South Korea prioritized South Korean-based "state nationalism" (국가주의) over Korean ethnic nationalism during the right-wing dictatorship, in response, political liberals and leftists defended "liberal [ethnic] nationalism" (자유민족주의), a moderate version of Korean ethnic nationalism. Even today, major left-liberal and progressive nationalists in Taiwan and South Korea advocate anti-imperialistic minzu-based nationalism (民族主義) and are critical of right-wing state nationalism (國家主義).
In 19th-century Europe, liberal movements often affirmed ethnic nationalism in the modern sense as part of their effort to topple classical conservatism; István Széchenyi was a representative liberal ethnic nationalist.
Left-wing ethnonationalism
While left-wing nationalism generally has a weaker ethnic nationalist component than right-wing nationalism, some national liberation movements have also combined with ethnic nationalism; the "national liberation" (民族解放, minzu jiefang) movements of Northeast Asia and Vietnam are representative.
Civic nationalism
Civic nationalism, sometimes known as democratic nationalism and liberal nationalism, is a political identity built around shared citizenship within the state, with emphasis on political institutions and liberal principles, which its citizens pledge to uphold. It aims to adhere to traditional liberal values of freedom, tolerance, equality, and individual rights, and is not based on ethnocentrism. Civic nationalists often defend the value of national identity by arguing that individuals need it as a partial shared aspect of their identity to lead meaningful, autonomous lives and that democratic polities need a national identity to function properly.
Membership in the civic nation is open to every person by citizenship, regardless of culture or ethnicity; those who share these values can be considered members of the nation. In theory, a civic nation or state does not aim to promote one culture over another. German philosopher Jürgen Habermas has argued that immigrants to a liberal-democratic state need not assimilate into the host culture but only accept the principles of the country's constitution (constitutional patriotism).
Donald Ipperciel argues civic nationalism historically was a determining factor in the development of the modern constitutional and democratic state. The 20th-century revival of civic nationalism played a key role in the ideological war against racism. However, as the Turkish political scientist Umut Özkirimli states, "civic" nations can be as intolerant and cruel as the so-called "ethnic" nations, citing French Jacobin techniques of persecution that were used by 20th-century fascists.
State nationalism
State nationalism, state-based nationalism, state-led nationalism, or "statism" equates 'state identity' with 'national identity' and values state authority. State nationalism is classified as civic nationalism by the dichotomy that divides nationalism into "civic" and "ethnic", but it is not necessarily liberal and has something to do with authoritarian politics. Soviet nationalism, Shōwa Statism, Kemalism, Francoism, and Communist-led Chinese state nationalism are classified as state nationalism.
Ideological nationalism
Revolutionary nationalism
Revolutionary nationalism is a broad label that has been applied to many different types of nationalist political movements that wish to achieve their goals through a revolution against the established order. Individuals and organizations described as being revolutionary nationalist include some political currents within the French Revolution, Irish republicans engaged in armed struggle against the British crown, the Can Vuong movement against French rule in 19th century Vietnam, the Indian independence movement in the 20th century, some participants in the Mexican Revolution, Benito Mussolini and the Italian Fascists, the Autonomous Government of Khorasan, Augusto Cesar Sandino, the Revolutionary Nationalist Movement in Bolivia, black nationalism in the United States, and some African independence movements.
Liberation nationalism
Many nationalist movements in the world are dedicated to national liberation in the view that their nations are being persecuted by other nations and thus need to exercise self-determination by liberating themselves from the accused persecutors. Anti-revisionist Marxism–Leninism is closely tied with this ideology, and practical examples include Stalin's early work Marxism and the National Question and his Socialism in One Country edict, which declares that nationalism can be used in an internationalist context, i.e. fighting for national liberation without racial or religious divisions.
Left-wing nationalism
Left-wing nationalism, also occasionally known as socialist nationalism, refers to any political movement that combines left-wing politics or socialism with nationalism. Notable examples include Fidel Castro's 26th of July Movement that launched the Cuban Revolution that ousted dictator Fulgencio Batista in 1959, Ireland's Sinn Féin, Labor Zionism in Israel and the African National Congress in South Africa.
Schools of anarchism which acknowledge nationalism
Anarchists who see value in nationalism typically argue that a nation is first and foremost a people; that the state is a parasite upon the nation and should not be confused with it; and that since in reality states rarely coincide with national entities, the ideal of the nation state is actually little more than a myth. Within the European Union, for instance, they argue there are over 500 ethnic nations within the 25 member states, and even more in Asia, Africa, and the Americas. Moving from this position, they argue that the achievement of meaningful self-determination for all of the world's nations requires an anarchist political system based on local control, free federation, and mutual aid. There has been a long history of anarchist involvement with left-nationalism all over the world. Contemporary fusions of anarchism with anti-state left-nationalism include some strains of Black anarchism and indigenism.
In early-to-mid-19th-century Europe, the ideas of nationalism, socialism, and liberalism were closely intertwined. Revolutionaries and radicals like Giuseppe Mazzini aligned with all three in about equal measure. The early pioneers of anarchism participated in the spirit of their times: they had much in common with both liberals and socialists, and they shared much of the outlook of early nationalism as well. Thus Mikhail Bakunin had a long career as a pan-Slavic nationalist before adopting anarchism. He also agitated for a United States of Europe (a contemporary nationalist vision originated by Mazzini). In 1880–1881, the Boston-based Irish nationalist W. G. H. Smart wrote articles for a magazine called The Anarchist. Similarly, anarchists in China during the early part of the 20th century were very much involved in the left-wing of the nationalist movement while actively opposing racist elements of the anti-Manchu wing of that movement.
Pan-nationalism
Pan-nationalism is usually an ethnic and cultural nationalism, but the 'nation' is itself a cluster of related ethnic groups and cultures, such as Slavic peoples. Occasionally pan-nationalism is applied to mono-ethnic nationalism, when the national group is dispersed over a wide area and several states – as in Pan-Germanism.
Trans-nationalism
Transnationalism places nations within an overarching concept, such as global citizenry, emphasizing shared overarching institutions, for example those of a continental union or of a globalising society.
Religious nationalism
Religious nationalism is the relationship of nationalism to a particular religious belief, church, Hindu temple or affiliation. This relationship can be broken down into two aspects: the politicization of religion and the converse influence of religion on politics. In the former aspect, a shared religion can be seen to contribute to a sense of national unity among the citizens of the nation. Another political aspect of religion is the support of a national identity, similar to a shared ethnicity, language or culture. The influence of religion on politics is more ideological, where current interpretations of religious ideas inspire political activism and action; for example, laws are passed to foster stricter religious adherence. Hindu nationalism is common in many states and union territories in India which joined the union of India solely on the basis of religion and post-colonial nationalism.
Post-colonial nationalism
Since the process of decolonisation that occurred after World War II, there has been a rise of Third World nationalisms. Third world nationalisms occur in those nations that have been colonized and exploited. The nationalisms of these nations were forged in a furnace that required resistance to colonial domination to survive. As such, resistance is part and parcel of such nationalisms and their very existence is a form of resistance to imperialist intrusions. Third World nationalism attempts to ensure that the identities of Third World peoples are authored primarily by themselves, not colonial powers.
Examples of third world nationalist ideologies are African nationalism and Arab nationalism. Other important nationalist movements in the developing world have included the ideas of the Mexican Revolution and Haitian Revolution. Third world nationalist ideas have been particularly influential among governments elected in South America.
Multi-ethnic nationalism
Multi-ethnic nationalism, as in a multinational state.
Chinese nationalism is a representative multi-ethnic nationalism. The concept of "Zhonghua minzu" ("Chinese ethnicity") includes the many indigenous minorities who already live on Chinese territory, but does not include immigrants who are not part of the traditional Chinese ethnic groups (e.g., Japanese Chinese, European Chinese, African Chinese). Therefore, Chinese nationalism is a multi-ethnic nationalism, but it is distinct from civic nationalism.
Taiwanese nationalism and India's composite nationalism are also considered multi-ethnic nationalisms.
Multi-ethnic nationalism may resemble civic nationalism. However, multi-ethnic nationalism tends to embrace multi-ethnic elements without embracing the core elements of civic nationalism.
Diaspora nationalism
Diaspora nationalism, or as Benedict Anderson terms it, "long-distance nationalism", generally refers to nationalist feeling among a diaspora such as the Irish in the United States, Jews around the world after the expulsion from Jerusalem (586 BCE), the Lebanese in the Americas and Africa, or Armenians in Europe and the United States. Anderson states that this sort of nationalism acts as a "phantom bedrock" for people who want to experience a national connection, but who do not actually want to leave their diaspora community. The essential difference between pan-nationalism and diaspora nationalism is that members of a diaspora, by definition, are no longer resident in their national or ethnic homeland. In some instances, 'Diaspora' refers to a dispersal of a people from a (real or imagined) 'homeland' due to a cataclysmic disruption, such as war, famine, etc. New networks – new 'roots' – form along the 'routes' travelled by diasporic people, who are connected by a shared desire to return 'home'. In reality, the desire to return may be eschatological (i.e. end times orientation), or may not occur in any foreseeable future, but the longing for the lost homeland and the sense of difference from circumambient cultures in which Diasporic people live becomes an identity unto itself.
See also
Anti-nationalism
Integral nationalism
Postnationalism
Jingoism
Notes | 0.776608 | 0.993365 | 0.771455 |
Alternate history

Alternate history (also referred to as alternative history, allohistory, althist, or simply AH) is a subgenre of speculative fiction in which one or more historical events have occurred but are resolved differently than in actual history. As conjecture based upon historical fact, alternate history stories propose What if? scenarios about crucial events in human history, and present outcomes very different from the historical record. Some alternate histories are considered a subgenre of science fiction, or historical fiction.
Since the 1950s, as a subgenre of science fiction, some alternative history stories have featured the tropes of time travel between histories, the psychic awareness of the existence of an alternative universe by the inhabitants of a given universe, and time travel that divides history into various timestreams.
Definition
Often described as a subgenre of science fiction, alternative history is a genre of fiction wherein the author speculates upon how the course of history might have been altered if a particular historical event had an outcome different from the real-life outcome. An alternate history requires three conditions: (i) a point of divergence from the historical record, before the time in which the author is writing; (ii) a change that would alter known history; and (iii) an examination of the ramifications of that alteration to history. Occasionally, some types of genre fiction are misidentified as alternative history, specifically science fiction stories set in a time that was the future for the writer but is now the past for the reader, such as the novels 2001: A Space Odyssey (1968) by Arthur C. Clarke and 1984 (1949) by George Orwell and the movie 2012 (2009), because the authors did not alter the real history of the past when they wrote the stories.
Similar to the genre of alternative history, there is also the genre of secret history – which can be either fictional or non-fictional – documenting events that may have occurred in history but had no effect upon the recorded historical outcome. Alternative history is also thematically related to, but distinct from, counterfactual history, which is a form of historiography that explores historical events in an extrapolated timeline in which key historical events either did not occur or had an outcome different from the historical record, in order to understand what did happen.
History of literature
Antiquity and medieval
The earliest example of alternate (or counterfactual) history is found in Livy's Ab Urbe Condita Libri (book IX, sections 17–19). Livy contemplated an alternative 4th century BC in which Alexander the Great had survived to attack Europe as he had planned; asking, "What would have been the results for Rome if she had been engaged in a war with Alexander?" Livy concluded that the Romans would likely have defeated Alexander. An even earlier possibility is Herodotus's Histories, which contains speculative material.
Another example of counterfactual history was posited by cardinal and Doctor of the Church Peter Damian in the 11th century. In his famous work De Divina Omnipotentia, a long letter in which he discusses God's omnipotence, he treats questions related to the limits of divine power, including the question of whether God can change the past, for example, bringing about that Rome was never founded:

I see I must respond finally to what many people, on the basis of your holiness's [own] judgment, raise as an objection on the topic of this dispute. For they say: If, as you assert, God is omnipotent in all things, can he manage this, that things that have been made were not made? He can certainly destroy all things that have been made, so that they do not exist now. But it cannot be seen how he can bring it about that things that have been made were not made. To be sure, it can come about that from now on and hereafter Rome does not exist; for it can be destroyed. But no opinion can grasp how it can come about that it was not founded long ago...

One early work of fiction detailing an alternate history is Joanot Martorell's 1490 epic romance Tirant lo Blanch, which was written when the fall of Constantinople to the Turks was still a recent and traumatic memory for Christian Europe. It tells the story of the knight Tirant the White from Brittany who travels to the embattled remnants of the Byzantine Empire. He becomes a Megaduke and commander of its armies and manages to fight off the invading armies of the Ottoman sultan. He saves the city from Islamic conquest, and even chases the Turks deeper into lands they had previously conquered.
19th century
One of the earliest works of alternate history published in large quantities for a wide audience may be Louis Geoffroy's Histoire de la Monarchie universelle : Napoléon et la conquête du monde (1812–1832) (History of the Universal Monarchy: Napoleon and the Conquest of the World) (1836), which imagines Napoleon's First French Empire emerging victorious in the French invasion of Russia in 1812 and in an invasion of England in 1814, later unifying the world under Bonaparte's rule.
The Book of Mormon (published 1830) is described as an "alternative history" by Richard Lyman Bushman, a biographer of Joseph Smith. Smith claimed to have translated the document from golden plates, which told the story of a Jewish group who migrated from Israel to the Americas and inhabited the region from about 600 B.C. to 400 A.D., becoming the ancestors of Native Americans. In the 2005 biography Joseph Smith: Rough Stone Rolling, Bushman wrote that the Book of Mormon "turned American history upside down [and] works on the premise that a history—a book—can reconstitute a nation. It assumes that by giving a nation an alternative history, alternative values can be made to grow."
In the English language, the first known complete alternate history may be Nathaniel Hawthorne's short story "P.'s Correspondence", published in 1845. It recounts the tale of a man who is considered "a madman" due to his perceptions of a different 1845, a reality in which long-dead famous people, such as the poets Robert Burns, Lord Byron, Percy Bysshe Shelley and John Keats, the actor Edmund Kean, the British politician George Canning, and Napoleon Bonaparte, are still alive.
The first novel-length alternate history in English would seem to be Castello Holford's Aristopia (1895). While not as nationalistic as Louis Geoffroy's Napoléon et la conquête du monde, 1812–1832, Aristopia is another attempt to portray a Utopian society. In Aristopia, the earliest settlers in Virginia discover a reef made of solid gold and are able to build a Utopian society in North America.
Early 20th century and the era of the pulps
In 1905, H. G. Wells published A Modern Utopia. As explicitly noted in the book itself, Wells's main aim in writing it was to set out his social and political ideas, the plot serving mainly as a vehicle to expound them. This book introduced the idea of a person being transported from a point in our familiar world to the precise geographical equivalent point in an alternate world in which history had gone differently. The protagonists undergo various adventures in the alternate world, and then are finally transported back to our world, again to the precise geographical equivalent point. Since then, that has become a staple of the alternate history genre.
A number of alternate history stories and novels appeared in the late 19th and early 20th centuries (see, for example, Joseph Edgar Chamberlin's The Ifs of History [1907] and Charles Petrie's If: A Jacobite Fantasy [1926]). In 1931, British historian Sir John Squire collected a series of essays from some of the leading historians of the period for his anthology If It Had Happened Otherwise. In that work, scholars from major universities, as well as important non-academic authors, turned their attention to such questions as "If the Moors in Spain Had Won" and "If Louis XVI Had Had an Atom of Firmness". The essays range from serious scholarly efforts to Hendrik Willem van Loon's fanciful and satiric portrayal of an independent 20th-century New Amsterdam, a Dutch city-state on the island of Manhattan. Among the authors included were Hilaire Belloc, André Maurois, and Winston Churchill.
One of the entries in Squire's volume was Churchill's "If Lee Had Not Won the Battle of Gettysburg", written from the viewpoint of a historian in a world in which the Confederacy had won the American Civil War. The entry considers what would have happened if the North had been victorious (in other words, a character from an alternate world imagines a world more like the real one we live in, although it is not identical in every detail). Speculative work that narrates from the point of view of an alternate history is variously known as "recursive alternate history", a "double-blind what-if", or an "alternate-alternate history". Churchill's essay was one of the influences behind Ward Moore's alternate history novel Bring the Jubilee in which General Robert E. Lee won the Battle of Gettysburg and paved the way for the eventual victory of the Confederacy in the American Civil War (named the "War of Southron Independence" in this timeline). The protagonist, the autodidact Hodgins Backmaker, travels back to the aforementioned battle and inadvertently changes history, which results in the emergence of our own timeline and the consequent victory of the Union instead.
The American humorist author James Thurber parodied alternate history stories about the American Civil War in his 1930 story "If Grant Had Been Drinking at Appomattox", which he accompanied with this very brief introduction: "Scribner's magazine is publishing a series of three articles: 'If Booth Had Missed Lincoln', 'If Lee Had Won the Battle of Gettysburg', and 'If Napoleon Had Escaped to America'. This is the fourth".
Another example of alternate history from this period (and arguably the first that explicitly posited cross-time travel from one universe to another as anything more than a visionary experience) is H. G. Wells's Men Like Gods (1923), in which the London-based journalist Mr. Barnstable, along with two cars and their passengers, is mysteriously teleported into "another world", which the "Earthlings" call Utopia. Being far more advanced than Earth, Utopia is some 3000 years ahead of humanity in its development. Wells describes a multiverse of alternative worlds, complete with the paratime travel machines that would later become popular with American pulp writers. However, since his hero experiences only a single alternate world, the story is not very different from conventional alternate history.
In the 1930s, alternate history moved into a new arena. The December 1933 issue of Astounding published Nat Schachner's "Ancestral Voices", which was quickly followed by Murray Leinster's "Sidewise in Time" (1934). While earlier alternate histories examined reasonably-straightforward divergences, Leinster attempted something completely different. In his "World gone mad", pieces of Earth traded places with their analogs from different timelines. The story follows Professor Minott and his students from a fictitious Robinson College as they wander through analogues of worlds that followed a different history. "Sidewise in Time" has been described as "the point at which the alternate history narrative first enters science fiction as a plot device" and is the story for which the Sidewise Award for Alternate History is named.
A somewhat similar approach was taken by Robert A. Heinlein in his 1941 novelette Elsewhen in which a professor trains his mind to move his body across timelines. He then hypnotizes his students so that they can explore more of them. Eventually, each settles into the reality that is most suitable for him or her. Some of the worlds they visit are mundane, some are very odd, and others follow science fiction or fantasy conventions.
World War II produced alternate history for propaganda: both British and American authors wrote works depicting Nazi invasions of their respective countries as cautionary tales.
Time travel to create historical divergences
The period around World War II also saw the publication of the time travel novel Lest Darkness Fall by L. Sprague de Camp, in which an American academic travels to Ostrogothic Italy on the eve of the Byzantine invasion. De Camp's time traveler, Martin Padway, is depicted as making permanent historical changes and implicitly forming a new time branch, thereby making the work an alternate history.
In William Tenn's short story Brooklyn Project (1948), a tyrannical US Government brushes aside the warnings of scientists about the dangers of time travel and goes on with a planned experiment - with the result that minor changes to the prehistoric past cause Humanity to never have existed, its place taken by tentacled underwater intelligent creatures - who also have a tyrannical government which also insists on experimenting with time-travel.
Time travel as the cause of a point of divergence (POD), which can denote either the bifurcation of a historical timeline or a simple replacement of the future that existed before the time-travelling event, has continued to be a popular theme. In Ward Moore's Bring the Jubilee (1953), the protagonist lives in an alternate history in which the Confederacy has won the American Civil War. He travels backward through time and brings about a Union victory at the Battle of Gettysburg.
When a story's assumptions about the nature of time travel lead to the complete replacement of the visited time's future, rather than just the creation of an additional time line, the device of a "time patrol" is often used where guardians move through time to preserve the "correct" history.
A more recent example is Making History by Stephen Fry, in which a time machine is used to alter history so that Adolf Hitler was never born. That ironically results in a more competent leader of Nazi Germany, leading to the country's ascendancy and longevity in the altered timeline.
Quantum theory of many worlds
While many justifications for alternate histories involve a multiverse, the "many-worlds" theory would naturally involve many worlds, in fact a continually exploding array of universes. In quantum theory, new worlds would proliferate with every quantum event, and even if the writer uses human decisions, every decision that could be made differently would result in a different timeline. A writer's fictional multiverse may, in fact, preclude some decisions as humanly impossible, as when, in Night Watch, Terry Pratchett depicts a character informing Vimes that while anything that can happen, has happened, nevertheless there is no history whatsoever in which Vimes has ever murdered his wife. When the writer explicitly maintains that all possible decisions are made in all possible ways, one possible conclusion is that the characters were neither brave, nor clever, nor skilled, but simply lucky enough to happen on the universe in which they did not choose the cowardly route, take the stupid action, fumble the crucial activity, etc. Few writers focus on this idea, although it has been explored in stories such as Larry Niven's "All the Myriad Ways", where the reality of all possible universes leads to an epidemic of suicide and crime because people conclude their choices have no moral import.
In any case, even if it is true that every possible outcome occurs in some world, it can still be argued that traits such as bravery and intelligence might affect the relative frequency of worlds in which better or worse outcomes occur (even if the total number of worlds with each type of outcome is infinite, it is still possible to assign a different measure to different infinite sets). The physicist David Deutsch, a strong advocate of the many-worlds interpretation of quantum mechanics, has argued along these lines, saying that "By making good choices, doing the right thing, we thicken the stack of universes in which versions of us live reasonable lives. When you succeed, all the copies of you who made the same decision succeed too. What you do for the better increases the portion of the multiverse where good things happen." This view is perhaps somewhat too abstract to be explored directly in science fiction stories, but a few writers have tried, such as Greg Egan in his short story "The Infinite Assassin", where an agent is trying to contain reality-scrambling "whirlpools" that form around users of a certain drug; the agent constantly tries to maximize the consistency of behavior among his alternate selves, attempting to compensate for events and thoughts he experiences that he guesses are of low measure relative to those experienced by most of his other selves.
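The measure-theoretic point can be made concrete with a minimal worked example (an illustration added here for clarity; it is not drawn from Deutsch's own writing). Take the unit interval as an index of "worlds" and the standard Lebesgue measure \mu as the weighting:

\[
A = [0, 0.3], \qquad B = (0.3, 1], \qquad \mu(A) = 0.3, \qquad \mu(B) = 0.7 .
\]

Both A and B contain uncountably many points, yet \mu weights them differently: "how much measure" is not the same question as "how many worlds", which is exactly what allows good choices to "thicken" one class of outcomes even when every class is infinite.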
Many writers—perhaps the majority—avoid the discussion entirely. In one novel of this type, H. Beam Piper's Lord Kalvan of Otherwhen, a Pennsylvania State Police officer, who knows how to make gunpowder, is transported from our world to an alternate universe where the recipe for gunpowder is a tightly held secret and saves a country that is about to be conquered by its neighbors. The paratime patrol members are warned against going into the timelines immediately surrounding it, where the country will be overrun, but the book never depicts the slaughter of the innocent thus entailed, remaining solely in the timeline where the country is saved.
The cross-time theme was further developed in the 1960s by Keith Laumer in the first three volumes of his Imperium sequence, which would be completed in Zone Yellow (1990). Piper's politically more sophisticated variant was adopted and adapted by Michael Kurland and Jack Chalker in the 1980s; Chalker's G.O.D. Inc trilogy (1987–89), featuring paratime detectives Sam and Brandy Horowitz, marks the first attempt at merging the paratime thriller with the police procedural. Kurland's Perchance (1988), the first volume of the never-completed "Chronicles of Elsewhen", presents a multiverse of secretive cross-time societies that utilize a variety of means for cross-time travel, ranging from high-tech capsules to mutant powers. Harry Turtledove has launched the Crosstime Traffic series for teenagers featuring a variant of H. Beam Piper's paratime trading empire.
Rival paratime worlds
The concept of a cross-time version of a world war, involving rival paratime empires, was developed in Fritz Leiber's Change War series, starting with the Hugo Award winning The Big Time (1958); followed by Richard C. Meredith's Timeliner trilogy in the 1970s, Michael McCollum's A Greater Infinity (1982) and John Barnes' Timeline Wars trilogy in the 1990s.
Such "paratime" stories may include speculation that the laws of nature can vary from one universe to the next, providing a science fictional explanation—or veneer—for what is normally fantasy. Aaron Allston's Doc Sidhe and Sidhe Devil take place between our world, the "grim world" and an alternate "fair world" where the Sidhe retreated to. Although technology is clearly present in both worlds, and the "fair world" parallels our history, about fifty years out of step, there is functional magic in the fair world. Even with such explanation, the more explicitly the alternate world resembles a normal fantasy world, the more likely the story is to be labelled fantasy, as in Poul Anderson's "House Rule" and "Loser's Night". In both science fiction and fantasy, whether a given parallel universe is an alternate history may not be clear. The writer might allude to a POD only to explain the existence and make no use of the concept, or may present the universe without explanation of its existence.
Major writers explore alternate histories
Isaac Asimov's short story "What If—" (1952) is about a couple who can explore alternate realities by means of a television-like device. This idea can also be found in Asimov's novel The End of Eternity (1955), in which the "Eternals" can change the realities of the world, without people being aware of it. Poul Anderson's Time Patrol stories feature conflicts between forces intent on changing history and the Patrol, who work to preserve it. One story, "Delenda Est", describes a world in which Carthage triumphed over the Roman Republic. The Big Time, by Fritz Leiber, describes a Change War ranging across all of history.
Keith Laumer's Worlds of the Imperium is one of the earliest alternate history novels; it was published by Fantastic Stories of the Imagination in 1961, in magazine form, and reprinted by Ace Books in 1962 as one half of an Ace Double. Besides our world, Laumer describes a world ruled by an Imperial aristocracy formed by the merger of European empires, in which the American Revolution never happened, and a third world in post-war chaos ruled by the protagonist's doppelganger.
Philip K. Dick's novel The Man in the High Castle (1962) is an alternate history in which Nazi Germany and Imperial Japan won World War II. It contains an example of "alternate-alternate" history, in that one of its characters has authored a book depicting a reality in which the Allies won the war, itself divergent from real-world history in several aspects. The characters live within a divided United States, in which the Empire of Japan governs the Pacific states as a puppet regime and Nazi Germany holds the East Coast and parts of the Midwest, with the remnants of the old United States' government surviving as the Neutral Zone, a buffer state between the two superpowers. The book has inspired an Amazon series of the same name.
Vladimir Nabokov's novel, Ada or Ardor: A Family Chronicle (1969), is a story of incest that takes place within an alternate North America settled in part by Czarist Russia and that borrows from Dick's idea of "alternate-alternate" history (the world of Nabokov's hero is wracked by rumors of a "counter-earth" that apparently is ours). Some critics believe that the references to a counter-earth suggest that the world portrayed in Ada is a delusion in the mind of the hero (another favorite theme of Dick's novels). Strikingly, the characters in Ada seem to acknowledge their own world as the copy or negative version, calling it "Anti-Terra", while its mythical twin is the real "Terra". Like history, science has followed a divergent path on Anti-Terra: it boasts all the same technology as our world, but all based on water instead of electricity; e.g., when a character in Ada makes a long-distance call, all the toilets in the house flush at once to provide hydraulic power.
Guido Morselli described the defeat of Italy (and subsequently France) in World War I in his novel Past Conditional (1975), wherein the static Alpine front line dividing Italy from Austria during that war collapses when the Germans and the Austrians forsake trench warfare and adopt blitzkrieg twenty years in advance.
Kingsley Amis set his novel The Alteration (1976) in a 20th century in which the major events of the Reformation never took place and Protestantism is limited to the breakaway Republic of New England. Martin Luther was reconciled to the Roman Catholic Church and later became Pope Germanian I.
In Nick Hancock and Chris England's 1997 book What Didn't Happen Next: An Alternative History of Football it is suggested that, had Gordon Banks been fit to play in the 1970 FIFA World Cup quarter-final, there would have been no Thatcherism and the post-war consensus would have continued indefinitely.
Kim Stanley Robinson's novel The Years of Rice and Salt (2002) starts at a point of divergence in which Timur turns his army away from Europe and the Black Death has killed 99% of Europe's population, instead of only a third. Robinson explores world history from that point in AD 1405 (807 AH) to about AD 2045 (1467 AH). Rather than following the great man theory of history, focusing on leaders, wars, and major events, Robinson writes more about social history, similar to the Annales School of history theory and Marxist historiography, focusing on the lives of ordinary people living in their time and place.
Philip Roth's novel, The Plot Against America (2004), looks at an America where Franklin D. Roosevelt is defeated in 1940 in his bid for a third term as President of the United States, and Charles Lindbergh is elected, leading to a US that features increasing fascism and anti-Semitism.
Michael Chabon, occasionally an author of speculative fiction, contributed to the genre with his novel The Yiddish Policemen's Union (2007), which explores a world in which the State of Israel was destroyed in its infancy and many of the world's Jews instead live in a small strip of Alaska set aside by the US government for Jewish settlement. The story follows a Jewish detective solving a murder case in the Yiddish-speaking semi-autonomous city-state of Sitka. Stylistically, Chabon borrows heavily from the noir and detective fiction genres, while exploring social issues related to Jewish history and culture. Apart from the alternate history of the Jews and Israel, Chabon also plays with other common tropes of alternate history fiction: in the book, Germany's defeat in World War II is even more crushing than in reality, as it is hit with a nuclear bomb instead of merely losing a conventional ground war (subverting the common "what if Germany won WWII?" trope).
Contemporary alternate history in popular literature
The late 1980s and the 1990s saw a boom in popular-fiction versions of alternate history, fueled by the emergence of the prolific alternate history author Harry Turtledove, as well as the development of the steampunk genre and two series of anthologies—the What Might Have Been series edited by Gregory Benford and the Alternate ... series edited by Mike Resnick. This period also saw alternate history works by S. M. Stirling, Kim Stanley Robinson, Harry Harrison, Howard Waldrop, Peter Tieryas, and others.
In 1986, a sixteen-part epic comic book series called Captain Confederacy began examining a world where the Confederate States of America won the American Civil War. In the series, the Captain and other superheroes take part in staged government propaganda events featuring their feats.
Since the late 1990s, Harry Turtledove has been the most prolific practitioner of alternate history and has been given the title "Master of Alternate History" by some. His books include those of Timeline 191 (a.k.a. Southern Victory, also known as TL-191), in which, while the Confederate States of America won the American Civil War, the Union and Imperial Germany defeat the Entente Powers in the two "Great Wars" of the 1910s and 1940s (with a Nazi-esque Confederate government attempting to exterminate its black population), and the Worldwar series, in which aliens invade Earth during World War II. Other stories by Turtledove include A Different Flesh, in which the Americas were not populated from Asia during the last ice age; In the Presence of Mine Enemies, in which the Nazis won World War II; and Ruled Britannia, in which the Spanish Armada succeeded in conquering England in the Elizabethan era, with William Shakespeare being given the task of writing the play that will motivate the Britons to rise up against their Spanish conquerors. He also co-authored a book with actor Richard Dreyfuss, The Two Georges, in which the United Kingdom retained the American colonies, with George Washington and King George III making peace, and wrote a two-volume series in which the Japanese not only bombed Pearl Harbor but also invaded and occupied the Hawaiian Islands.
Perhaps the most incessantly explored theme in popular alternate history focuses on the aftermath of an Axis victory in World War II. In some versions, the Nazis and/or Axis Powers conquer the entire world; in others, they conquer most of the world but a "Fortress America" exists under siege; in still others, there is a Nazi/Japanese Cold War comparable to the US/Soviet equivalent in 'our' timeline. Fatherland (1992), by Robert Harris, is set in Europe following the Nazi victory. The novel Dominion by C.J. Sansom (2012) is similar in concept but is set in England, with Churchill the leader of an anti-German Resistance and other historic persons in various fictional roles. In the Mecha Samurai Empire series (2016), Peter Tieryas focuses on the Asian-American side of the alternate history, exploring an America ruled by the Japanese Empire while integrating elements of Asian pop culture like mechas and video games.
Several writers have posited points of departure for such a world but then have injected time splitters from the future; for instance, James P. Hogan in The Proteus Operation. Norman Spinrad wrote The Iron Dream (1972), which purports to be a science fiction novel written by Adolf Hitler after fleeing from Europe to North America in the 1920s.
In Jo Walton's "Small Change" series, the United Kingdom made peace with Hitler before the involvement of the United States in World War II, and slowly collapses due to severe economic depression. Former House Speaker Newt Gingrich and William R. Forstchen have written a novel, 1945, in which the US defeated Japan but not Germany in World War II, resulting in a Cold War with Germany rather than the Soviet Union. Gingrich and Forstchen neglected to write the promised sequel; instead, they wrote a trilogy about the American Civil War, starting with Gettysburg: A Novel of the Civil War, in which the Confederates win a victory at the Battle of Gettysburg - however, after Lincoln responds by bringing Grant and his forces to the eastern theater, the Army of Northern Virginia is soon trapped and destroyed in Maryland, and the war ends within weeks.
While World War II has been a common point of divergence in alternate history literature, several works have been based on other points of divergence. For example, Martin Cruz Smith, in his first novel, The Indians Won (1970), posited an independent American Indian nation following the defeat of Custer. Beginning with The Probability Broach in 1980, L. Neil Smith wrote several novels postulating the disintegration of the US federal government after Albert Gallatin joins the Whiskey Rebellion in 1794, a development that eventually leads to the creation of a libertarian utopia. In the 2022 novel Poutine and Gin by Steve Rhinelander, the point of divergence is the Battle of the Plains of Abraham in the French and Indian War; the novel is a mystery set in that timeline's 1940.
A recent time traveling splitter variant involves entire communities being shifted elsewhere to become the unwitting creators of new time branches. These communities are transported from the present (or the near-future) to the past or to another timeline via a natural disaster, the action of technologically advanced aliens, or a human experiment gone wrong. S. M. Stirling wrote the Island in the Sea of Time trilogy, in which Nantucket Island and all its modern inhabitants are transported to Bronze Age times to become the world's first superpower. In Eric Flint's 1632 series, a small town in West Virginia is transported to 17th century central Europe and drastically changes the course of the Thirty Years' War, which was then underway. John Birmingham's Axis of Time trilogy deals with the culture shock when a United Nations naval task force from 2021 finds itself back in 1942 helping the Allies against the Empire of Japan and the Germans (and doing almost as much harm as good in spite of its advanced weapons). The series also explores the cultural impacts of people with 2021 ideals interacting with 1940s culture. Similarly, Robert Charles Wilson's Mysterium depicts a failed US government experiment which transports a small American town into an alternative version of the US run by Gnostics, who are engaged in a bitter war with the "Spanish" in Mexico (the chief scientist at the laboratory where the experiment occurred is described as a Gnostic, and references to Christian Gnosticism appear repeatedly in the book).
Although not dealing in physical time travel, in his alt-history novel Marx Returns, Jason Barker introduces anachronisms into the life and times of Karl Marx, such as when his wife Jenny sings a verse from the Sex Pistols' song "Anarchy in the U.K.", or in the games of chess she plays with the Marxes' housekeeper Helene Demuth, which on one occasion involve a Caro–Kann Defence. In her review of the novel, Nina Power writes of "Jenny's 'utopian' desire for an end to time", an attitude which, according to Power, is inspired by her husband's co-authored book The German Ideology. However, in keeping with the novel's anachronisms, the latter was not published until 1932, whereas the novel's timeline ends in 1871.
In the 2022 novel Hydrogen Wars: Atomic Sunrise by R.M. Christianson, a small change in post-war Japanese history leads to the election of General Douglas MacArthur as President of the United States. This minor change ultimately leads to all-out atomic war between the major Cold War powers.
Through crowdfunding on Kickstarter, Alan Jenkins and Gan Golan produced a graphic novel series called 1/6 depicting a dystopian alternate reality in which the January 6 United States Capitol attack was successful. What follows is the burning down of the Capitol building and the hanging of Vice President Mike Pence. Under Donald Trump's second term as president, a solid gold statue of him is erected and armed thugs patrol the streets of Washington DC suppressing civilian resistance with brutal violence under the banner of the Confederate flag.
In the fantasy genre
Many works of straight fantasy and science fantasy take place in historical settings, though with the addition of, for example, magic or mythological beasts. Some present a secret history in which the modern day world no longer believes that these elements ever existed. Many ambiguous alternate/secret histories are set in Renaissance or pre-Renaissance times, and may explicitly include a "retreat" from the world, which would explain the current absence of such phenomena. Other stories make plain a divergence of some kind.
Poul Anderson's Three Hearts and Three Lions takes place in a world in which the Matter of France is history and the fairy folk are real and powerful. The same author's A Midsummer Tempest occurs in a world in which the plays of William Shakespeare (called here "the Great Historian") presented the literal truth in every instance. The novel itself takes place in the era of Oliver Cromwell and Charles I; here, the English Civil War had a different outcome, and the Industrial Revolution has occurred early.
Randall Garrett's "Lord Darcy" series presents a point of divergence: a monk systemizes magic rather than science, so the use of foxglove to treat heart disease is regarded as superstition. Another point of divergence occurs in 1199, when Richard the Lionheart survives the Siege of Chaluz and returns to England and makes the Angevin Empire so strong that it survives into the 20th century.
Jonathan Strange & Mr Norrell by Susanna Clarke takes place in an England where a separate Kingdom ruled by the Raven King and founded on magic existed in Northumbria for over 300 years. In Patricia Wrede's Regency fantasies, Great Britain has a Royal Society of Wizards.
The Tales of Alvin Maker series by Orson Scott Card (a parallel to the life of Joseph Smith, founder of the Latter Day Saint movement) takes place in an alternate America, beginning in the early 19th century. Prior to that time, a POD occurred: England, under the rule of Oliver Cromwell, had banished "makers", or anyone else demonstrating "knacks" (an ability to perform seemingly supernatural feats), to the North American continent. Thus the early American colonists embraced these gifts as perfectly ordinary and counted on them as a part of their daily lives. The political division of the continent is considerably altered, with two large English colonies bookending a smaller "American" nation, one aligned with England and the other governed by exiled Cavaliers. Actual historical figures are seen in a much different light: Ben Franklin is revered as the continent's finest "maker", George Washington was executed after being captured, and "Tom" Jefferson is the first president of "Appalachia", the result of a compromise between the Continentals and the British Crown.
On the other hand, when the "Old Ones" (fairies) still manifest themselves in England in Keith Roberts's Pavane, which takes place in a technologically backward world after a Spanish assassination of Elizabeth I allowed the Spanish Armada to conquer England, the possibility that the fairies were real but retreated from modern advances makes the POD possible: the fairies really were present all along, in a secret history.
Again, in the English Renaissance fantasy Armor of Light by Melissa Scott and Lisa A. Barnett, the magic used in the book, by Dr. John Dee and others, actually was practiced in the Renaissance; positing a secret history of effective magic makes this an alternate history with a point of departure: Sir Philip Sidney survives the Battle of Zutphen in 1586 and shortly thereafter saves the life of Christopher Marlowe.
When the magical version of our world's history is set in contemporary times, the distinction becomes clear between alternate history on the one hand and contemporary fantasy, using in effect a form of secret history (as when Josepha Sherman's Son of Darkness has an elf living in New York City, in disguise), on the other. In works such as Robert A. Heinlein's Magic, Incorporated, where a construction company can use magic to rig up stands at a sporting event, and Poul Anderson's Operation Chaos and its sequel Operation Luna, where djinns are serious weapons of war—along with atomic bombs—the use of magic throughout the United States and other modern countries makes it clear that this is not secret history, although references in Operation Chaos to degaussing the effects of cold iron make it possible that it is the result of a POD. The sequel clarifies this as the result of a collaboration of Einstein and Planck in 1901, resulting in the theory of "rheatics". Henry Moseley applies this theory to "degauss the effects of cold iron and release the goetic forces", resulting in the suppression of ferromagnetism and the re-emergence of magic and magical creatures.
Alternate history shades off into other fantasy subgenres when the use of actual, though altered, history and geography decreases, although a culture may still be clearly the original source; Barry Hughart's Bridge of Birds and its sequels take place in a fantasy world, albeit one clearly based on China, and with allusions to actual Chinese history, such as the Empress Wu. Richard Garfinkle's Celestial Matters incorporates ancient Chinese physics and Greek Aristotelian physics, using them as if factual.
Alternate history has long been a staple of Japanese speculative fiction, with such authors as Futaro Yamada and Ryō Hanmura writing novels set in recognizable historical settings with added supernatural or science fiction elements. Ryō Hanmura's 1973 Musubi no Yama Hiroku recreated 400 years of Japan's history from the perspective of a secret magical family with psychic abilities, and has since come to be recognized as a masterpiece of Japanese speculative fiction. Twelve years later, Hiroshi Aramata wrote the groundbreaking Teito Monogatari, which reimagined the history of Tokyo across the 20th century in a world heavily influenced by the supernatural.
Television
1983 is set in a world where the Iron Curtain never fell and the Cold War continues into the present (2003).
An Englishman's Castle tells the story of the writer of a soap opera in a 1970s England which lost World War II. England is run by a collaborator government which strains to maintain a normal appearance of British life. Slowly, however, the writer begins to uncover the truth.
In the Community episode "Remedial Chaos Theory", each of the six members of the study group rolls a die to decide who has to go downstairs to accept a pizza delivery for the group, creating six different alternative worlds. Characters from the worst universe, the "darkest timeline", later appear in the "prime" universe.
Confederate was a planned HBO series set in a world where the South won the US Civil War. Social media backlash during pre-production led to the series being cancelled with no episodes produced.
Counterpart tells of a United Nations agency that is responsible for monitoring passage between alternative worlds. Two of the worlds, Alpha and Prime, are locked in a cold war.
The Court-Martial of George Armstrong Custer is a 1977 telemovie in which George Custer survives the Battle of Little Bighorn and faces a court-martial hearing over his incompetence.
C.S.A.: The Confederate States of America presents itself as a British TV documentary uncovering some of the dark secrets of the Confederacy in a world where the South won the US Civil War.
Dark Skies depicts much of history since the 1940s as having been shaped by a government conspiracy with aliens. One race of aliens can take over humans, while those immune to the aliens' control fight back.
Doctor Who's main character has visited two alternative worlds in the TV show and several in its spin-off media. The Third Doctor visits a world with a fascist Great Britain on the brink of destruction in Inferno, while the Tenth Doctor, in "Doomsday", visits a Britain that has a President, where blimps are a common form of transportation, and which is beset by Cybermen. The Seventh Doctor faces a threat from an alternative world in Battlefield, where magic is real and the alternative version of the Doctor is hinted to be that reality's Merlin.
Fallout, on the Amazon streaming service, shows a 1950s retro-future world that suffers a global nuclear war.
Fatherland is a TV movie set in an alternative 1960 in which US President Joseph Kennedy and Adolf Hitler have agreed to meet to discuss an end to their countries' Cold War, 15 years after the Axis victory in World War II. However, an American reporter's discovery of proof of the long-denied Final Solution threatens the meeting.
The anime Fena: Pirate Princess featured an alternate 18th century.
For All Mankind depicts an alternate timeline in which the Soviet crewed lunar program successfully lands on the Moon before the US Apollo program, resulting in a continued and intensified Space Race.
Fringe has the father of one of the main characters cross into another reality to steal that world's version of his son after his own son dies. The second world has a slightly different history: a few states in the United States differ, with only one Carolina and with Upper Michigan as a state; the 9/11 attacks destroyed the White House rather than the Twin Towers; and several major DC Comics events are different, such as Superman, not Supergirl, dying during Crisis on Infinite Earths. The incursion to steal the son has many negative effects on that world, and while the realities start out antagonistic, they eventually work together to repair the damage.
The Man in the High Castle, an adaptation of the novel of the same name, showed a world where the Axis Powers won World War II.
Motherland: Fort Salem explores a female-dominated world in which witchcraft is real. Its world diverged from our timeline when the Salem witch trials were resolved by an agreement between witches and ungifted humans.
Noughts + Crosses is a British TV show set in a world where a powerful West African empire colonized Europe 700 years before the start of the series.
Parallels was a planned TV show whose pilot was later released as a Netflix movie. The plot concerns a building which can shift realities every 36 hours and those who use the building to travel to other realities.
The Plot Against America is an HBO miniseries in which Charles Lindbergh wins the 1940 US presidential election as an anti-war candidate who moves the country toward fascism.
The TV show Sliders explores different possible alternate realities by having the protagonist "slide" into different parallel dimensions of the same planet Earth.
The Great Martian War 1913–1917 is an alternate history documentary in which giant Martians and their machines invade Earth during World War I, causing huge technological leaps and leading the Entente and Central Powers to fight alongside each other.
SS-GB shows a world where the Axis Powers quickly win World War II, killing Churchill and installing a puppet government. However, the British resistance fights back.
In the various Star Trek TV shows and spin-off media, a Mirror Universe has been encountered in which Earth has an empire that subjugates other planets. Doppelgängers of the main cast of many of the TV shows appear in that reality.
The Watchmen series is set in a world where costumed heroes were initially welcomed but later outlawed. It is set 34 years after the events of the comic book with which the series shares its name.
The Marvel Cinematic Universe series Loki (2021 & 2023), on Disney+, shows an agency which prevents alterations to the timeline. Alternate versions of Loki from various universes appear.
The Marvel Cinematic Universe series What If...? (2021–present), on Disney+, shows alternate universes that depict alternate events from the MCU films.
Online
Fans of alternate history have made use of the internet from a very early point to showcase their own works and to provide useful tools for fans searching for anything related to alternate history, first in mailing lists and usenet groups, later in web databases and forums. The "Usenet Alternate History List" was first posted on 11 April 1991 to the Usenet newsgroup rec.arts.sf-lovers. In May 1995, the dedicated newsgroup soc.history.what-if was created for showcasing and discussing alternate histories. Its prominence declined with the general migration from unmoderated usenet to moderated web forums, most prominently AlternateHistory.com, the self-described "largest gathering of alternate history fans on the internet", with over 10,000 active members.
In addition to these discussion forums, in 1997 Uchronia: The Alternate History List was created as an online repository, now containing over 2,900 alternate history novels, stories, essays, and other printed materials in several different languages. Uchronia was selected as the Sci Fi Channel's "Sci Fi Site of the Week" twice.
Uchronia
In Spanish, French, German, Portuguese, Italian, Catalan, and Galician, native forms of the word uchronia are the standard terms for alternate history, and from these comes the English loanword uchronia. The English term uchronia is a neologism that is sometimes used in its original meaning as a straightforward synonym for alternate history (Loyer, Emmanuelle (2019). "Uchronia". Booksandideas.net; Schmid, Helga (2020). Uchronia: Designing Time. Walter de Gruyter, p. 26). However, it may also now refer to other concepts, namely an umbrella genre of fiction that encompasses alternate history, parallel universes in fiction, and fiction based in futuristic or non-temporal settings (Craveiro, Joanna (2016). A live/living museum of small, forgotten and unwanted memories: performing narratives, testimonies and archives of the Portuguese Dictatorship and Revolution. Doctoral dissertation, University of Roehampton, p. 46).
See also
20th century in science fiction
Alien space bats
Alternate ending
Alternative future
American Civil War alternate histories
Dieselpunk
Dystopian fiction
Fictional universe
Future history
The Garden of Forking Paths
Historical revisionism
Hypothetical Axis victory in World War II
Invasion literature
Jonbar hinge
List of alternate history fiction
Possible worlds
Pulp novels
Ruritanian romance
References
Further reading
Chapman, Edgar L., and Carl B. Yoke (eds.). Classic and Iconoclastic Alternate History Science Fiction. Mellen, 2003.
Collins, William Joseph. Paths Not Taken: The Development, Structure, and Aesthetics of the Alternative History. University of California, Davis 1990.
Darius, Julian. "58 Varieties: Watchmen and Revisionism". In Minutes to Midnight: Twelve Essays on Watchmen. Sequart Research & Literacy Organization, 2010. Focuses on Watchmen as alternate history.
Cowley, Robert, ed., What If? Military Historians Imagine What Might Have Been. Pan Books, 1999.
Gevers, Nicholas. Mirrors of the Past: Versions of History in Science Fiction and Fantasy. University of Cape Town, 1997
Hellekson, Karen. The Alternate History: Refiguring Historical Time. Kent State University Press, 2001
Keen, Antony G. "Alternate Histories of the Roman Empire in Stephen Baxter, Robert Silverberg and Sophia McDougall". Foundation: The International Review of Science Fiction 102, Spring 2008.
McKnight, Edgar Vernon Jr. Alternative History: The Development of a Literary Genre. University of North Carolina at Chapel Hill, 1994.
Morgan, Glyn, and C. Palmer-Patel (eds.). Sideways in Time: Critical Essays on Alternate History Fiction. Liverpool University Press, 2019.
Nedelkovh, Aleksandar B. British and American Science Fiction Novel 1950–1980 with the Theme of Alternative History (an Axiological Approach). 1994, 1999.
Rosenfeld, Gavriel David. The World Hitler Never Made: Alternate History and the Memory of Nazism. 2005
Rosenfeld, Gavriel David. "Why Do We Ask 'What If?' Reflections on the Function of Alternate History." History and Theory 41, Theme Issue 41: Unconventional History (December 2002), 90–103. .
Schneider-Mayerson, Matthew. "What Almost Was: The Politics of the Contemporary Alternate History Novel". American Studies 30, 3–4 (Summer 2009), 63–83.
Singles, Kathleen. Alternate History: Playing with Contingency and Necessity''. De Gruyter, 2013.
External links
Alternate History on TV Tropes
Alternate history
Science fiction genres
Speculative fiction
Trompenaars's model of national culture differences
Trompenaars's model of national culture differences is a framework for cross-cultural communication applied to general business and management, developed by Fons Trompenaars and Charles Hampden-Turner. Its development involved a large-scale survey of 8,841 managers and organization employees from 43 countries.
This model of national culture differences has seven dimensions. There are five orientations covering the ways in which human beings deal with each other, one which deals with time, and one which deals with the environment.
Universalism vs particularism
Universalism is the belief that ideas and practices can be applied everywhere without modification, while particularism is the belief that circumstances dictate how ideas and practices should be applied. This dimension asks: which is more important, rules or relationships? Cultures with high universalism see one reality and focus on formal rules. Business meetings are characterized by rational, professional arguments with a "get down to business" attitude. Trompenaars's research found high universalism in countries like the United States, Canada, the UK, Australia, Germany, and Sweden. Cultures with high particularism see reality as more subjective and place a greater emphasis on relationships. It is important to get to know the people one is doing business with during meetings in a particularist environment, and someone from a universalist culture would be wise not to dismiss personal meanderings as irrelevancies or mere small talk during such meetings. Countries with high particularism include Venezuela, Indonesia, China, South Korea, and the former Soviet Union.
Individualism vs communitarianism
Individualism refers to people regarding themselves as individuals, while communitarianism refers to people regarding themselves as part of a group. Trompenaars's research yielded some interesting results and suggested that cultures may change more quickly than many people realize. It may not be surprising to see a country like the United States rank high on individualism, but Mexico and the former communist countries of Czechoslovakia and the Soviet Union were also found to be individualistic in Trompenaars's research. In Mexico, the shift from a previously communitarian culture could be explained by its membership in NAFTA and involvement in the global economy. This contrasts with Hofstede's earlier research, which found these countries to be collectivist, and shows the dynamic and complex nature of culture. Countries with high communitarianism include Germany, China, France, Japan, and Singapore.
Neutral vs emotional
A neutral culture is one in which emotions are held in check, whereas an emotional culture is one in which emotions are expressed openly and naturally. Neutral cultures that come rapidly to mind are those of the Japanese and the British. Some examples of highly emotional cultures are the Netherlands, Mexico, Italy, Israel, and Spain. In emotional cultures, people often smile, talk loudly when excited, and greet each other with enthusiasm. Thus, when people from a neutral culture do business in an emotional culture, they should be ready for a potentially animated and boisterous meeting and should try to respond warmly; those from an emotional culture doing business in a neutral culture should not be put off by a lack of emotion.
Specific vs diffuse
A specific culture is one in which individuals have a large public space that they readily share with others and a small private space that they guard closely and share only with close friends and associates. A diffuse culture is one in which public space and private space are similar in size, and individuals guard their public space carefully, because entry into public space affords entry into private space as well. This dimension looks at how separately a culture keeps its personal and public lives. Fred Luthans and Jonathan Doh give the following example, which explains this:
An example of these specific and diffuse cultural dimensions is provided by the United States and Germany. A U.S. professor, such as Robert Smith, PhD, generally would be called “Doctor Smith” by students when at his U.S. university. When shopping, however, he might be referred to by the store clerk as “Bob,” and he might even ask the clerk’s advice regarding some of his intended purchases. When golfing, Bob might just be one of the guys, even to a golf partner who happens to be a graduate student in his department. The reason for these changes in status is that, with the specific U.S. cultural values, people have large public spaces and often conduct themselves differently depending on their public role. At the same time, however, Bob has private space that is off-limits to the students who must call him “Doctor Smith” in class. In high-diffuse cultures, on the other hand, a person’s public life and private life often are similar. Therefore, in Germany, Herr Professor Doktor Schmidt would be referred to that way at the university, local market, and bowling alley—and even his wife might address him formally in public. A great deal of formality is maintained, often giving the impression that Germans are stuffy or aloof.
Achievement vs ascription
In an achievement culture, people are accorded status based on how well they perform their functions; in an ascription culture, status is based on who or what a person is. Does one have to prove oneself to receive status, or is it given? Achievement cultures include the US, Austria, Israel, Switzerland, and the UK. Some ascription cultures are Venezuela, Indonesia, and China. When people from an achievement culture do business in an ascription culture, it is important to include older, senior members with formal titles, and respect should be shown to their counterparts. For an ascription culture doing business in an achievement culture, it is important to bring knowledgeable members who can demonstrate proficiency to the other group, and respect should be shown for the knowledge and information of their counterparts.
Sequential vs synchronic
A sequential-time culture is one in which people like events to happen in chronological order. Punctuality is greatly appreciated, and people organize their lives around schedules, planning, and specific, clear deadlines; in such cultures time is very important and wasting it is not tolerated. In synchronic cultures, by contrast, people see time periods as interwoven, stress punctuality and deadlines only when these are key to meeting objectives, often work on several things at once, and are more flexible in distributing their time and commitments.
Internal vs external control
Do we control our environment, or are we controlled by it? In an inner-directed culture, people believe in controlling outcomes and take a dominant attitude toward their environment. In an outer-directed culture, people believe in letting things take their own course and take a more flexible attitude, characterized by a willingness to compromise and maintain harmony with nature.
See also
Cross-cultural communication
Fons Trompenaars
Hofstede's cultural dimensions theory
References
External links
THT Consulting
Riding the Waves of Culture by Fons Trompenaars and Charles Hampden-Turner (book)
Trompenaars' and Hampden-Turner's cultural factors
Riding The waves of commerce: a test of Trompenaars' "model" of national culture differences
Fons Trompenaars' Cultural Dimensions
THE TROMPENAARS’ SEVEN-DIMENSION CULTURAL MODEL
Culture Compass
THT IAP Certification Presentation Day1 6Dec2010
Cross-cultural psychology
Cultural studies
National identity
Organizational culture
History of capitalism
Capitalism is an economic system based on the private ownership of the means of production. This is generally taken to imply the moral permissibility of profit, free trade, capital accumulation, voluntary exchange, wage labor, etc. Its emergence, evolution, and spread are the subjects of extensive research and debate. Debates sometimes focus on how to bring substantive historical data to bear on key questions. Key parameters of debate include: the extent to which capitalism is natural, versus the extent to which it arises from specific historical circumstances; whether its origins lie in towns and trade or in rural property relations; the role of class conflict; the role of the state; the extent to which capitalism is a distinctively European innovation; its relationship with European imperialism; whether technological change is a driver or merely a secondary byproduct of capitalism; and whether or not it is the most beneficial way to organize human societies.
Agrarian capitalism
Crisis of the 14th century
According to some historians, the modern capitalist system originated in the "crisis of the Late Middle Ages", a conflict between the land-owning aristocracy and the agricultural producers, or serfs. Manorial arrangements inhibited the development of capitalism in a number of ways. Serfs had obligations to produce for lords and therefore had no interest in technological innovation; they also had no interest in cooperating with one another because they produced to sustain their own families. The lords who owned the land relied on force to guarantee that they received sufficient food. Because lords were not producing to sell on the market, there was no competitive pressure for them to innovate. Finally, because lords expanded their power and wealth through military means, they spent their wealth on military equipment or on conspicuous consumption that helped foster alliances with other lords; they had no incentive to invest in developing new productive technologies.
The demographic crisis of the 14th century upset this arrangement. This crisis had several causes: agricultural productivity reached its technological limitations and stopped growing, bad weather led to the Great Famine of 1315–1317, and the Black Death of 1348–1350 led to a population crash. These factors led to a decline in agricultural production. In response, feudal lords sought to expand agricultural production by extending their domains through warfare; therefore they demanded more tribute from their serfs to pay for military expenses. In England, many serfs rebelled. Some moved to towns, some bought land, and some entered into favorable contracts to rent lands from lords who needed to repopulate their estates.
In effect, feudalism began to lay some of the foundations necessary for the development of mercantilism, a precursor of capitalism. Feudalism lasted from the medieval period through the 16th century. Feudal manors were almost entirely self-sufficient and therefore limited the role of the market; this stifled any incipient tendency towards capitalism. However, the relatively sudden emergence of new technologies and discoveries, particularly in agriculture and exploration, facilitated the growth of capitalism. The most important development at the end of feudalism was the emergence of what Robert Degan calls "the dichotomy between wage earners and capitalist merchants". This competitive dynamic meant there were always winners and losers, and that became clear as feudalism evolved into mercantilism, an economic system characterized by the private or corporate ownership of capital goods, investments determined by private decisions, and by prices, production, and the distribution of goods determined mainly by competition in a free market.
Enclosure
England in the 16th century was already a centralized state, in which much of the feudal order of Medieval Europe had been swept away. This centralization was strengthened by a good system of roads and a disproportionately large capital city, London. The capital acted as a central market for the entire country, creating a large internal market for goods, in contrast to the fragmented feudal holdings that prevailed in most parts of the Continent. The economic foundations of the agricultural system were also beginning to diverge substantially; the manorial system had broken down by this time, and land began to be concentrated in the hands of fewer landlords with increasingly large estates. The system put pressure on both the landlords and the tenants to increase agricultural productivity to create profit. The weakened coercive power of the aristocracy to extract peasant surpluses encouraged them to try out better methods. The tenants also had an incentive to improve their methods to succeed in an increasingly competitive labour market. Land rents had moved away from the previous stagnant system of custom and feudal obligation, and were becoming directly subject to economic market forces.
An important aspect of this process of change was the enclosure of the common land previously held in the open field system where peasants had traditional rights, such as mowing meadows for hay and grazing livestock. Once enclosed, these uses of the land became restricted to the owner, and it ceased to be land for commons. The process of enclosure began to be a widespread feature of the English agricultural landscape during the 16th century. By the 19th century, unenclosed commons had become largely restricted to rough pasture in mountainous areas and to relatively small parts of the lowlands.
Marxist and neo-Marxist historians argue that rich landowners used their control of state processes to appropriate public land for their private benefit. This created a landless working class that provided the labour required in the new industries developing in the north of England. For example: "In agriculture the years between 1760 and 1820 are the years of wholesale enclosure in which, in village after village, common rights are lost". "Enclosure (when all the sophistications are allowed for) was a plain enough case of class robbery". Anthropologist Jason Hickel notes that this process of enclosure led to myriad peasant revolts, among them Kett's Rebellion and the Midland Revolt, which culminated in violent repression and executions.
Other scholars argue that the better-off members of the European peasantry encouraged and participated actively in enclosure, seeking to end the perpetual poverty of subsistence farming. "We should be careful not to ascribe to [enclosure] developments that were the consequence of a much broader and more complex process of historical change." "[T]he impact of eighteenth and nineteenth century enclosure has been grossly exaggerated...."
Merchant capitalism and mercantilism
Precedents
While trade has existed since early in human history, it was not capitalism. The earliest recorded activity of long-distance profit-seeking merchants can be traced to the Old Assyrian merchants active in Mesopotamia in the 2nd millennium BCE. The Roman Empire developed more advanced forms of commerce, and similarly widespread networks existed in Islamic nations. However, capitalism took shape in Europe in the late Middle Ages and Renaissance.
An early emergence of commerce occurred on monastic estates in Italy and France, and in particular in the independent Italian city-states of the late Middle Ages, such as Florence, Genoa, and Venice. These states pioneered innovative financial instruments such as bills of exchange and banking practices that facilitated long-distance trade. The competitive nature of these city-states fostered a spirit of innovation and risk-taking, laying the groundwork for capitalism's core principles of private ownership, market competition, and profit-seeking behavior. The economic prowess of the Italian city-states during this time not only fueled their own prosperity but also contributed significantly to the spread of capitalist ideas and practices throughout Europe and beyond.
Emergence
Modern capitalism resembles some elements of mercantilism in the early modern period between the 16th and 18th centuries. Early evidence of mercantilist practices appears in early modern Venice, Genoa, and Pisa in their control of the Mediterranean trade in bullion. The region of mercantilism's real birth, however, was the Atlantic Ocean.
England began a large-scale and integrative approach to mercantilism during the Elizabethan Era. An early statement on national balance of trade appeared in Discourse of the Common Weal of this Realm of England, 1549: "We must always take heed that we buy no more from strangers than we sell them, for so should we impoverish ourselves and enrich them." The period featured various but often disjointed efforts by the court of Queen Elizabeth to develop a naval and merchant fleet capable of challenging the Spanish stranglehold on trade and of expanding the growth of bullion at home. Elizabeth promoted the Trade and Navigation Acts in Parliament and issued orders to her navy for the protection and promotion of English shipping.
These efforts organized national resources sufficiently in the defense of England against the far larger and more powerful Spanish Empire, and in turn paved the foundation for establishing a global empire in the 19th century. The authors noted most for establishing the English mercantilist system include Gerard de Malynes and Thomas Mun, who first articulated the Elizabethan System. The latter's England's Treasure by Forraign Trade, or the Balance of our Forraign Trade is The Rule of Our Treasure gave a systematic and coherent explanation of the concept of balance of trade. It was written in the 1620s and published in 1664. Mercantile doctrines were further developed by Josiah Child. Numerous French authors helped to cement French policy around mercantilism in the 17th century. French mercantilism was best articulated by Jean-Baptiste Colbert (in office, 1665–1683), although his policies were greatly liberalised under Napoleon.
Doctrines
Under mercantilism, European merchants, backed by state controls, subsidies, and monopolies, made most of their profits from buying and selling goods. In the words of Francis Bacon, the purpose of mercantilism was "the opening and well-balancing of trade; the cherishing of manufacturers; the banishing of idleness; the repressing of waste and excess by sumptuary laws; the improvement and husbanding of the soil; the regulation of prices..." Similar practices of economic regimentation had begun earlier in medieval towns. However, under mercantilism, given the contemporaneous rise of absolutism, the state superseded the local guilds as the regulator of the economy.
Among the major tenets of mercantilist theory was bullionism, a doctrine stressing the importance of accumulating precious metals. Mercantilists argued that a state should export more goods than it imported so that foreigners would have to pay the difference in precious metals. Mercantilists asserted that only raw materials that could not be extracted at home should be imported. They promoted the idea that government subsidies, such as granting monopolies and protective tariffs, were necessary to encourage home production of manufactured goods.
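In concrete terms (a simple illustration, not a historical account): if a kingdom exported £1,000,000 worth of goods in a year but imported only £700,000 worth, foreign buyers would have to settle the remaining £300,000 in gold or silver, and the kingdom's stock of precious metals would grow by that amount. Mercantilist policy aimed to keep this balance permanently positive.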
Proponents of mercantilism emphasized state power and overseas conquest as the principal aims of economic policy. If a state could not supply its own raw materials, according to the mercantilists, it should acquire colonies from which they could be extracted. Colonies constituted not only sources of raw materials but also markets for finished products. Because it was not in the interest of the state to allow competition, held the mercantilists, colonies should be prevented from engaging in manufacturing and from trading with foreign powers.
Mercantilism was a system of trade for profit, although commodities were still largely produced by non-capitalist production methods. Noting the various pre-capitalist features of mercantilism, Karl Polanyi argued that "mercantilism, with all its tendency toward commercialization, never attacked the safeguards which protected [the] two basic elements of production – labor and land – from becoming the elements of commerce." Thus mercantilist regulation was more akin to feudalism than capitalism. According to Polanyi, "not until 1834 was a competitive labor market established in England, hence industrial capitalism as a social system cannot be said to have existed before that date."
Chartered trading companies
The Muscovy Company was the first major chartered joint stock English trading company. It was established in 1555 with a monopoly on trade between England and Muscovy. It was an offshoot of the earlier Company of Merchant Adventurers to New Lands, founded in 1551 by Richard Chancellor, Sebastian Cabot and Sir Hugh Willoughby to locate the Northeast Passage to China to allow trade. This was the precursor to a type of business that would soon flourish in England, the Dutch Republic and elsewhere.
The British East India Company (1600) and the Dutch East India Company (1602) launched an era of large state-chartered trading companies. These companies were characterized by their monopoly on trade, granted by letters patent provided by the state. Recognized as chartered joint-stock companies by the state, they enjoyed lawmaking, military, and treaty-making privileges. As powerful nation-states granted these companies colonial and expansionary powers in the quest to accumulate precious metals, military conflicts arose. During this era, merchants, who had previously traded on their own, invested capital in the East India Companies and other colonial ventures, seeking a return on investment.
Industrial capitalism
Mercantilism declined in Great Britain in the mid-18th century, when a new group of economic theorists, led by Adam Smith, challenged fundamental mercantilist doctrines, such as that the world's wealth remained constant and that a state could only increase its wealth at the expense of another state. However, mercantilism continued in less developed economies, such as Prussia and Russia, with their much younger manufacturing bases.
The mid-18th century gave rise to industrial capitalism, made possible by (1) the accumulation of vast amounts of capital under the merchant phase of capitalism and its investment in machinery, and (2) the fact that the enclosures meant that Britain had a large population of people with no access to subsistence agriculture, who needed to buy basic commodities via the market, ensuring a mass consumer market. Industrial capitalism, which Marx dated from the last third of the 18th century, marked the development of the factory system of manufacturing, characterized by a complex division of labor between and within work processes and the routinization of work tasks. Industrial capitalism finally established the global domination of the capitalist mode of production.
During the resulting Industrial Revolution, the industrialist replaced the merchant as a dominant actor in the capitalist system, which led to the decline of the traditional handicraft skills of artisans, guilds, and journeymen. Also during this period, capitalism transformed relations between the British landowning gentry and peasants, giving rise to the production of cash crops for the market rather than for subsistence on a feudal manor. The surplus generated by the rise of commercial agriculture encouraged increased mechanization of agriculture.
There is an active debate on the role of Atlantic slavery in the emergence of industrial capitalism. In Capitalism and Slavery (1944), Eric Williams argued that plantation slavery played a crucial role in the growth of industrial capitalism, since the two developed over similar time periods. Harvey (2019) wrote that "A flagship of the industrial revolution, the Lancashire mills and their 465,000 textile workers, was entirely reliant [in the 1860s] on the labour of three million cotton slaves in the American Deep South."
Industrial Revolution
The productivity gains of capitalist production began a sustained and unprecedented increase at the turn of the 19th century, in a process commonly referred to as the Industrial Revolution. Starting in about 1760 in England, there was a steady transition to new manufacturing processes in a variety of industries, including going from hand production methods to machine production, new chemical manufacturing and iron production processes, improved efficiency of water power, the increasing use of steam power and the development of machine tools. It also included the change from wood and other bio-fuels to coal.
In textile manufacturing, mechanized cotton spinning powered by steam or water increased the output of a worker by a factor of about 1000, due to the application of James Hargreaves' spinning jenny, Richard Arkwright's water frame, Samuel Crompton's Spinning Mule and other inventions. The power loom increased the output of a worker by a factor of over 40. The cotton gin increased the productivity of removing seed from cotton by a factor of 50. Large gains in productivity also occurred in spinning and weaving wool and linen, although they were not as great as in cotton.
Finance
The growth of Britain's industry stimulated a concomitant growth in its system of finance and credit. In the 18th century, services offered by banks increased. Clearing facilities, security investments, cheques, and overdraft protections were introduced. Cheques had been invented in the 17th century in England, and banks settled payments by direct courier to the issuing bank. Around 1770, they began meeting in a central location, and by the 19th century a dedicated space was established, known as a bankers' clearing house. The London clearing house used a method in which each bank paid cash to an inspector and was in turn paid cash by the inspector at the end of each day, so that only net balances had to be settled. The first overdraft facility was set up in 1728 by The Royal Bank of Scotland.
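To see why clearing economizes on cash, consider a hypothetical illustration (the figures here are invented for exposition, not drawn from the historical record). Suppose Bank A holds £10,000 of cheques drawn on Bank B, while Bank B holds £8,000 of cheques drawn on Bank A. Settled bilaterally, £18,000 in coin would have to change hands; cleared through the house, Bank B simply pays the £2,000 difference. With many banks the savings compound, since each bank settles a single net position against the clearing house rather than a separate balance with every other bank.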
The end of the Napoleonic War and the subsequent rebound in trade led to an expansion in the bullion reserves held by the Bank of England, from a low of under 4 million pounds in 1821 to 14 million pounds by late 1824.
Older innovations became routine parts of financial life during the 19th century. The Bank of England first issued bank notes during the 17th century, but the notes were hand written and few in number. After 1725, they were partially printed, but cashiers still had to sign each note and make them payable to a named person. In 1844, parliament passed the Bank Charter Act tying these notes to gold reserves, effectively creating the institution of central banking and monetary policy. The notes became fully printed and widely available from 1855.
Growing international trade increased the number of banks, especially in London. These new "merchant banks" facilitated trade growth, profiting from England's emerging dominance in seaborne shipping. Two immigrant families, Rothschild and Baring, established merchant banking firms in London in the late 18th century and came to dominate world banking in the next century. The tremendous wealth amassed by these banking firms soon attracted much attention. The poet George Gordon Byron wrote in 1823: "Who makes politics run glibber all?/ The shade of Bonaparte's noble daring?/ Jew Rothschild and his fellow-Christian, Baring."
The operation of banks also shifted. At the beginning of the century, banking was still an elite preoccupation of a handful of very wealthy families. Within a few decades, however, a new sort of banking had emerged, owned by anonymous stockholders, run by professional managers, and the recipient of the deposits of a growing body of small middle-class savers. Although this breed of banks was newly prominent, it was not new – the Quaker family Barclays had been banking in this way since 1690.
Free trade and globalization
At the height of the First French Empire, Napoleon sought to introduce a "continental system" that would render Europe economically autonomous, thereby emasculating British trade and commerce. It involved such stratagems as the use of beet sugar in preference to the cane sugar that had to be imported from the tropics. Although this caused businessmen in England to agitate for peace, Britain persevered, in part because it was well into the Industrial Revolution. The war had the opposite effect – it stimulated the growth of certain industries, such as pig-iron production, which increased from 68,000 tons in 1788 to 244,000 by 1806.
In 1817, David Ricardo, James Mill and Robert Torrens, in the famous theory of comparative advantage, argued that free trade would benefit the industrially weak as well as the strong. In Principles of Political Economy and Taxation, Ricardo advanced the doctrine still considered the most counterintuitive in economics:
When an inefficient producer sends the merchandise it produces best to a country able to produce it more efficiently, both countries benefit.
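Ricardo's own numerical illustration makes the point concrete (the labour figures are his illustrative ones, not historical measurements): producing a unit of cloth takes 100 man-years of labour in England and 90 in Portugal, while a unit of wine takes 120 in England and 80 in Portugal. Portugal is thus the more efficient producer of both goods, yet trade still benefits both countries. By making cloth (100) rather than wine (120) and trading it for Portuguese wine, England saves 20 man-years per unit; by making wine (80) rather than cloth (90) and trading it for English cloth, Portugal saves 10. Each country gains by specializing in the good in which its relative, not absolute, efficiency is greatest.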
By the mid 19th century, Britain was firmly wedded to the notion of free trade, and the first era of globalization began. In the 1840s, the Corn Laws and the Navigation Acts were repealed, ushering in a new age of free trade. In line with the teachings of the classical political economists, led by Adam Smith and David Ricardo, Britain embraced liberalism, encouraging competition and the development of a market economy.
Industrialization allowed cheap production of household items using economies of scale, while rapid population growth created sustained demand for commodities. Nineteenth-century imperialism decisively shaped globalization in this period. After the First and Second Opium Wars and the completion of the British conquest of India, vast populations of these regions became ready consumers of European exports. During this period, areas of sub-Saharan Africa and the Pacific islands were incorporated into the world system. Meanwhile, the European conquest of new parts of the globe, notably sub-Saharan Africa, yielded valuable natural resources such as rubber, diamonds and coal and helped fuel trade and investment between the European imperial powers, their colonies, and the United States.
The global financial system was mainly tied to the gold standard during this period. The United Kingdom first formally adopted this standard in 1821. Soon to follow were Canada in 1853, Newfoundland in 1865, and the United States and Germany (de jure) in 1873. New technologies, such as the telegraph, the transatlantic cable, the radiotelephone, the steamship, and the railway allowed goods and information to move around the world to an unprecedented degree.
The eruption of civil war in the United States in 1861 and the blockade of its ports to international commerce meant that the main supply of cotton for the Lancashire looms was cut off. The textile industries shifted to reliance upon cotton from Africa and Asia during the course of the war, and this created pressure for an Anglo-French controlled canal through the Isthmus of Suez. The Suez Canal opened in 1869, the same year in which the transcontinental railroad spanning the North American continent, built in part by the Central Pacific Railroad, was completed. Capitalism and the engine of profit were making the globe a smaller place.
20th century
Several major challenges to capitalism appeared in the early part of the 20th century. The Russian Revolution in 1917 established the first state with a ruling communist party in the world; a decade later, the Great Depression triggered increasing criticism of the existing capitalist system. One response to this crisis was a turn to fascism, an ideology that advocated state capitalism. Another response was to reject capitalism altogether in favour of communist or democratic socialist ideologies.
Keynesianism and free markets
The economic recovery of the world's leading capitalist economies in the period following the end of the Great Depression and the Second World War—a period of unusually rapid growth by historical standards—eased discussion of capitalism's eventual decline or demise. The state began to play an increasingly prominent role to moderate and regulate the capitalistic system throughout much of the world.
Keynesian economics became a widely accepted method of government regulation and countries such as the United Kingdom experimented with mixed economies in which the state owned and operated certain major industries.
The state also expanded in the US; in 1929, total government expenditures amounted to less than one-tenth of GNP; from the 1970s they amounted to around one-third. Similar increases were seen in all industrialised capitalist economies, some of which, such as France, have reached even higher ratios of government expenditures to GNP than the United States.
A broad array of new analytical tools in the social sciences was developed to explain the social and economic trends of the period, including the concepts of post-industrial society and the welfare state.
The long post-war boom ended in the 1970s, amid the economic crises experienced following the 1973 oil crisis. The "stagflation" of the 1970s led many economic commentators and politicians to embrace market-oriented policy prescriptions inspired by the laissez-faire capitalism and classical liberalism of the nineteenth century, particularly under the influence of Friedrich Hayek and Milton Friedman. The theoretical alternative to Keynesianism was more compatible with laissez-faire and emphasised individual rights and absence of government intervention. Market-oriented solutions gained increasing support in the Western world, especially under the leadership of Ronald Reagan in the United States and Margaret Thatcher in the UK in the 1980s. Public and political interest began shifting away from the so-called collectivist concerns of Keynes's managed capitalism to a focus on individual choice, called "remarketized capitalism".
The three booming decades that followed the Second World War, according to political economist Clara E. Mattei, were an anomaly in the history of contemporary capitalism. She writes that austerity did not originate with the emergence of the neoliberal era starting in the 1970s, but "has been the mainstay of capitalism."
Globalization
Although overseas trade has been associated with the development of capitalism for over five hundred years, some thinkers argue that a number of trends associated with globalisation have acted to increase the mobility of people and capital since the last quarter of the twentieth century, combining to circumscribe the room to manoeuvre of states in choosing non-capitalist models of development. Today, these trends have bolstered the argument that capitalism should now be viewed as a truly world system (Burnham). However, other thinkers argue that globalisation, even in quantitative terms, is no greater now than during earlier periods of capitalist trade.
After the abandonment of the Bretton Woods system and of strict state control of foreign exchange rates in 1971, the total value of transactions in foreign exchange was estimated to be at least twenty times greater than that of all foreign movements of goods and services (EB). The internationalisation of finance, which some see as beyond the reach of state control, combined with the growing ease with which large corporations have been able to relocate their operations to low-wage states, has posed the question of the 'eclipse' of state sovereignty, arising from the growing 'globalization' of capital.
While economists generally agree about the size of global income inequality, there is general disagreement about the direction of recent change in it. In cases such as China, where income inequality is clearly growing, it is also evident that overall economic growth has rapidly increased with capitalist reforms. Indur M. Goklany's book The Improving State of the World, published by the libertarian think tank Cato Institute, argues that economic growth since the Industrial Revolution has been very strong and that factors such as adequate nutrition, life expectancy, infant mortality, literacy, prevalence of child labor, education, and available free time have improved greatly. Some scholars, including Stephen Hawking and researchers for the International Monetary Fund, contend that globalization and neoliberal economic policies are not ameliorating inequality and poverty but exacerbating them, and are creating new forms of contemporary slavery. Such policies are also expanding populations of the displaced, the unemployed and the imprisoned, along with accelerating the destruction of the environment and species extinction. In 2017, the IMF warned that inequality within nations, in spite of global inequality falling in recent decades, has risen so sharply that it threatens economic growth and could result in further political polarization. Surging economic inequality following the economic crisis and the anger associated with it have resulted in a resurgence of socialist and nationalist ideas throughout the Western world, which has some economic elites from places including Silicon Valley, Davos and Harvard Business School concerned about the future of capitalism.
According to the scholars Gary Gerstle and Fritz Bartel, with the end of the Cold War and the emergence of neoliberal financialized capitalism as the dominant system, capitalism has become a truly global order in a way not seen since 1914. Economist Radhika Desai, while concurring that 1914 was the peak of the capitalist system, argues that the neoliberal reforms that were intended to restore capitalism to its primacy have instead bequeathed to the world increased inequalities, divided societies, economic crises and misery and a lack of meaningful politics, along with sluggish growth which demonstrates that, according to Desai, the system is "losing ground in terms of economic weight and world influence" with "the balance of international power ... tilting markedly away from capitalism." Gerstle argues that in the twilight of the neoliberal period "political disorder and dysfunction reign" and posits that the most important question for the United States and the world is what comes next.
21st century
By the beginning of the twenty-first century, mixed economies with capitalist elements had become the pervasive economic systems worldwide. The collapse of the Soviet bloc in 1991 significantly reduced the influence of socialism as an alternative economic system. Leftist movements continue to be influential in some parts of the world, most notably Latin-American Bolivarianism, with some having ties to more traditional anti-capitalist movements, such as Bolivarian Venezuela's ties to Cuba.
In many emerging markets, the influence of banking and financial capital has come to increasingly shape national developmental strategies, leading some to argue that we are in a new phase of financial capitalism.
State intervention in global capital markets following the 2007–2008 financial crisis was perceived by some as signalling a crisis for free-market capitalism. Serious turmoil in the banking system and financial markets, due in part to the subprime mortgage crisis, reached a critical stage during September 2008, characterised by severely contracted liquidity in the global credit markets that posed an existential threat to investment banks and other institutions.
Future
According to Michio Kaku, the transition to the information society involves abandoning some parts of capitalism, as the "capital" required to produce and process information becomes available to the masses and difficult to control, and is closely related to the controversial issues of intellectual property. Some have further speculated that the development of mature nanotechnology, particularly of universal assemblers, may make capitalism obsolete, with capital ceasing to be an important factor in the economic life of humanity. Various thinkers have also explored what kind of economic system might replace capitalism, such as Bob Avakian, Jason Hickel, Paul Mason, Richard D. Wolff and contributors to the "Scientists' warning on affluence".
Role of women
Women's historians have debated the impact of capitalism on the status of women. Alice Clark argued that, when capitalism arrived in 17th-century England, it negatively impacted the status of women, who lost much of their economic importance. Clark argued that, in 16th-century England, women were engaged in many aspects of industry and agriculture. The home was a central unit of production, and women played a vital role in running farms and in some trades and landed estates. Their useful economic roles gave them a sort of equality with their husbands. However, Clark argued, as capitalism expanded in the 17th century, there was more and more division of labor, with the husband taking paid labor jobs outside the home, and the wife reduced to unpaid household work. Middle-class women were confined to an idle domestic existence, supervising servants; lower-class women were forced to take poorly paid jobs. Capitalism, therefore, had a negative effect on women. By contrast, Ivy Pinchbeck argued that capitalism created the conditions for women's emancipation. Tilly and Scott have emphasized the continuity in the status of women, finding three stages in European history. In the preindustrial era, production was mostly for home use, and women produced many household needs. The second stage was the "family wage economy" of early industrialization. During this stage, the entire family depended on the collective wages of its members, including husband, wife, and older children. The third, or modern, stage is the "family consumer economy", in which the family is the site of consumption, and women are employed in large numbers in retail and clerical jobs to support rising standards of consumption.
See also
2007–2008 financial crisis
Brenner debate
Capitalism and Islam
Capitalist mode of production
Enclosure and British Agricultural Revolution
Fernand Braudel
History of capitalist theory
History of globalization
History of private equity and venture capital
Primitive accumulation of capital
Protestant work ethic
Simple commodity production
References
Further reading
Appleby, Joyce. The Relentless Revolution: A History of Capitalism (2011)
Business History Review, special issue on Italy and the Origins of Capitalism, Volume 94, Issue 1, Spring 2020.
Cheta, Omar Youssef. "The economy by other means: The historiography of capitalism in the modern Middle East", History Compass (April 2018) 16#4
Comninel, George C. "English feudalism and the origins of capitalism" Journal of Peasant Studies, 27#4 (2000), pp. 1–53
Hilton, Rodney H., ed. (1976). The Transition from Feudalism to Capitalism. Contains Maurice Dobb and Paul Sweezy's famous debate on the transition from feudalism to capitalism.
Blog discussion of the Dobb–Sweezy debate at Leftspot.com
Duplessis, Robert S. Transitions to Capitalism in Early Modern Europe (1997).
Friedman, Walter A. "Recent trends in business history research: Capitalism, democracy, and innovation". Enterprise & Society 18.4 (2017): 748–771.
Giddens, Anthony. Capitalism and modern social theory: An analysis of the writings of Marx, Durkheim and Max Weber (1971).
Hilt, Eric. "Economic History, Historical Analysis, and the 'New History of Capitalism'". Journal of Economic History (June 2017) 77#2, pp. 511–536.
Kocka, Jürgen. Capitalism: A Short History (2016).
McCarraher, Eugene. The Enchantments of Mammon: How Capitalism Became the Religion of Modernity (2022).
Marx, Karl (1867) Das Kapital
Morton, Adam David. "The Age of Absolutism: capitalism, the modern states-system and international relations" Review of International Studies (2005), 31 : 495–517 Cambridge University Press
Neal, Larry, and Jeffrey G. Williamson, eds. The Cambridge History of Capitalism (2 Vol 2016)
Olmstead, Alan L., and Paul W. Rhode. "Cotton, slavery, and the new history of capitalism". Explorations in Economic History 67 (2018): 1–17.
O'Sullivan, Mary. "The Intelligent Woman's Guide to Capitalism". Enterprise & Society 19.4 (2018): 751–802.
Nolan, Peter (2009). Crossroads: The End of Wild Capitalism. Marshall Cavendish.
Patriquin, Larry "The Agrarian Origins of the Industrial Revolution in England" Review of Radical Political Economics, Vol. 36, No. 2, 196–216 (2004)
Perelman, Michael (2000). The Invention of Capitalism: Classical Political Economy and the Secret History of Primitive Accumulation. Duke University Press.
Prak, Maarten; Zanden, Jan Luiten van (2022). Pioneers of Capitalism: The Netherlands 1000–1800. Princeton University Press.
Schuessler, Jennifer. "In History Departments, It's Up With Capitalism", The New York Times, April 6, 2013.
Schumpeter, Joseph A. Can capitalism survive? (1978)
James Denham-Steuart (1767). An Inquiry into the Principles of Political Economy, vols. 1–3.
Weber, Max. The Protestant Ethic and the Spirit of Capitalism (2002).
Zmolek, Mike (2000). "The Case for Agrarian Capitalism: A Response to Albritton". Journal of Peasant Studies, Volume 27, Issue 4, July 2000, pp. 138–59.
Zmolek M. (2001) DEBATE – Further Thoughts on Agrarian Capitalism: A Reply to Albritton The Journal of Peasant Studies, Volume 29, Number 1, October 2001, pp. 129–54 (26)
Debating Agrarian Capitalism: A Rejoinder to Albritton
Research in Political Economy, Volume 22
The Roots of Merchant Capitalism
World Socialist Movement. "What Is Capitalism?" World Socialism, 13 August 2007.
Thomas K. McCraw, "The Current Crisis and the Essence of Capitalism", The Montreal Review (August, 2011)
Zofia Łapniewska, "History of Capitalism", Museum des Kapitalismus, Berlin 2014.
Capitalism
Economic history
Late modern economic history
Enculturation
Enculturation is the process by which people learn the dynamics of their surrounding culture and acquire values and norms appropriate or necessary to that culture and its worldviews.
Definition and history of research
The term enculturation was used first by sociologist of science Harry Collins to describe one of the models whereby scientific knowledge is communicated among scientists, and is contrasted with the 'algorithmic' mode of communication.
The ingredients discussed by Collins for enculturation are:
Learning by Immersion: whereby aspiring scientists learn by engaging in the daily activities of the laboratory, interacting with other scientists, and participating in experiments and discussions.
Tacit Knowledge: highlighting the importance of tacit knowledge—knowledge that is not easily codified or written down but is acquired through experience and practice.
Socialization: where individuals learn the social norms, values, and behaviours expected within the scientific community.
Language and Discourse: Scientists must become fluent in the terminology, theoretical frameworks, and modes of argumentation specific to their discipline.
Community Membership: recognition of the individual as a legitimate member of the scientific community.
The problem tackled in Collins's article was the early experiments on the detection of gravitational waves.
Enculturation is mostly studied in sociology and anthropology. The influences that limit, direct, or shape the individual (whether deliberately or not) include parents, other adults, and peers. If successful, enculturation results in competence in the language, values, and rituals of the culture. Growing up, everyone goes through their own version of enculturation. Enculturation helps form an individual into an acceptable citizen. Culture impacts everything that an individual does, regardless of whether they know about it. Enculturation is a deep-rooted process that binds together individuals. Even as a culture undergoes changes, elements such as central convictions, values, perspectives, and child-rearing practices remain similar. Enculturation paves the way for the tolerance that is needed for peaceful coexistence.
The process of enculturation, most commonly discussed in the field of anthropology, is closely related to socialization, a concept central to the field of sociology. Both roughly describe the adaptation of an individual into social groups by absorbing the ideas, beliefs and practices surrounding them. In some disciplines, socialization refers to the deliberate shaping of the individual. As such, the term may cover both deliberate and informal enculturation.
The process of learning and absorbing culture need not be social, direct or conscious. Cultural transmission can occur in various forms, though the most common social methods include observing other individuals, being taught or being instructed. Less obvious mechanisms include learning one's culture from the media, the information environment and various social technologies, which can lead to cultural transmission and adaptation across societies. A good example of this is the diffusion of hip-hop culture into states and communities beyond its American origins.
Enculturation has often been studied in the context of non-immigrant African Americans.
Conrad Phillip Kottak (in Window on Humanity) writes:
Enculturation is referred to as acculturation in some academic literature. However, more recent literature has signalled a difference in meaning between the two. Whereas enculturation describes the process of learning one's own culture, acculturation denotes learning a different culture, for example, that of a host. The latter can be linked to ideas of a culture shock, which describes an emotionally-jarring disconnect between one's old and new culture cues.
Famously, the sociologist Talcott Parsons once described children as "barbarians" of a sort, since they are fundamentally uncultured.
How enculturation occurs
When minorities come into the U.S., they may identify fully with their ethnic heritage before taking part in the process of enculturation. Enculturation can happen in several ways. Direct teaching means that one's family, instructors, or other members of society explicitly teach certain beliefs, values, or expected standards of conduct. Parents may play a vital role in teaching their children standard behavior for their culture, including table manners and some aspects of polite social interactions. Strict familial and societal teaching, which often uses different forms of positive and negative reinforcement to shape behavior, can lead a person to adhere closely to their religious convictions and customs. Schools also provide a formal setting to learn national values, such as honoring a country's flag, national anthem, and other significant patriotic symbols.
Participatory learning occurs as individuals take an active role in interacting with their environment and culture. Through their own engagement in meaningful activities, they learn the socio-cultural norms of their area and may adopt related qualities and values. For example, if a school organizes an outing to gather trash at a public park, the activity helps instill the values of respect for nature and environmental protection. Religious customs frequently stress participatory learning; for example, children who take part in the singing of psalms during Christmas will assimilate the values and practices of the occasion.
Observational learning is when knowledge is gained essentially by observing and imitating others. As long as an individual identifies with a model, believes that imitating the model will lead to good outcomes, and feels capable of imitating the behaviour, learning can happen without any explicit instruction. For example, a child who is fortunate enough to be born to parents in a caring relationship will learn to be affectionate and attentive in their future relationships.
See also
Civil society
Dual inheritance theory
Education
Educational anthropology
Ethnocentrism
Indoctrination
Intercultural competence
Mores
Norm (philosophy)
Norm (sociology)
Peer pressure
Transculturation
References
Bibliography
Further reading
External links
Enculturation and Acculturation
Community empowerment
Concepts of moral character, historical and contemporary (Stanford Encyclopedia of Philosophy)
Cultural concepts
Cultural studies
Interculturalism
Occidentalism
Occidentalism refers to a discipline that discusses the Western world (the Occident). In this context the West becomes the object, while the East is the subject. The West in the context of Occidentalism does not refer to the West in a geographical sense, but to culture or custom, especially covering the fields of thought, philosophy, sociology, anthropology, history, religion, colonialism, war, apartheid, and geography. It is not as popular as Orientalism in the general public and in academic settings.
The term emerged as the reciprocal of the notion of Orientalism popularized by literary critic Edward Said, which refers to Western stereotypes of the Eastern world, the Orient.
Terminologies
Different languages have different terms relating to Occidentalism and Westernization.
In Arabic
In Arabic, is a contemporary psychological, social, and cultural phenomenon. The individuals who embody it are characterized by their inclination toward, attachment to, and emulation of the West. It originated in non-Western societies as a result of the civilizational shock that befell them before and during colonialism.
means the 'science of Westernization' or 'Occidentalism'. It is opposite to the science of Orientalism. Dr. Hassan Hanafi said: "Occidentalism is the unraveling of the double historical knot between the self and the other... It is the elimination of the complex of greatness of the Western other, by transforming it from a subject in itself to a studied object... The task of Orientalism is to eliminate Eurocentrism and show how European consciousness has taken center stage throughout modern history, within its own civilizational environment."
means 'Westernization'. It is "a cultural and political action carried out by officials in the West, most importantly Orientalists and Westernizers, aiming to obscure the features of the religious and cultural life of Islamic and other societies, and to force these societies to imitate the West and revolve in its orbit."
Occidental representations
In China, "Traditions Regarding Western Countries" became a regular part of the Twenty-Four Histories from the 5th century AD, when commentary about The West concentrated upon on an area that did not extend farther than Syria. The extension of European imperialism in the 18th and 19th centuries established, represented, and defined the existence of an "Eastern world" and of a "Western world". Western stereotypes appear in works of Indian, Chinese and Japanese art of those times. At the same time, Western influence in politics, culture, economics and science came to be constructed through an imaginative geography of West and East.
Occidentalism figures
Hassan Hanafi (Cairo)
Mukti Ali (Indonesia)
Adian Husaini (Indonesia)
Occidentalism debated
In Occidentalism: The West in the Eyes of its Enemies (2004), Ian Buruma and Avishai Margalit argue that nationalist and nativist resistance to the West replicates Eastern-world responses against the socio-economic forces of modernization, which originated in Western culture, among utopian radicals and conservative nationalists who viewed capitalism, liberalism, and secularism as forces destructive of their societies and cultures. While the early responses to the West were a genuine encounter between alien cultures, many of the later manifestations of Occidentalism betray the influence of Western ideas upon Eastern intellectuals, such as the supremacy of the nation-state, the Romantic rejection of rationality, and the spiritual impoverishment of the citizenry of liberal democracies.
Buruma and Margalit trace that resistance to German Romanticism and to the debates between the Westernisers and the Slavophiles in 19th-century Russia, and show that similar arguments appear in the ideologies of Zionism, Maoism, Islamism, and Imperial Japanese nationalism. Nonetheless, Alastair Bonnett rejects the analyses of Buruma and Margalit as Eurocentric, saying that the field of Occidentalism emerged from the interconnection of Eastern and Western intellectual traditions.
See also
Anti-Western sentiment
Colonialism
Decolonization
Global arrogance
Atlanticism
Indigenism
Islamism
Orient
Orientalism
The War Against the West
References
Further reading
Buruma, I. and Margalit, A., Occidentalism: A Short History of Anti-Westernism, Atlantic Books, London, 2004.
Carrier, James G. Occidentalism: Images of the West, Oxford, Clarendon Press, 1995.
Chen, Xiaomei, Occidentalism: A Theory Of Counter-Discourse in Post-Mao China, second ed., rev. and expanded. Lanham, Maryland: Rowman & Littlefield, 2002.
Cohen, Nick, What's Left?: How the Left Lost its Way, New York, Harper Perennial, 2007.
Hägerdal, Hans, "Negotiating With the Bogey Man: Perceptions of European-Southeast Asian Relations in Lore and Tradition", HumaNetten 52, 2024, pp. 195-217 https://conferences.lnu.se/index.php/hn/article/view/4310/3801
Hanafi, Hassan, Muqaddimah fi 'ilm al-istighrab (Introduction to Occidentalism), Cairo, Madbuli, 1991.
König, Daniel G., Arabic-Islamic Views of the Latin West. Tracing the Emergence of Medieval Europe, Oxford, OUP, 2015.
Souza, Teotonio R. de, "Orientalism, Occidentosis and Other Viral Strains: Historical Objectivity and Social Responsibilities", in The Portuguese, Indian Ocean and European Bridgeheads, Festschrift in Honour of K.S. Mathew, eds Pius Malekandathil & Jamal Mohammed, Fundação Oriente, India, 2001, pp. 452–479. https://web.archive.org/web/20160422180612/https://pt.scribd.com/doc/30027278/Orientalism-Occidentosis-and-Other-Viral-Strains-Historical-Objectivity-and-Social-Responsibilities
History of international relations
Western culture
Asiacentrism
Cultural movement
A cultural movement is a change in the way a number of different disciplines approach their work. This embodies all art forms, the sciences, and philosophies. Historically, different nations or regions of the world have gone through their own independent sequence of movements in culture; but as world communications have accelerated, this geographical distinction has become less distinct. When cultural movements go through revolutions from one to the next, genres tend to get attacked and mixed up, and often new genres are generated and old ones fade. These changes are often reactions against the prior cultural form, which typically has grown stale and repetitive. An obsession emerges among the mainstream with the new movement, and the old one falls into neglect – sometimes it dies out entirely, but often it chugs along favored in a few disciplines and occasionally making reappearances (sometimes prefixed with "neo-").
There is continual argument over the precise definition of each of these periods as one historian might group them differently, or choose different names or descriptions. Even though in many cases the popular change from one to the next can be swift and sudden, the beginning and end of movements are somewhat subjective. This is because the movements did not spring out of the blue and into existence then come to an abrupt end and lose total support, as would be suggested by a date range. Thus use of the term "period" is somewhat deceptive. "Period" also suggests a linearity of development, whereas it has not been uncommon for two or more distinctive cultural approaches to be active at the same time. Historians will be able to find distinctive traces of a cultural movement before its accepted beginning, and there will always be new creations in old forms. So it can be more useful to think in terms of broad "movements" that have rough beginnings and endings. Yet for historical perspective, some rough date ranges will be provided for each to indicate the "height" or accepted time span of the movement.
This current article covers Western, notably European and American cultural movements. They have, however, been paralleled by cultural movements in East Asia and elsewhere. In the late 20th and early 21st century in Thailand, for example, there has been a cultural shift away from Western social and political values and more toward Japanese and Chinese. As well, Thai culture has reinvigorated monarchical concepts to accommodate state shifts away from Western ideology regarding democracy and monarchies.
Cultural movements
Graeco-Roman
Greek culture marked a departure from the other Mediterranean cultures that preceded and surrounded it. The Romans adopted Greek and other styles, and spread the result throughout Western Europe, North Africa, and the Middle East. Together, Greek and Roman thought in philosophy, religion, science, history, and all forms of thought can be viewed as a central underpinning of Western culture, and is therefore termed the Classical Age by some. Others might divide it into the Hellenistic period and the Roman period, or might choose other finer divisions.
See: Classical architecture — Classical sculpture — Greek architecture — Hellenistic architecture — Ionic — Doric — Corinthian — Stoicism — Cynicism — Epicurean — Roman architecture — Early Christian — Neoplatonism
Romanesque (11th & 12th centuries)
A style (esp. architectural) similar in form and materials to Roman styles. Romanesque seems to be the first pan-European style since Roman Imperial Architecture and examples are found in every part of the continent.
See: Romanesque architecture — Ottonian Art
Gothic (mid 12th century until mid 15th century)
See: Gothic architecture — Gregorian chant — Neoplatonism
Nominalism
Rejects Platonic realism as a requirement for thinking and speaking in general terms.
Humanism (16th century)
Renaissance
The use of light, shadow, and perspective to more accurately represent life. Because of how fundamentally these ideas were felt to alter so much of life, some have referred to it as the "Golden Age". In reality it was less an "Age" and more of a movement in popular philosophy, science, and thought that spread over Europe (and probably other parts of the world), over time, and affected different aspects of culture at different points in time. Very roughly, the following periods can be taken as indicative of place/time foci of the Renaissance: Italian Renaissance 1450–1550. Spanish Renaissance 1550–1587. English Renaissance 1588–1629.
Protestant Reformation
The Protestant Reformation, often referred to simply as the Reformation, was a schism from the Roman Catholic Church initiated by Martin Luther, John Calvin, Huldrych Zwingli and other early Protestant Reformers in the 16th century Europe.
Mannerism
Anti-classicist movement that sought to emphasize the feeling of the artist himself.
See: Mannerism/Art
Baroque
Emphasizes power and authority, characterized by intricate detail and without the "disturbing angst" of Mannerism. Essentially it is exaggerated Classicism to promote and glorify the Church and State. Occupied with notions of infinity.
See: Baroque art — Baroque music
Rococo
Neoclassical (17th–19th centuries)
Severe, unemotional movement recalling Roman and Greek ("classical") style, reacting against the overbred Rococo style and the emotional Baroque style. It stimulated revival of classical thinking, and had especially profound effects on science and politics. It also had a direct influence on Academic Art in the 19th century. Beginning in the early 17th century with Cartesian thought (see René Descartes), this movement provided philosophical frameworks for the natural sciences, sought to determine the principles of knowledge by rejecting all things previously believed to be known about the world. In Renaissance Classicism attempts are made to recreate the classic art forms — tragedy, comedy, and farce.
See also: Weimar Classicism
Age of Enlightenment (1688–1789): Reason (rationalism) seen as the ideal.
Romanticism (1770–1830)
Began in Germany and spread to England and France as a reaction against Neoclassicism and against the Age of Enlightenment. The notion of "folk genius", or an inborn and intuitive ability to do magnificent things, is a core principle of the Romantic movement. Nostalgia for the primitive past in preference to the scientifically minded present. Romantic heroes, exemplified by Napoleon, are popular. Fascination with the past leads to a resurrection of interest in the Gothic period. It did not really replace the Neoclassical movement so much as provide a counterbalance; many artists sought to join both styles in their works.
See: Symbolism
Realism (1830–1905)
Ushered in by the Industrial Revolution and growing nationalism in the world. Began in France. Attempts to portray the speech and mannerisms of everyday people in everyday life. Tends to focus on middle class social and domestic problems. Plays by Ibsen are an example. Naturalism evolved from Realism, following it briefly in art and more enduringly in theatre, film, and literature. Impressionism, based on 'scientific' knowledge and discoveries, concerns observing nature and reality objectively.
See: Post-Impressionism — Neo-impressionism — Pointillism — Pre-Raphaelite
Art Nouveau (1880–1905)
Decorative, symbolic art
See: Transcendentalism
Modernism (1880–1965)
Also known as the Avant-garde movement. Originating in the 19th century with Symbolism, the Modernist movement composed itself of a wide range of 'isms' that ran in contrast to Realism and that sought out the underlying fundamentals of art and philosophy. The Jazz Age and Hollywood emerge and have their heydays.
See: Fauvism — Cubism — Futurism — Suprematism — Dada — Constructivism — Surrealism — Expressionism — Existentialism — Op art — Art Deco — Bauhaus — Neo-Plasticism — Precisionism — Abstract expressionism — New Realism — Color field painting — Happening — Fluxus — Hard-edge painting — Pop art — Photorealism — Minimalism — Postminimalism — Lyrical abstraction — Situationism
Postmodernism (since c.1965)
A reaction to Modernism, in a way, Postmodernism largely discards the notion that artists should seek pure fundamentals, often questioning whether such fundamentals even exist – or suggesting that if they do exist, they may be irrelevant. It is exemplified by movements such as deconstructivism, conceptual art, etc.
See: Postmodern philosophy — Postmodern music — Postmodern art
Post-postmodernism (since c.1990)
See also
Art movement
List of art movements
Critical theory
Cultural imperialism
Cultural sensibility
History of philosophy
Postliterate society
Periodization
Social movement
External links
Alphabetical list of some movements, styles, discoveries and facts on the World History Timeline chart
Culture
Social movements
Matriarchy
Matriarchy is a social system in which positions of dominance and privilege are held by women. In a broader sense it can also extend to moral authority, social privilege, and control of property. While those definitions apply in general English, definitions specific to anthropology and feminism differ in some respects.
Matriarchies may also be confused with matrilineal, matrilocal, and matrifocal societies. While some may consider any non-patriarchal system to be matriarchal, most academics exclude those systems from matriarchies as strictly defined.
Definitions, connotations, and etymology
According to the Oxford English Dictionary (OED), matriarchy is a "form of social organization in which the mother or oldest female is the head of the family, and descent and relationship are reckoned through the female line; government or rule by a woman or women." A popular definition, according to James Peoples and Garrick Bailey, is "female dominance". Within the academic discipline of cultural anthropology, according to the OED, matriarchy is a "culture or community in which such a system prevails" or a "family, society, organization, etc., dominated by a woman or women" without reference to laws that require women to dominate. In general anthropology, according to William A. Haviland, matriarchy is "rule by women". According to Lawrence A. Kuzner in 1997, A. R. Radcliffe-Brown argued in 1924 that the definitions of matriarchy and patriarchy had "logical and empirical failings (...) [and] were too vague to be scientifically useful".
Most academics exclude egalitarian nonpatriarchal systems from matriarchies more strictly defined. According to Heide Göttner-Abendroth, a reluctance to accept the existence of matriarchies might be based on a specific culturally biased notion of how to define matriarchy: because in a patriarchy men rule over women, a matriarchy has frequently been conceptualized as women ruling over men, while she believed that matriarchies are egalitarian.
The word matriarchy, for a society politically led by females, especially mothers, who also control property, is often interpreted to mean the general opposite of patriarchy, but it is not an opposite. According to Peoples and Bailey, the view of anthropologist Peggy Reeves Sanday is that matriarchies are not a mirror or inverted form of patriarchies but rather that a matriarchy "emphasizes maternal meanings where 'maternal symbols are linked to social practices influencing the lives of both sexes and where women play a central role in these practices'". Journalist Margot Adler wrote, "literally, ... ["matriarchy"] means government by mothers, or more broadly, government and power in the hands of women." Barbara Love and Elizabeth Shanklin wrote, "by 'matriarchy,' we mean a non-alienated society: a society in which women, those who produce the next generation, define motherhood, determine the conditions of motherhood, and determine the environment in which the next generation is reared." According to Cynthia Eller, "'matriarchy' can be thought of ... as a shorthand description for any society in which women's power is equal or superior to men's and in which the culture centers around values and life events described as 'feminine.'" Eller wrote that the idea of matriarchy mainly rests on two pillars, romanticism and modern social criticism. With respect to a prehistoric matriarchal Golden Age, according to Barbara Epstein, "matriarchy ... means a social system organized around matriliny and goddess worship in which women have positions of power." According to Adler, in the Marxist tradition, it usually refers to a pre-class society "where women and men share equally in production and power."
According to Adler, "a number of feminists note that few definitions of the word [matriarchy], despite its literal meaning, include any concept of power, and they suggest that centuries of oppression have made it impossible for women to conceive of themselves with such power."
Matriarchy has often been presented as negative, in contrast to patriarchy as natural and inevitable for society, and thus that matriarchy is hopeless. Love and Shanklin wrote:
When we hear the word "matriarchy", we are conditioned to a number of responses: that matriarchy refers to the past and that matriarchies have never existed; that matriarchy is a hopeless fantasy of female domination, of mothers dominating children, of women being cruel to men. Conditioning us negatively to matriarchy is, of course, in the interests of patriarchs. We are made to feel that patriarchy is natural; we are less likely to question it, and less likely to direct our energies to ending it.
The Matriarchal Studies school led by Göttner-Abendroth calls for an even more inclusive redefinition of the term: Göttner-Abendroth defines Modern Matriarchal Studies as the "investigation and presentation of non-patriarchal societies", effectively defining matriarchy as non-patriarchy. She has also defined matriarchy as characterized by the sharing of power equally between the two genders. According to Diane LeBow, "matriarchal societies are often described as ... egalitarian ...", although anthropologist Ruby Rohrlich has written of "the centrality of women in an egalitarian society."
Matriarchy is also used to describe a family form in which the woman occupies the ruling position in the family. Some, including Daniel Moynihan, claimed that there is a matriarchy among Black families in the United States, because a quarter of them were headed by single women; thus, families composing a substantial minority of a substantial minority could be enough for the latter to constitute a matriarchy within a larger non-matriarchal society with non-matriarchal political dynamics.
Etymologically, it is from Latin māter (genitive mātris), "mother" and Greek ἄρχειν arkhein, "to rule". The notion of matriarchy was defined by Joseph-François Lafitau (1681–1746), who first named it ginécocratie. According to the OED, the earliest known attestation of the word matriarchy is in 1885. By contrast, gynæcocracy, meaning 'rule of women', has been in use since the 17th century, building on the Greek word found in Aristotle and Plutarch.
Terms with similar etymology are also used in various social sciences and humanities to describe matriarchal or matriological aspects of social, cultural, and political processes. Adjective matriological is derived from the noun matriology that comes from Latin word māter (mother) and Greek word λογος (logos, teaching about). The term matriology was used in theology and history of religion as a designation for the study of particular motherly aspects of various female deities. The term was subsequently borrowed by other social sciences and humanities and its meaning was widened in order to describe and define particular female-dominated and female-centered aspects of cultural and social life. The male alternative for matriology is patriology, with patriarchy being the male alternative to matriarchy.
Related concepts
In their works, Johann Jakob Bachofen and Lewis Morgan used such terms and expressions as mother-right, female rule, gyneocracy, and female authority. All these terms meant the same: the rule by females (mother or wife). Although Bachofen and Lewis Morgan confined the "mother-right" inside households, it was the basis of female influence upon the whole society. The authors of the classics did not think that gyneocracy meant 'female government' in politics. They were aware of the fact that the sexual structure of government had no relation to domestic rule and to roles of both sexes.
Words beginning with gyn-
A matriarchy is also sometimes called a gynarchy, a gynocracy, a gynecocracy, or a gynocentric society, although these terms do not definitionally emphasize motherhood. Cultural anthropologist Jules de Leeuwe argued that some societies were "mainly gynecocratic" (others being "mainly androcratic").
Gynecocracy, gynaecocracy, gynocracy, gyneocracy, and gynarchy generally mean 'government by women over women and men'. All of these words are synonyms in their most important definitions, and while these words all share that principal meaning, they differ a little in their additional meanings, so that gynecocracy also means 'women's social supremacy', gynaecocracy also means 'government by one woman', 'female dominance', and, derogatorily, 'petticoat government', and gynocracy also means 'women as the ruling class'. Gyneocracy is rarely used in modern times. None of these definitions are limited to mothers.
Some question whether a queen ruling without a king is sufficient to constitute female government, given the amount of participation of other men in most such governments. One view is that it is sufficient. "By the end of [Queen] Elizabeth's reign, gynecocracy was a fait accompli", according to historian Paula Louise Scalingi. Gynecocracy is defined by Scalingi as "government by women", similar to dictionary definitions (one dictionary adding 'women's social supremacy' to the governing role). Scalingi reported arguments for and against the validity of gynocracy and said, "the humanists treated the question of female rule as part of the larger controversy over sexual equality." Possibly, queenship, because of the power wielded by men in leadership and assisting a queen, leads to queen bee syndrome, contributing to the difficulty of other women in becoming heads of the government.
Some matriarchies have been described by historian Helen Diner as "a strong gynocracy" and "women monopolizing government". She described matriarchal Amazons as "an extreme, feminist wing" of humanity and wrote that North African women "ruled the country politically" before being overthrown by forms of patriarchy; according to Adler, Diner "envision[ed] a dominance matriarchy".
Gynocentrism is the 'dominant or exclusive focus on women', is opposed to androcentrism, and "invert[s] ... the privilege of the ... [male/female] binary ...[,] [some feminists] arguing for 'the superiority of values embodied in traditionally female experience'".
Intergenerational relationships
Some people who sought evidence for the existence of a matriarchy often conflated matriarchy with anthropological terms and concepts describing specific arrangements in the field of family relationships and the organization of family life, such as matrilineality and matrilocality. These terms refer to intergenerational relationships (as matriarchy may), but do not distinguish between males and females insofar as they apply to specific arrangements for sons as well as daughters from the perspective of their relatives on their mother's side. Accordingly, these concepts do not represent matriarchy as 'power of women over men' but instead describe familial dynamics.
Words beginning with matri-
Anthropologists have begun to use the term matrifocality. There is some debate concerning the terminological delineation between matrifocality and matriarchy. Matrifocal societies are those in which women, especially mothers, occupy a central position. Anthropologist R. T. Smith refers to matrifocality as the kinship structure of a social system whereby the mothers assume structural prominence. The term does not necessarily imply domination by women or mothers. In addition, some authors start from the premise of a mother-child dyad as the core of a human group, where the grandmother is the central ancestor with her children and grandchildren clustered around her in an extended family.
The term matricentric means 'having a mother as head of the family or household'.
Matristic: Feminist scholars and archeologists such as Marija Gimbutas, Gerda Lerner, and Riane Eisler label their notion of a "woman-centered" society surrounding Mother Goddess worship during prehistory (in Paleolithic and Neolithic Europe) and in ancient civilizations by using the term matristic rather than matriarchal. Marija Gimbutas states that she uses "the term matristic simply to avoid the term matriarchy with the understanding that it incorporates matriliny."
Matrilineality, in which descent is traced through the female line, is sometimes conflated with historical matriarchy. Sanday favors redefining and reintroducing the word matriarchy, especially in reference to contemporary matrilineal societies such as the Minangkabau. The 19th-century belief that matriarchal societies existed was due to the transmission of "economic and social power ... through kinship lines" so that "in a matrilineal society all power would be channeled through women. Women may not have retained all power and authority in such societies ..., but they would have been in a position to control and dispense power... not unlike the nagging wife or the domineering mother."
A matrilocal society defines a society in which a couple resides close to the bride's family rather than the bridegroom's family.
History and distribution
Most anthropologists hold that there are no known societies that are unambiguously matriarchal. According to J. M. Adovasio, Olga Soffer, and Jake Page, no true matriarchy is known to have actually existed. Anthropologist Joan Bamberger argued that the historical record contains no primary sources on any society in which women dominated. Anthropologist Donald Brown's list of human cultural universals (viz., features shared by nearly all current human societies) includes men being the "dominant element" in public political affairs, which he asserts is the contemporary opinion of mainstream anthropology. There are some disagreements and possible exceptions. A belief that women's rule preceded men's rule was, according to Haviland, "held by many nineteenth-century intellectuals". The hypothesis survived into the 20th century and was notably advanced in the context of feminism and especially second-wave feminism, but the hypothesis is mostly discredited today, most experts saying that it was never true.
Matriarchs, according to Peoples and Bailey, do exist; there are "individual matriarchs of families and kin groups."
By region and culture
Ancient Near East
The Cambridge Ancient History (1975) stated that "the predominance of a supreme goddess is probably a reflection from the practice of matriarchy which at all times characterized Elamite civilization to a greater or lesser degree, before this practice was overthrown by the patriarchy".
Europe
Tacitus claimed in his book Germania that in "the nations of the Sitones woman is the ruling sex."
Anne Helene Gjelstad describes the women on the Estonian islands Kihnu and Manija as "the last matriarchal society in Europe" because "the older women here take care of almost everything on land as their husbands travel the seas".
Asia
Bangladesh
The Khasi and the Garo people residing in the Sylhet and Mymensingh regions are two of the top matriarchal societies of Bangladesh.
Burma
Possible matriarchies in Burma are, according to Jorgen Bisch, the Padaungs and, according to Andrew Marshall, the Kayaw.
China
The Mosuo culture, which is in China near Tibet, is frequently described as matriarchal. The term matrilineal is sometimes used, and, while more accurate, still does not reflect the full complexity of their social organization. In fact, it is not easy to categorize Mosuo culture within traditional Western definitions. They have aspects of a matriarchal culture: women are often the head of the house, inheritance is through the female line, and women make business decisions. However, unlike in a true matriarchy, political power tends to be in the hands of males, and the current culture of the Mosuo has been heavily shaped by their minority status.
India
In India, of communities recognized in the national Constitution as Scheduled Tribes, "some ... [are] matriarchal and matrilineal" "and thus have been known to be more egalitarian". According to interviewer Anuj Kumar, Manipur, India, "has a matriarchal society", but this may not be scholarly. In Kerala, Nairs, Thiyyas, Brahmins of Payyannoor village and Muslims of North Malabar and in Karnataka, Bunts and Billavas follow the matrilineal system.
Indonesia
Anthropologist Peggy Reeves Sanday has said that the Minangkabau society may be a matriarchy.
Ancient Vietnam (before 43 CE)
According to William S. Turley, "the role of women in traditional Vietnamese culture was determined [partly] by ... indigenous customs bearing traces of matriarchy", affecting "different social classes" to "varying degrees". Peter C. Phan explains that "the ancient Vietnamese family system was most likely matriarchal, with women ruling over the clan or tribe" until the Vietnamese "adopt[ed] ... the patriarchal system introduced by the Chinese." That being said, even after adopting the patriarchal Chinese system, Vietnamese women, especially peasant women, still held a higher position than women in most patriarchal societies. According to Chiricosta, the legend of Âu Cơ is said to be evidence of "the presence of an original 'matriarchy' in North Vietnam and [it] led to the double kinship system, which developed there .... [and which] combined matrilineal and patrilineal patterns of family structure and assigned equal importance to both lines." Chiricosta said that other scholars relied on "this 'matriarchal' aspect of the myth to differentiate Vietnamese society from the pervasive spread of Chinese Confucian patriarchy," and that "resistance to China's colonization of Vietnam ... [combined with] the view that Vietnam was originally a matriarchy ... [led to viewing] women's struggles for liberation from (Chinese) patriarchy as a metaphor for the entire nation's struggle for Vietnamese independence," and therefore, a "metaphor for the struggle of the matriarchy to resist being overthrown by the patriarchy." According to Keith Weller Taylor, "the matriarchal flavor of the time is ... attested by the fact that Trung Trac's mother's tomb and spirit temple have survived, although nothing remains of her father", and the "society of the Trung sisters" was "strongly matrilineal". According to Donald M. Seekins, an indication of "the strength of matriarchal values" was that a woman, Trưng Trắc, with her younger sister Trưng Nhị, raised an army of "over 80,000 soldiers ... [in which] many of her officers were women", with which they defeated the Chinese. According to Seekins, "in [the year] 40, Trung Trac was proclaimed queen, and a capital was built for her" and modern Vietnam considers the Trung sisters to be heroines. According to Karen G. Turner, in the third century A.D., Lady Triệu came to personify "the matriarchal culture that mitigated Confucianized patriarchal norms .... [although] she is also painted as something of a freak ... with her ... savage, violent streak."
Native Americans
The Hopi (in what is now the Hopi Reservation in northeastern Arizona), according to Alice Schlegel, had as its "gender ideology ... one of female superiority, and it operated within a social actuality of sexual equality." According to LeBow (based on Schlegel's work), in the Hopi, "gender roles ... are egalitarian .... [and] [n]either sex is inferior." LeBow concluded that Hopi women "participate fully in ... political decision-making." According to Schlegel, "the Hopi no longer live as they are described here" and "the attitude of female superiority is fading". Schlegel said the Hopi "were and still are matrilineal" and "the household ... was matrilocal". Schlegel attributed the female superiority to the Hopi belief in "life as the highest good ... [with] the female principle ... activated in women and in Mother Earth ... as its source", and to the Hopi having no need for an army, as they did not have rivalries with neighbors. Women were central to institutions of clan and household and predominated "within the economic and social systems (in contrast to male predominance within the political and ceremonial systems)." The Clan Mother, for example, was empowered to overturn land distribution by men if she felt it was unfair, since there was no "countervailing ... strongly centralized, male-centered political structure".
The Iroquois Confederacy or League, combining five (later six) Native American Haudenosaunee nations or tribes before the U.S. became a nation, operated by the Great Binding Law of Peace, a constitution by which women participated in the League's political decision-making, including deciding whether to proceed to war, through what may have been a matriarchy or gyneocracy. According to Doug George-Kanentiio, in this society, mothers exercise central moral and political roles. The dates of this constitution's operation are unknown: the League was formed in approximately 1000–1450, but the constitution was oral until written down in about 1880. The League still exists.
George-Kanentiio explains:
In our society, women are the center of all things. Nature, we believe, has given women the ability to create; therefore it is only natural that women be in positions of power to protect this function....We traced our clans through women; a child born into the world assumed the clan membership of its mother. Our young women were expected to be physically strong....The young women received formal instruction in traditional planting....Since the Iroquois were absolutely dependent upon the crops they grew, whoever controlled this vital activity wielded great power within our communities. It was our belief that since women were the givers of life they naturally regulated the feeding of our people....In all countries, real wealth stems from the control of land and its resources. Our Iroquois philosophers knew this as well as we knew natural law. To us it made sense for women to control the land since they were far more sensitive to the rhythms of the Mother Earth. We did not own the land but were custodians of it. Our women decided any and all issues involving territory, including where a community was to be built and how land was to be used....In our political system, we mandated full equality. Our leaders were selected by a caucus of women before the appointments were subject to popular review....Our traditional governments are composed of an equal number of men and women. The men are chiefs and the women clan-mothers....As leaders, the women closely monitor the actions of the men and retain the right to veto any law they deem inappropriate....Our women not only hold the reins of political and economic power, they also have the right to determine all issues involving the taking of human life. Declarations of war had to be approved by the women, while treaties of peace were subject to their deliberations.
By chronology
Earliest prehistory and undated
The controversy surrounding prehistoric or "primal" matriarchy began in reaction to the 1861 book by Bachofen, Mother Right: An Investigation of the Religious and Juridical Character of Matriarchy in the Ancient World. Several generations of ethnologists were inspired by his pseudo-evolutionary theory of archaic matriarchy. Following him and Jane Ellen Harrison, several generations of scholars, usually arguing from known myths or oral traditions and examination of Neolithic female cult-figures, suggested that many ancient societies might have been matriarchal, or even that there existed a wide-ranging matriarchal society prior to the ancient cultures of which we are aware. After Bachofen's three-volume Myth, Religion, and Mother Right, classicists such as Harrison, Arthur Evans, Walter Burkert, and James Mellaart looked at the evidence of matriarchal religion in pre-Hellenic societies. The concept was further investigated by Lewis Morgan. According to Uwe Wesel, Bachofen's myth interpretations have proved to be untenable. According to historian Susan Mann, as of 2000, "few scholars these days find ... [a "notion of a stage of primal matriarchy"] persuasive."
Kurt Derungs is a recent non-academic author advocating an "anthropology of landscape" based on allegedly matriarchal traces in toponymy and folklore.
Paleolithic and Neolithic Ages
Friedrich Engels, in 1884, claimed that, in the earliest stages of human social development, there was group marriage and that therefore paternity was disputable, whereas maternity was not, so that a family could be traced only through the female line. This was a materialist interpretation of Bachofen's Mutterrecht. Engels speculated that the domestication of animals increased material wealth, which was claimed by men. Engels said that men wanted to control women to use as laborers and to pass on wealth to their children, requiring monogamy; as patriarchy rose, women's status declined until they became mere objects in the exchange trade between men, causing the global defeat of the female sex and the rise of individualism and competition. According to Eller, Engels may have been influenced with respect to women's status by August Bebel, according to whom matriarchy naturally resulted in communism, while patriarchy was characterized by exploitation.
The Austrian writer Bertha Diener (also known as Helen Diner) wrote Mothers and Amazons (1930), the first work to focus on women's cultural history and a classic of feminist matriarchal study. Her view is that all past human societies were originally matriarchal, while most later shifted to patriarchy and degenerated. The controversy intensified with The White Goddess by Robert Graves (1948) and his later analysis of classical Greek mythology, focusing on the reconstruction of earlier myths that had conjecturally been rewritten after a transition from matriarchal to patriarchal religion in very early historical times.
From the 1950s, Marija Gimbutas developed a theory of an Old European culture in Neolithic Europe with matriarchal traits, which had been replaced by the patriarchal system of the Proto-Indo-Europeans in the Bronze Age. However, other anthropologists warned that "the goddess worship or matrilocality that evidently existed in many paleolithic societies was not necessarily associated with matriarchy in the sense of women's power over men. Many societies can be found that exhibit those qualities along with female subordination." According to Eller, Gimbutas had a large part in constructing a myth of historical matriarchy by examining Eastern European cultures that never really resembled the alleged universal matriarchy. Eller asserts that in "actually documented primitive societies" of recent (historical) times, paternity is never ignored and that the sacred status of goddesses does not automatically increase female social status, and she interprets utopian matriarchy as an invented inversion of antifeminism.
From the 1970s, ideas of matriarchy were taken up by popular writers of second-wave feminism such as Riane Eisler, Elizabeth Gould Davis, and Merlin Stone, and expanded with the speculations of Margaret Murray on witchcraft, by the Goddess movement, and in feminist Wicca. "A Golden Age of matriarchy" was prominently presented by Charlene Spretnak and "encouraged" by Stone and Eisler, but, at least for the Neolithic Age, it has been denounced as feminist wishful thinking in works such as The Inevitability of Patriarchy, Why Men Rule, Goddess Unmasked, and The Myth of Matriarchal Prehistory. The idea is not emphasized in third-wave feminism.
J.F. del Giorgio insists on a matrifocal, matrilocal, matrilineal Paleolithic society.
Bronze Age
According to Rohrlich, "many scholars are convinced that Crete was a matriarchy, ruled by a queen-priestess" and the "Cretan civilization" was "matriarchal" before "1500 BC," when it was overrun and colonized by the patriarchy.
Also according to Rohrlich, "in the early Sumerian city-states 'matriarchy seems to have left something more than a trace'".
One common misconception among historians of the Bronze Age such as Stone and Eisler is the notion that the Semites were matriarchal while the Indo-Europeans practiced a patriarchal system. An example of this view is found in Stone's When God Was a Woman, wherein she makes the case that the worship of Yahweh was an Indo-European invention superimposed on an ancient matriarchal Semitic nation. Evidence from the Amorites and pre-Islamic Arabs, however, indicates that the primitive Semitic family was in fact patriarchal and patrilineal.
However, not all scholars agree. Anthropologist and Biblical scholar Raphael Patai writes in The Hebrew Goddess that the Jewish religion, far from being pure monotheism, contained from earliest times strong polytheistic elements, chief of which was the cult of Asherah, the mother goddess. A story in the Biblical Book of Judges places the worship of Asherah in the 12th century BC. Originally a Canaanite goddess, her worship was adopted by Hebrews who intermarried with Canaanites. She was worshipped in public and was represented by carved wooden poles. Numerous small nude female figurines of clay were found all over ancient Palestine and a seventh-century Hebrew text invokes her aid for a woman giving birth.
Shekinah is the name of the feminine holy spirit who embodies both divine radiance and compassion. Exemplifying various traits associated with mothers, she comforts the sick and dejected, accompanies the Jews whenever they are exiled, and intercedes with God to exercise mercy rather than to inflict retribution on sinners. While not a creation of the Hebrew Bible, Shekinah appears in a slightly later Aramaic translation of the Bible in the first or second century C.E., according to Patai. Initially portrayed as the presence of God, she later becomes distinct from God, taking on more physical attributes.
Meanwhile, the Indo-Europeans were known to have practiced multiple succession systems, and there is much better evidence of matrilineal customs among the Indo-European Celts and Germanics than among any ancient Semitic peoples.
Women ruled Sparta while the men were often away fighting, or when both kings were incapacitated or too young to rule. Gorgo, Queen of Sparta, was asked by a woman in Attica "You Spartan women are the only women that lord it over your men", to which Gorgo replied: "Yes, for we are the only women that are mothers of men!"
Iron Age to Middle Ages
Arising in the period ranging from the Iron Age to the Middle Ages, several northwestern European mythologies from the Irish (e.g. Macha and Scáthach), the Brittonic (e.g. Rhiannon), and the Germanic (e.g. Grendel's mother and Nerthus) contain ambiguous episodes of primal female power which have been interpreted as folk evidence of matriarchal attitudes in pre-Christian European Iron Age societies. Often transcribed from a retrospective, patriarchal, Romanised, and Catholic perspective, they hint at a possible earlier era when female power predominated. The first-century British historical figure Boudicca indicates that Brittonic society permitted explicit female autocracy or a form of gender equality, which contrasted strongly with the patriarchal Mediterranean civilisation that later overthrew it.
20th–21st centuries
The Mosuo people are an ethnic group in southwest China. They are considered one of the most well-known matriarchal societies, although many scholars assert that they are rather matrilineal. The sole heirs in the family are still daughters. Since 1990, when foreign tourism became permitted, tourists have visited the Mosuo people. As the Xinhua News Agency pointed out, "tourism has become so profitable that many Mosuo families in the area who have opened their homes have become wealthy." Although this revived their economy and lifted many out of poverty, the presence of outsiders, who often look down on the Mosuo's cultural practices, also altered the fabric of their society.
In 1995, in Kenya, according to Emily Wax, Umoja, a village only for women from one tribe, with about 36 residents, was established under a matriarch. It was founded on an empty piece of land by women who fled their homes after being raped by British soldiers, and it formed a safe haven in rural Samburu County in northern Kenya. Men of the same tribe established a village nearby from which to observe the women's village; the men's leader objected to the matriarch's questioning of the culture, and men sued to close the women's village. As of 2019, 48 women, most of whom have fled gender-based violence such as female genital mutilation, assault, rape, and abusive marriages, call Umoja home, living with their children in this all-female village. Many of these women faced stigma in their communities following these attacks and had no choice but to flee. Others sought to escape from the nearby Samburu community, which practices child marriage and female genital mutilation. In the village, the women practice "collective economic cooperation." Sons are obligated to move out when they turn eighteen. Not only has the Umoja village protected its members; the members have also done extensive work for gender equity in Kenya. The message of the village has spread outside of Kenya, as member "Lolosoli's passion for gender equity in Kenya has carried her to speak on social justice at the United Nations and to participate in an international women's rights conference in South Africa."
The Khasi people live in Northeast India in the state of Meghalaya. Although the Khasi are largely considered matrilineal, some women's studies scholars, such as Roopleena Banerjee, consider them to be matriarchal. Banerjee asserts that "to assess and account a matriarchal society through the parameters of the patriarchy would be wrong" and that "we should avoid looking at history only through the colonizer/colonized boundaries." The Khasi people consist of many clans who trace their lineage through the matriarchs of the families. A Khasi husband typically moves into his wife's home, and both wife and husband participate equally in raising their children. A Khasi woman named Passah explains that "[The father] would come to his wife's home late at night... In the morning, he's back at his mother's home to work in the fields," showing how a man's role consists of supporting his wife and family in Khasi society. Traditionally, the youngest daughter, called the Khadduh, receives and cares for ancestral property. As of 2021, the Khasi continue to practice many female-led customs, with wealth and property being passed down through the female side of the family.
Spokespersons for various indigenous peoples at the United Nations and elsewhere have highlighted the central role of women in their societies, referring to them as matriarchies, in danger of being overthrown by the patriarchy, or as matriarchal in character.
Mythology
Amazons
A legendary matriarchy related by several writers was Amazon society. According to Phyllis Chesler, "in Amazon societies, women were ... mothers and their society's only political and religious leaders", as well as the only warriors and hunters; "queens were elected" and apparently "any woman could aspire to and achieve full human expression." Herodotus reported that the Sarmatians were descendants of Amazons and Scythians, and that their females observed their ancient maternal customs, "frequently hunting on horseback with their husbands; in war taking the field; and wearing the very same dress as the men". Moreover, said Herodotus, "no girl shall wed till she has killed a man in battle". Amazons came to play a role in Roman historiography. Julius Caesar spoke of the conquest of large parts of Asia by Semiramis and the Amazons. Although Strabo was sceptical about their historicity, the Amazons were taken as historical throughout late Antiquity. Several Church Fathers spoke of the Amazons as a real people. Medieval authors continued a tradition of locating the Amazons in the North, Adam of Bremen placing them at the Baltic Sea and Paulus Diaconus in the heart of Germania.
Greece
Robert Graves suggested that the Greek myth of Athena's birth displaced earlier myths that had to change when a major cultural change brought patriarchy to replace a matriarchy. According to this myth, Zeus is said to have swallowed his pregnant lover, the titan goddess Metis, who was carrying their daughter, Athena. The mother and child created havoc inside Zeus. Either Hermes or Hephaestus split Zeus's head, allowing Athena, in full battle armor, to burst forth from his forehead. Athena was thus described as being "born" from Zeus. The outcome pleased Zeus, as it did not fulfill the prophecy of Themis, which (according to Aeschylus) predicted that Zeus would one day bear a son who would overthrow him.
Celtic myth and society
According to Adler, "there is plenty of evidence of ancient societies where women held greater power than in many societies today. For example, Jean Markale's studies of Celtic societies show that the power of women was reflected not only in myth and legend but in legal codes pertaining to marriage, divorce, property ownership, and the right to rule...although this was overthrown by the patriarchy."
Basque myth and society
The hypothesis of Basque matriarchism, or theory of Basque matriarchism, is a theoretical proposal launched by Andrés Ortiz-Osés that maintains the existence of a psychosocial structure centered on the matriarchal-feminine archetype (the mother/woman, which finds its precipitate in the archetype of the great Basque mother Mari, as a projection of Mother Earth/nature), an archetype that "permeates, coagulates and unites the traditional Basque social group in a way that is different from the patriarchal Indo-European peoples".
This mythical matriarchal conception is clearly reflected in Basque mythology: the Earth is the mother of the Sun and the Moon, in contrast to Indo-European patriarchal conceptions, in which the sun is a god, numen, or male spirit. Prayers and greetings were dedicated to these two sisters at dawn and dusk, when they returned to the bosom of Mother Earth.
The philosopher Franz-Karl Mayr argued that the archetypal background of Basque mythology has to be inscribed in the context of a Paleolithic dominated by the Great Mother, in which the cycle of Mari (the goddess) and her metamorphoses offers a symbolism typical of the matriarchal-naturalistic context. In line with the archetype of the Great Mother, Mari is usually related to fertility cults: she is the determinant of fertility and fecundity and the maker of rain or hail, and on her telluric forces depend the crops, in space and time, life and death, luck (grace) and misfortune.
South America
Bamberger (1974) examines several matriarchal myths from South American cultures and concludes that portraying the women from this matriarchal period as immoral often serves to restrain contemporary women in these societies, providing a justification for the matriarchy's overthrow by the patriarchy.
In feminist thought
While matriarchy has mostly fallen out of use for the anthropological description of existing societies, it remains current as a concept in feminism.
In first-wave feminist discourse, either Elizabeth Cady Stanton or Margaret Fuller (it is unclear who was first) introduced the concept of matriarchy, and Matilda Joslyn Gage joined the discourse. Victoria Woodhull, in 1871, called for men to open the U.S. government to women, or else a new constitution and government would be formed within a year; and, on a basis of equality, she ran for president in 1872. Charlotte Perkins Gilman, in 1911 and 1914, argued for "a woman-centered, or better mother-centered, world" and described "government by women". She argued that a government led by either sex must be assisted by the other, both genders being "useful ... and should in our governments be alike used", because men and women have different qualities.
Cultural feminism includes "matriarchal worship", according to Prof. James Penner.
In feminist literature, matriarchy and patriarchy are not conceived as simple mirrors of each other. While matriarchy sometimes means "the political rule of women", that meaning is often rejected, on the ground that matriarchy is not a mirroring of patriarchy. Patriarchy is held to be about power over others while matriarchy is held to be about power from within, Starhawk having written on that distinction and Adler having argued that matriarchal power is not possessive and not controlling, but is harmonious with nature, arguing that women are uniquely capable of using power without exploitative purposes.
For radical feminists, the importance of matriarchy is that "veneration for the female principle ... somewhat lightens an oppressive system."
Feminist utopias are a form of advocacy. According to Tineke Willemsen, "a feminist utopia would ... be the description of a place where at least women would like to live." Willemsen continues, among "type[s] of feminist utopias[,] ... [one] stem[s] from feminists who emphasize the differences between women and men. They tend to formulate their ideal world in terms of a society where women's positions are better than men's. There are various forms of matriarchy, or even a utopia that resembles the Greek myth of the Amazons.... [V]ery few modern utopias have been developed in which women are absolute autocrats."
A minority of feminists, generally radical, have argued that women should govern societies of women and men. In all of these advocacies, the governing women are not limited to mothers:
In her book Scapegoat: The Jews, Israel, and Women's Liberation, Andrea Dworkin stated that she wanted women to have their own country, "Womenland," which, comparable to Israel, would serve as a "place of potential refuge". In the Palestine Solidarity Review, Veronica A. Ouma reviewed the book and argued that while Dworkin "pays lip service to the egalitarian nature of ... [stateless] societies [without hierarchies], she envisions a state whereby women either impose gender equality or a state where females rule supreme above males."
Starhawk, in her novel The Fifth Sacred Thing (1993), wrote of "a utopia where women are leading societies but are doing so with the consent of men."
Phyllis Chesler wrote in Women and Madness (2005 and 1972) that feminist women must "dominate public and social institutions". She also wrote that women fare better when controlling the means of production and that equality with men should not be supported, even if female domination is no more "just" than male domination. On the other hand, in 1985, she was "probably more of a feminist-anarchist ... more mistrustful of the organisation of power into large bureaucratic states [than she was in 1972]". Between Chesler's 1972 and 2005 editions, Dale Spender wrote that Chesler "takes [as] a ... stand [that] .... [e]quality is a spurious goal, and of no use to women: the only way women can protect themselves is if they dominate particular institutions and can use them to serve women's interests. Reproduction is a case in point." Spender wrote Chesler "remarks ... women will be superior".
Monique Wittig authored Les Guérillères as fiction (not as fact), with her description of an asserted "female State". The work was described by Rohrlich as a "fictional counterpart" to "so-called Amazon societies". Scholarly interpretations of the fictional work include that women win a war against men, "reconcil[e]" with "those men of good will who come to join them", exercise feminist autonomy through polyandry, decide how to govern, and rule the men. The women confronting men are, according to Tucker Farley, diverse and thus stronger and more united and, continued Farley, permit a "few ... men, who are willing to accept a feminist society of primitive communism, ... to live." Another interpretation is that the author created "an 'open structure' of freedom".
Mary Daly wrote of hag-ocracy, "the place we ["women traveling into feminist time/space"] govern", and of reversing phallocratic rule in the 1990s (i.e., when her book was published). She considered equal rights to be tokenism that works against sisterhood, even as she supported abortion being legal and other reforms. She considered her book pro-female and anti-male.
Rasa von Werder, along with the associated author William Bond, has long advocated a return to matriarchy, restoring the status it held before its overthrow by patriarchy.
Some such advocacies are informed by work on the matriarchies of the past:
According to one scholar, "an ancient matriarchy ... [was "in early second-wave feminism"] the lost object of women's freedom." Prof. Cynthia Eller found widespread acceptance of matriarchal myth during feminism's second wave. According to Kathryn Rountree, the belief in a prepatriarchal "Golden Age" of matriarchy may have been more specifically about a matrifocal society, although this was believed more in the 1970s than in the 1990s–2000s and was criticized within feminism and within archaeology, anthropology, and theological study as lacking a scholarly basis, and Prof. Harvey C. Mansfield wrote that "the evidence [is] ... of males ruling over all societies at almost all times". Eller said that, other than a few separatist radical lesbian feminists, spiritual feminists would generously include "a place for men ... in which they can be happy and productive, if not necessarily powerful and in control" and might have social power as well.
Jill Johnston envisioned a "return to the former glory and wise equanimity of the matriarchies" in the future and "imagined lesbians as constituting an imaginary radical state, and invoked 'the return to the harmony of statehood and biology'". Her work inspired efforts at implementation by the Lesbian Organization of Toronto (LOOT) in 1976–1980 and in Los Angeles.
Elizabeth Gould Davis believed that a "matriarchal counterrevolution [replacing "a[n old] patriarchal revolution"] ... is the only hope for the survival of the human race." She believed that "spiritual force", "mental and spiritual gifts", and "extrasensory perception" will be more important and therefore that "woman will ... predominate", and that it is "about ... ["woman" that] the next civilization will ... revolve", as in the kind of past that she believed existed. According to critic Prof. Ginette Castro, Elizabeth Gould Davis used the words matriarchy and gynocracy "interchangeably" and proposed a discourse "rooted in the purest female chauvinism" and seemed to support "a feminist counterattack stigmatizing the patriarchal present", "giv[ing] ... in to a revenge-seeking form of feminism", "build[ing] ... her case on the humiliation of men", and "asserti[ng] ... a specifically feminine nature ... [as] morally superior." Castro criticized Elizabeth Gould Davis' essentialism and assertion of superiority as "sexist" and "treason".
One organization, named The Feminists, was interested in matriarchy and was one of the largest of the radical feminist women's liberation groups of the 1960s. Two members wanted "the restoration of female rule", but the organization's founder, Ti-Grace Atkinson, would have objected had she remained in the organization, because, according to a historian, "[she] had always doubted that women would wield power differently from men."
Robin Morgan wrote of women fighting for and creating a "gynocratic world".
Adler reported, "if feminists have diverse views on the matriarchies of the past, they also are of several minds on the goals for the future. A woman in the coven of Ursa Maior told me, 'right now I am pushing for women's power in any way I can, but I don't know whether my ultimate aim is a society where all human beings are equal, regardless of the bodies they were born into, or whether I would rather see a society where women had institutional authority.'"
Some fiction caricatured the current gender hierarchy by describing an inverted matriarchal alternative without necessarily advocating for it. According to Karin Schönpflug, "Gerd Brantenberg's Egalia's Daughters is a caricature of powered gender relations which have been completely reversed, with the female sex on the top and the male sex a degraded, oppressed group"; "gender inequality is expressed through power inversion" and "all gender roles are reversed and women rule over a class of intimidated, effeminate men" compelled into that submissive gender role. "Egalia is not a typical example of gender inequality in the sense that a vision of a desirable matriarchy is created; Egalia is more a caricature of male hegemony by twisting gender hierarchy but not really offering a 'better world'."
On egalitarian matriarchy, Heide Göttner-Abendroth's International Academy for Modern Matriarchal Studies and Matriarchal Spirituality (HAGIA) organized conferences in Luxembourg in 2003 and Texas in 2005, with papers published. Göttner-Abendroth argued that "matriarchies are all egalitarian at least in terms of gender—they have no gender hierarchy .... [, that, f]or many matriarchal societies, the social order is completely egalitarian at both local and regional levels", that, "for our own path toward new egalitarian societies, we can gain ... insight from ... ["tested"] matriarchal patterns", and that "matriarchies are not abstract utopias, constructed according to philosophical concepts that could never be implemented."
According to Eller, "a deep distrust of men's ability to adhere to" future matriarchal requirements may invoke a need "to retain at least some degree of female hegemony to insure against a return to patriarchal control", "feminists ... [having] the understanding that female dominance is better for society—and better for men—than the present world order", as, in this view, equalitarianism also is. On the other hand, Eller continued, if men can be trusted to accept equality, probably most feminists seeking future matriarchy would accept an equalitarian model.
"Demographic[ally]", "feminist matriarchalists run the gamut" but primarily are "in white, well-educated, middle-class circles"; many of the adherents are "religiously inclined" while others are "quite secular".
Biology as a ground for holding either males or females superior over the other has been criticized as invalid, such as by Andrea Dworkin and by Robin Morgan. A claim that women have unique characteristics that prevent women's assimilation with men has apparently been rejected by Ti-Grace Atkinson. On the other hand, not all advocates based their arguments on biology or essentialism.
A criticism by Mansfield of choosing who governs according to gender or sex is that the best qualified people should be chosen, regardless of gender or sex. On the other hand, Mansfield considered merit insufficient for office, because, in his view, a legal right granted by a sovereign (e.g., a king) was more important than merit.
Diversity within a proposed community can, according to Becki L. Ross, make it especially challenging to complete forming the community. However, some advocacy includes diversity, in the views of Dworkin and Farley.
Prof. Christine Stansell, a feminist, wrote that, for feminists to achieve state power, women must democratically cooperate with men. "Women must take their place with a new generation of brothers in a struggle for the world's fortunes. Herland, whether of virtuous matrons or daring sisters, is not an option... [T]he well-being and liberty of women cannot be separated from democracy's survival." (Herland was a feminist utopian novel by Charlotte Perkins Gilman, published in 1915, featuring a community entirely of women, except for three men who seek it out: strong women in a matriarchal utopia expected to last for generations and marked by peace and personal satisfaction. Gilman was herself a feminist advocate of society being gender-integrated and of women's freedom.)
Other criticisms of matriarchy are that it could result in reverse sexism or discrimination against men; that it is opposed by most people, including most feminists; that many women do not want leadership positions; that governing takes women away from family responsibilities; that women are too likely to be unable to serve politically because of menstruation and pregnancy; that public affairs are too sordid for women and would cost women their respect and femininity (apparently including fertility); that superiority is not traditional; that women lack the political capacity and authority men have; that it is impractical because of a shortage of women with the ability to govern at that level of difficulty as well as the desire and ability to wage war; that women are less aggressive, or less often so, than men, while politics is aggressive; that women legislating would not serve men's interests or would serve only petty interests; that it is contradicted by current science on gender differences; that it is unnatural; and, in the views of a playwright and a novelist, that "women cannot govern on their own." On the other hand, another view is that "women have 'empire' over men" because of nature and "men ... are actually obeying" women.
Pursuing a future matriarchy would tend to risk sacrificing feminists' position in present social arrangements, and many feminists are not willing to take that chance, according to Eller. "Political feminists tend to regard discussions of what utopia would look like as a good way of setting themselves up for disappointment", according to Eller, and argue that immediate political issues must get the highest priority.
"Matriarchists", as typified by male-conceived comic book character Wonder Woman, were criticized by Kathie Sarachild, Carol Hanisch, and some others.
In religious thought
Exclusionary
Some theologies and theocracies limit or forbid women from being in civil government or public leadership, or forbid them from voting, effectively criticizing and forbidding matriarchy. None of the following views is necessarily held universally within the respective religion:
In Islam, some Muslim scholars hold the view that female political leadership should be restricted, according to Roald. The restriction has been attributed to a hadith of Muhammad, the founder and last prophet of Islam. The hadith says, according to Roald, "a people which has a woman as leader will never prosper." The hadith's transmission, context, and meaning have been questioned, wrote Roald. According to Roald, the prohibition has also been attributed as an extension of a ban on women leading prayers "in mixed gatherings". Possibly, Roald noted, the hadith applies only against being head of state and not other high office. One source, wrote Roald, would allow a woman to "occupy every position except that of khalīfa (the leader of all Muslims)." One exception to the head-of-state prohibition was accepted without a general acceptance of women in political leadership, Roald reported. Political activism at lower levels may be more acceptable to Islamist women than top leadership positions, said Roald. The Muslim Brotherhood has stated that women may not be president or head of state but may hold other public offices, although "as for judiciary office, .... [t]he majority of jurisprudents ... have forbidden it completely." In a study of 82 Islamists in Europe, according to Roald, 80% said women could not be state leaders but 75% said women could hold other high positions. In 1994, the Muslim Brotherhood said that "social circumstances and traditions" may justify gradualism in the exercise of women's right to hold office (below head of state); whether the Muslim Brothers still support that statement is unclear. As reported in 1953, Roald later recounted, "Islamic organizations held a conference in the office of the Muslim Brothers .... [and] claim[ed] ... that it had been proven that political rights for women were contrary to religion". Some nations have specific bans: in Iran at times, according to Elaheh Rostami Povey, women have been forbidden to fill some political office roles because of law or because of judgments made under the Islamic religion. According to Steven Pinker, in a 2001–2007 Gallup poll of 35 nations having 90% of the world's Muslims, "substantial majorities of both sexes in all the major Muslim countries say that women should be allowed to vote without influence from men ... and to serve in the highest levels of government."
In Rabbinical Judaism, among orthodox leaders, a position, beginning before Israel became a modern state, has been that for women to hold public office in Israel would threaten the state's existence, according to educator Tova Hartman, who reports the view has "wide consensus". When Israel ratified the international women's equality agreement known as CEDAW, according to Marsha Freeman, it reserved nonenforcement for any religious communities that forbid women from sitting on religious courts. According to Freeman, "the tribunals that adjudicate marital issues are by religious law and by custom entirely male." "'Men's superiority' is a fundamental tenet in Judaism", according to Irit Umanit. According to Freeman, Likud party-led "governments have been less than hospitable to women's high-level participation."
In Buddhism, according to Karma Lekshe Tsomo, some hold that "the Buddha allegedly hesitated to admit women to the Saṅgha" because their inclusion would hasten the demise of the monastic community and the very teachings of Buddhism itself. "In certain Buddhist countries—Burma, Cambodia, Laos, Sri Lanka, and Thailand—women are categorically denied admission to the Saṅgha, Buddhism's most fundamental institution", according to Tsomo. Tsomo wrote, "throughout history, the support of the Saṅgha has been actively sought as a means of legitimation by those wishing to gain and maintain positions of political power in Buddhist countries."
Among Hindus in India, the Rashtriya Swayamsevak Sangh, "India's most extensive all-male Hindu nationalist organization," has debated whether women can ever be Hindu nationalist political leaders but without coming to a conclusion, according to Paola Bacchetta. The Rashtriya Sevika Samiti, a counterpart organization composed of women, believes that women can be Hindu nationalist political leaders and has trained two in Parliament, but considers women only as exceptions, the norm for such leadership being men.
In Protestant Christianity, considered only historically, in 1558, John Knox (a subject of Mary Stuart) wrote The First Blast of the Trumpet against the Monstrous Regiment of Women. According to Scalingi, the work is "perhaps the best known analysis of gynecocracy" and Knox was "the most notorious" writer on the subject. According to an 1878 edition, Knox's objection to any women reigning and having "empire" over men was theological, and he held it against nature for women to bear rule, superiority, dominion, or empire above any realm, nation, or city. Susan M. Felch said that Knox's argument was partly grounded on a statement of the apostle Paul against women teaching or usurping authority over men. According to Maria Zina Gonçalves de Abreu, Knox argued that a woman being a national ruler was unnatural and that women were unfit and ineligible for the post. Kathryn M. Brammall said Knox "considered the rule of female monarchs to be anathema to good government" and that Knox "also attacked those who obeyed or supported female leaders", including men. Robert M. Healey said that Knox objected to women's rule even if men accepted it. On whether Knox personally endorsed what he wrote, according to Felch, Jasper Ridley, in 1968, argued that even Knox may not have personally believed his stated position but may have merely pandered to popular sentiment, itself a point disputed by W. Stanford Reid. On the popularity of Knox's views, Patricia-Ann Lee said Knox's "fierce attack on the legitimacy of female rule ... [was one in which] he said ... little that was unacceptable ... to most of his contemporaries", although Judith M. Richards disagreed on whether the acceptance was quite so widespread. According to David Laing's Preface to Knox's work, Knox's views were agreed with by some people at the time, the Preface saying, "[Knox's] views were in harmony with those of his colleagues ... [Goodman, Whittingham, and Gilby]". Writing in agreement with Knox was Christopher Goodman, who, according to Lee, "considered the woman ruler to be a monster in nature, and used ... scriptural argument to prove that females were barred ... from any political power", even if, according to Richards, the woman was "virtuous". Some views included conditionality: John Calvin said, according to Healey, "that government by a woman was a deviation from the original and proper order of nature, and therefore among the punishments humanity incurred for original sin"; nonetheless, Calvin would not always question a woman's right to inherit rule of a realm or principality. Heinrich Bullinger, according to Healey, "held that rule by a woman was contrary to God's law but cautioned against [always] using that reason to oppose such rule". According to Richards, Bullinger said women were normally not to rule. Around 1560, Calvin, in disagreeing with Knox, argued that the existence of the few women who were exceptions showed that theological ground existed for their exceptionalism. Knox's view was much debated in Europe at the time, the issue considered complicated by laws such as on inheritance and since several women were already in office, including as Queens, according to de Abreu. Knox's view is not said to be widely held in modern Protestantism among leadership or laity.
Inclusionary
According to Eller, feminist thealogy conceptualized humanity as beginning with "female-ruled or equalitarian societies", until displaced by patriarchies, and held that in the millennial future "gynocentric, life-loving values" will return to prominence. This, according to Eller, produces "a virtually infinite number of years of female equality or superiority coming both at the beginning and end of historical time".
Among the criticisms, according to Eller, is that a future matriarchy, as a reflection of spirituality, is conceived as ahistorical and thus may be unrealistic, unreachable, or even meaningless as a goal to secular feminists.
In popular culture
Ancient theatre
As criticism, Aristophanes in 390 BC wrote a play, Ecclesiazusae, about women gaining legislative power and governing Athens, Greece, on a limited principle of equality. In the play, according to Mansfield, Praxagora, a character, argues that women should rule because they are superior to men, not equal, and yet she declines to assert publicly her right to rule, although elected and although acting in office. The play, Mansfield wrote, also suggests that women would rule by not allowing politics, in order to prevent disappointment, and that affirmative action would be applied to heterosexual relationships. In the play, as Mansfield described it, written when Athens was a male-only democracy where women could not vote or rule, women were presented as unassertive and unrealistic, and thus not qualified to govern. The play, according to Sarah Ruden, was a fable on the theme that women should stay home.
Literature
Elizabeth Burgoyne Corbett's New Amazonia: A Foretaste of the Future is an early feminist utopian novel (published 1889), which is matriarchal in that all political leadership roles in New Amazonia are required to be held by women, according to Duangrudi Suksang.
Roquia Sakhawat Hussain's Sultana's Dream is an early feminist utopia (published 1905) based on advanced science and technology developed by women, set in a society, Ladyland, run by women, where "the power of males is taken away and given to females," and men are secluded and primarily attend to domestic duties, according to Seemin Hasan.
Marion Zimmer Bradley's book, The Ruins of Isis (1978), is, according to Batya Weinbaum, set within a "female supremacist world".
In Marion Zimmer Bradley's book, The Mists of Avalon (1983), Avalon is an island with a matriarchal culture, according to Ruben Valdes-Miyares.
In Orson Scott Card's Speaker for the Dead (1986) and its sequels, the alien pequenino species in every forest are matriarchal.
In Sheri S. Tepper's book, The Gate to Women's Country (1988), the only men who live in Women's Country are the "servitors," who are servants to the women, according to Peter Fitting.
Élisabeth Vonarburg's book, Chroniques du Pays des Mères (1992) (translated into English as In the Mothers' Land) is set in a matriarchal society where, due to a genetic mutation, women outnumber men by 70 to 1.
N. Lee Wood's book Master of None (2004) is set in a "closed matriarchal world where men have no legal rights", according to Publishers Weekly.
Wen Spencer's book A Brother's Price (2005) is set in a world where, according to Page Traynor, "women are in charge", "boys are rare and valued but not free", and "boys are kept at home to do the cooking and child caring until the time they marry".
Elizabeth Bear's Carnival (2006) introduces New Amazonia, a colony planet with a matriarchal and largely lesbian population who eschew the strict and ruthless population control and environmentalism instituted on Earth. The Amazonians are aggressive, warlike, and subjugate the few men they tolerate for reproduction and service, but they are also pragmatic and defensive of their freedom from the male-dominated Coalition that seeks to conquer them.
In Naomi Alderman's book, The Power (2016), women develop the ability to release electrical jolts from their fingers, thus leading them to become the dominant gender.
Jean M. Auel's Earth's Children (1980–2011).
In the SCP Foundation, a collaborative online horror fiction website, the Daevites are an ancient society in which women take the roles of both religious and political leaders, and men often take the place of slaves.
Film
In the 2011 Disney animated film Mars Needs Moms, Mars is ruled by a female Martian known only as The Supervisor, who long ago consigned all male Martians to the trash underground and kept all females in the functioning society. The film reveals that The Supervisor, for an unexplained reason, changed how Martian society was run, from children being raised by parents to Martian children being raised by "Nannybots". The Supervisor sacrifices one Earth mother every twenty-five years for that mother's knowledge of order, discipline, and control, which is transferred to the Nannybots who raise the female Martians.
The 2023 film Barbie depicts a world (Barbieland) ruled entirely by Barbies in positions such as doctors, scientists, lawyers, and politicians while the Kens spend their time at the beach.
Television
In the special The Powerpuff Girls Rule!!!, Blossom wanted society to be based on that of the African elephant, in which only women vote and "Stinky & Dumb" men are relegated to household tasks.
In the Futurama episode Amazon Women in the Mood, the crew land on a planet ruled by giant muscular women.
Fumi Yoshinaga's manga Ōoku: The Inner Chambers, published between 2004 and 2020, follows an alternate history of Japan in which most of the male population is killed by a disease, resulting in a matriarchal society. It is best known in the United States through its 2023 Netflix adaptation of the same name.
Other animals
Matriarchy may also refer to non-human animal species in which females hold higher status and hierarchical positions, such as among spotted hyenas, elephants, lemurs, naked mole rats, and bonobos. Such animal hierarchies have not been replaced by patriarchy. The social structure of European bison herds has also been described by specialists as a matriarchy – the cows of the group lead it as the entire herd follows them to grazing areas. Though heavier and larger than the females, the older and more powerful males of the European bison usually fulfill the role of satellites that hang around the edges of the herd. Apart from the mating season when they begin to compete with each other, European bison bulls serve a more active role in the herd only once a danger to the group's safety appears. In bonobos, even the highest ranking male will sometimes face aggression from females and is occasionally injured by them. Female bonobos secure feeding privileges and exude social confidence while the males generally cower on the sidelines. The only exceptions are males with influential mothers, so even the rank between the males is influenced strongly by females. Females also initiate group travels.
See also
Alain Daniélou
Çatalhöyük (denials of matriarchy)
Daughter preference
Female cosmetic coalitions
Feminist separatism
Gynocentrism
Gender role
Lumpa Church
Alice Lenshina
Marianismo
Menstrual synchrony
Masculism
Matriarch (disambiguation)
Patriarchy
Other World Kingdom
Bissagos Islands
Radical feminism
Trưng sisters
Women in the EZLN
Notes
References
Bibliography
Castro, Ginette, American Feminism: A Contemporary History (New York: New York University Press, 1990) – translated from Radioscopie du féminisme américain (Paris, France: Presses de la Fondation Nationale des Sciences Politiques, 1984)
Further reading
Czaplicka, Marie Antoinette, Aboriginal Siberia, a Study in Social Anthropology (Oxford: Clarendon Press, 1914)
Finley, M.I., The World of Odysseus (London: Pelican Books, 1962)
Gimbutas, Marija, The Language of the Goddess (London: Thames & Hudson, 1991)
Goldberg, Steven, Why Men Rule: A Theory of Male Dominance (rev. ed. 1993)
Hutton, Ronald, The Pagan Religions of the Ancient British Isles (Hoboken, NJ: Wiley-Blackwell, 1993)
Lapatin, Kenneth, Mysteries of the Snake Goddess: Art, Desire, and the Forging of History (2002)
Lerner, Gerda, The Creation of Feminist Consciousness: From the Middle Ages to Eighteen-Seventy (Oxford: Oxford University Press, 1993)
Lerner, Gerda, The Creation of Patriarchy (Oxford: Oxford University Press, 1986)
Sanday, Peggy Reeves, Women at the Center: Life in a Modern Matriarchy (Cornell University Press, 2002)
Schiavoni, Giulio, Bachofen in-attuale? (chapter), in Johann Jakob Bachofen, Il matriarcato. Ricerca sulla ginecocrazia del mondo antico nei suoi aspetti religiosi e giuridici (Turin, Italy: Giulio Einaudi editore, 2016)
Shorrocks, Bryan, The Biology of African Savannahs (Oxford University Press, 2007)
Stearns, Peter N., Gender in World History (N.Y.: Routledge, 2000)
External links
Knight, Chris. Early Human Kinship was Matrilineal (2008).
Evolution
Evolution is the change in the heritable characteristics of biological populations over successive generations. It occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.
The scientific theory of evolution by natural selection was conceived independently by two British naturalists, Charles Darwin and Alfred Russel Wallace, in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment.
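The interaction of these four conditions can be illustrated with a small simulation. The sketch below is not drawn from this article or any particular study; it is a minimal toy model in which the trait names, fitness values, and population size are invented for the example. Parents reproduce in proportion to their fitness and offspring inherit the parental trait, so the favourable variant tends to become more common over successive generations.

```python
import random

def next_generation(population, fitness):
    """One generation: parents are sampled in proportion to their
    fitness (differential reproduction), and each offspring inherits
    its parent's trait (heritability)."""
    weights = [fitness[trait] for trait in population]
    return random.choices(population, weights=weights, k=len(population))

if __name__ == "__main__":
    random.seed(0)
    # Two heritable variants of a trait; "dark" is assumed to confer
    # a 10% reproductive advantage in this hypothetical environment.
    fitness = {"dark": 1.1, "light": 1.0}
    population = ["dark"] * 50 + ["light"] * 50
    for _ in range(60):
        population = next_generation(population, fitness)
    print("frequency of 'dark':", population.count("dark") / len(population))
```

Because the population size is held constant, individuals carrying the favourable trait gradually replace the others, which is the pattern the paragraph above describes.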
In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow.
All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today.
Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science.
Heredity
Evolution in organisms occurs through changes in heritable characteristics—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.
The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in genotypic variation; a striking example are people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.
Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner.
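As a toy illustration of the locus and allele vocabulary (not taken from this article; the DNA sequence is invented), the following sketch models a gene's sequence as a string and a point mutation as a single-base substitution, which produces a new allele of the same locus:

```python
import random

BASES = "ACGT"

def point_mutation(sequence, rng):
    """Return a copy of the sequence with one randomly chosen base
    substituted, i.e. a new allele differing at a single position."""
    i = rng.randrange(len(sequence))
    new_base = rng.choice([b for b in BASES if b != sequence[i]])
    return sequence[:i] + new_base + sequence[i + 1:]

if __name__ == "__main__":
    rng = random.Random(7)
    allele = "ATGGCCTTA"         # invented sequence at a hypothetical locus
    mutant = point_mutation(allele, rng)
    print(allele, "->", mutant)  # two alleles of the same locus
```

Real mutation and inheritance are far richer than this (insertions, deletions, duplications, recombination), but the string model captures the basic idea that an allele is one variant sequence at a locus.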
Sources of variation
Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species.
An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.
Mutation
Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect.
About half of the mutations in the coding regions of protein-coding genes are deleterious and the other half are neutral; only a small percentage of the total mutations in this region confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious, but the vast majority are neutral and a few are beneficial.
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.
New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth.
The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line.
Coat colour in pigs offers a visible example of mutation's effects. Wild boar piglets are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes, but mutations in the melanocortin 1 receptor (MC1R) disrupt this pattern. The majority of domestic pig breeds carry MC1R mutations that disrupt wild-type colour, including different mutations that cause dominant black colouring.
Sex and recombination
In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution.
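To make the frequency-preserving nature of recombination concrete, here is a minimal sketch in Python; the two-locus haplotypes and population composition are illustrative assumptions, not data from any study cited here. Shuffling which locus-2 allele each individual carries creates new allele combinations while leaving the frequency of each allele unchanged.

```python
import random

random.seed(1)

# Illustrative two-locus haplotypes: 40 individuals carry ('A', 'B')
# and 60 carry ('a', 'b'), so freq(A) = freq(B) = 0.4.
population = [('A', 'B')] * 40 + [('a', 'b')] * 60

def allele_freq(pop, locus, allele):
    """Frequency of a given allele at locus 0 or 1."""
    return sum(1 for hap in pop if hap[locus] == allele) / len(pop)

def recombine(pop):
    """Randomly re-pair locus-1 and locus-2 alleles, mimicking free recombination."""
    locus1 = [hap[0] for hap in pop]
    locus2 = [hap[1] for hap in pop]
    random.shuffle(locus2)
    return list(zip(locus1, locus2))

shuffled = recombine(population)
print(allele_freq(population, 0, 'A'), allele_freq(shuffled, 0, 'A'))  # 0.4 0.4
print(allele_freq(population, 1, 'B'), allele_freq(shuffled, 1, 'B'))  # 0.4 0.4
print(('A', 'b') in shuffled)  # almost certainly True: new combinations now exist
```

The allele frequencies are untouched; only the associations between alleles change, which is exactly the raw material on which linkage and hitchhiking (discussed below) operate.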
The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial.
Gene flow
Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.
Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance: when one bacterium acquires resistance genes, it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.
Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.
Epigenetics
Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.
Evolutionary forces
From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias.
Natural selection
Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It embodies three principles:
Variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation).
Different traits confer different rates of survival and reproduction (differential fitness).
These traits can be passed from generation to generation (heritability of fitness).
More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking.
The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.
If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population. Such traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by a less beneficial or deleterious allele results in that allele likely becoming rarer; it is "selected against."
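A minimal numerical sketch of selection in a haploid population illustrates this; the 5% fitness advantage and starting frequency are arbitrary values chosen for the example.

```python
# Deterministic haploid selection: allele A has relative fitness 1 + s,
# the alternative allele has fitness 1, and p is the frequency of A.
def next_frequency(p, s):
    """One generation of selection: weight A by (1 + s) and renormalise."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p, s = 0.01, 0.05  # illustrative: a rare allele with a 5% advantage
for generation in range(400):
    p = next_frequency(p, s)

print(round(p, 3))  # ~1.0: even a modest advantage drives the allele toward fixation
```

Although the per-generation change is small, the favoured allele compounds its advantage and approaches fixation.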
Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. That said, the re-activation of dormant genes, provided they have not been eliminated from the genome and were only suppressed, perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hind legs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms.
Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to eventually have a similar height.
Natural selection most generally makes nature the measure against which individuals, and individual traits, are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.
Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation.
Genetic drift
Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles.
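A minimal Wright–Fisher-style simulation captures this sampling error; the population size, starting frequency and number of replicates are illustrative choices, not values from the text.

```python
import random

def drift_until_fixed(pop_size, p, seed):
    """Drift at one biallelic locus in a haploid population.

    Each generation draws pop_size alleles binomially from the current
    frequency p; returns the fixed frequency (0.0 or 1.0) and the
    number of generations taken.
    """
    rng = random.Random(seed)
    generations = 0
    while 0.0 < p < 1.0:
        count = sum(rng.random() < p for _ in range(pop_size))  # sampling error only
        p = count / pop_size
        generations += 1
    return p, generations

# Replicates start from the same frequency yet fix different alleles by chance.
for seed in range(5):
    outcome, gens = drift_until_fixed(pop_size=100, p=0.5, seed=seed)
    print(f"replicate {seed}: fixed at {outcome} after {gens} generations")
```

With no selection in the model at all, each replicate still ends up fixed at 0 or 1, and different replicates diverge purely through chance.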
According to the neutral theory of molecular evolution most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities.
The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. What matters is not the raw number of individuals in a population but a measure known as the effective population size. The effective population is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population.
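Two textbook population-genetic results, stated here as standard background rather than claims from this text, make the dependence on population size explicit for a diploid population:

```latex
% A single new neutral mutation starts at frequency 1/(2N) and
% fixes with probability equal to that initial frequency:
P_{\text{fix}} = \frac{1}{2N}
% Conditional on eventually fixing, the mean time to fixation is
% proportional to the effective population size:
\bar{t}_{\text{fix}} \approx 4N_e \text{ generations}
```

Small effective populations therefore both fix new neutral alleles faster and lose variation to drift more quickly.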
It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.
Mutation bias
Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution.
Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.
For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.
However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation.
Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates.
Several studies report that the mutations implicated in adaptation reflect common mutation biases, though others dispute this interpretation.
Genetic hitchhiking
Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.
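For two biallelic loci, linkage disequilibrium has a standard one-line definition; the notation below is the conventional one, introduced here for illustration:

```latex
% p_A and p_B are the frequencies of alleles A and B, and p_{AB}
% is the frequency of the haplotype carrying both:
D = p_{AB} - p_A \, p_B
% D = 0 under linkage equilibrium (independent assortment);
% |D| > 0 signals association, as when B hitchhikes alongside a
% beneficial allele A during a selective sweep.
```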
Sexual selection
A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.
Natural outcomes
Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction, whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.
Adaptation
Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival, such as the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky:
Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.
Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.
An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance both by modifying the target of the drug and by increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mouse feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.
During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes.
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.
Coevolution
Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.
Cooperation
Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.
Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.
Speciation
Speciation is the process where a species diverges into two or more descendant species.
There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite the diversity of various species concepts, these various concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy, for example because genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading the new genetic variants also to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.
Speciation has been observed multiple times under both controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.
The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.
The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.
One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.
Extinction
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid 21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Despite the estimated extinction of more than 99% of all species that ever lived on Earth, about 1 trillion species are estimated to be on Earth currently with only one-thousandth of 1% described.
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.
Applications
Concepts and models used in evolutionary biology, such as natural selection, have many applications.
Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.
Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level.
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programmes. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.
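A minimal sketch of such an evolutionary algorithm in Python; the all-ones bit-string target and the parameter values are arbitrary illustrative choices, not taken from any system named above.

```python
import random

random.seed(0)
TARGET = [1] * 20  # illustrative optimum: an all-ones bit string

def fitness(genome):
    """Number of bits matching the target; higher is fitter."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population, then repeated rounds of selection and mutation.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection: keep the fittest third
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(fitness(best), "/ 20")  # typically reaches the optimum within 100 generations
```

The loop mirrors the biological ingredients described above: heritable variation (bit strings), mutation, and differential reproduction of the fittest candidates.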
Evolutionary history of life
Origin of life
The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: "If life arose relatively quickly on Earth, then it could be common in the universe." In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.
More than 99% of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described.
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells.
Common descent
All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.
Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.
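The molecular clock reduces to simple arithmetic; the symbols and numbers below are illustrative assumptions rather than measured values from the text. If each lineage accumulates substitutions at rate μ per site per year, divergence between the two sequences grows at 2μ, so the time since the common ancestor is approximately:

```latex
% K: observed substitutions per site between the two species
% \mu: substitution rate per site per year along each lineage
T \approx \frac{K}{2\mu}
% e.g. K = 0.02 and \mu = 10^{-9} give
% T \approx 0.02 / (2 \times 10^{-9}) = 10^7 years.
```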
Evolution of life
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. The eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago, when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single-celled organism to one with many cells.
Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.
History of evolutionary thought
Classical antiquity
The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura.
Middle Ages
In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.
A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous".
Pre-Darwinian
The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.
Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.
Darwinian revolution
The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Russel Wallace in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.
Pangenesis and heredity
The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between the Mendelians, who allied with de Vries, and the biometricians, who defended Darwinian gradual evolution. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane, set the foundations of evolution onto a robust statistical philosophy. The apparent contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus resolved.
The 'modern synthesis'
In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in populations. It explained patterns observed across species in populations, as well as fossil transitions in palaeontology.
Further syntheses
Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations.
The publication of the structure of DNA by James Watson and Francis Crick with contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.
One extension, known as evolutionary developmental biology and informally called "evo-devo", emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.
Social and cultural responses
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists.
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China.
See also
Chronospecies
References
Bibliography
Further reading
Introductory reading
Advanced reading
External links
General information
"History of Evolution in the United States". Salon. Retrieved 2021-08-24.
Experiments
Online lectures
Biology theories
Posthumanism

Posthumanism or post-humanism (meaning "after humanism" or "beyond humanism") is an idea in continental philosophy and critical theory responding to the presence of anthropocentrism in 21st-century thought. Posthumanization comprises "those processes by which a society comes to include members other than 'natural' biological human beings who, in one way or another, contribute to the structures, dynamics, or meaning of the society."
It encompasses a wide variety of branches, including:
Antihumanism: a branch of theory that is critical of traditional humanism and traditional ideas about the human condition, vitality and agency.
Cultural posthumanism: A branch of cultural theory, critical of the foundational assumptions of humanism and its legacy, that examines and questions the historical notions of "human" and "human nature". It often challenges typical notions of human subjectivity and embodiment and strives to move beyond "archaic" concepts of "human nature" towards ones that constantly adapt to contemporary technoscientific knowledge.
Philosophical posthumanism: A philosophical direction that draws on cultural posthumanism, the philosophical strand examines the ethical implications of expanding the circle of moral concern and extending subjectivities beyond the human species.
Posthuman condition: The deconstruction of the human condition by critical theorists.
Existential posthumanism: A branch that embraces posthumanism as a praxis of existence. Its sources are drawn from non-dualistic global philosophies, such as Advaita Vedanta, Taoism and Zen Buddhism, the philosophies of Yoga, continental existentialism, native epistemologies and Sufism, among others. It examines and challenges hegemonic notions of being "human" by delving into the history of embodied practices of being human, thus expanding the reflection on human nature.
Posthuman transhumanism: A transhuman ideology and movement which, drawing from posthumanist philosophy, seeks to develop and make available technologies that enable immortality and greatly enhance human intellectual, physical, and psychological capacities in order to achieve a "posthuman future".
AI takeover: A variant of transhumanism in which humans will not be enhanced, but rather eventually replaced by artificial intelligences. Some philosophers and theorists, including Nick Land, promote the view that humans should embrace and accept their eventual demise as a consequence of a technological singularity. This is related to the view of "cosmism", which supports the building of strong artificial intelligence even if it may entail the end of humanity, as in their view it "would be a cosmic tragedy if humanity freezes evolution at the puny human level".
Voluntary human extinction: Seeks a "posthuman future" that in this case is a future without humans.
Philosophical posthumanism
Philosopher Theodore Schatzki suggests there are two varieties of posthumanism of the philosophical kind:
One, which he calls "objectivism", tries to counter the overemphasis of the subjective, or intersubjective, that pervades humanism, and emphasises the role of the nonhuman agents, whether they be animals and plants, or computers or other things, because "Humans and nonhumans, it [objectivism] proclaims, codetermine one another", and also claims "independence of (some) objects from human activity and conceptualization".
A second posthumanist agenda is "the prioritization of practices over individuals (or individual subjects)"; on this view, it is practices that constitute the individual.
There may be a third kind of posthumanism, propounded by the philosopher Herman Dooyeweerd. Though he did not label it "posthumanism", he made an immanent critique of humanism, and then constructed a philosophy that presupposed neither humanist, nor scholastic, nor Greek thought but started with a different religious ground motive. Dooyeweerd prioritized law and meaningfulness as that which enables humanity and all else to exist, behave, live, occur, etc. "Meaning is the being of all that has been created", Dooyeweerd wrote, "and the nature even of our selfhood". Both human and nonhuman alike function subject to a common law-side, which is diverse, composed of a number of distinct law-spheres or aspects. The temporal being of both human and non-human is multi-aspectual; for example, both plants and humans are bodies, functioning in the biotic aspect, and both computers and humans function in the formative and lingual aspect, but humans function in the aesthetic, juridical, ethical and faith aspects too. The Dooyeweerdian version is able to incorporate and integrate both the objectivist version and the practices version, because it allows nonhuman agents their own subject-functioning in various aspects and places emphasis on aspectual functioning.
Emergence of philosophical posthumanism
Ihab Hassan, theorist in the academic study of literature, once stated: "Humanism may be coming to an end as humanism transforms itself into something one must helplessly call posthumanism." This view predates most currents of posthumanism which have developed over the late 20th century in somewhat diverse, but complementary, domains of thought and practice. For example, Hassan is a known scholar whose theoretical writings expressly address postmodernity in society. Beyond postmodernist studies, posthumanism has been developed and deployed by various cultural theorists, often in reaction to problematic inherent assumptions within humanistic and enlightenment thought.
Theorists who both complement and contrast Hassan include Michel Foucault, Judith Butler, cyberneticists such as Gregory Bateson, Warren McCulloch, Norbert Wiener, Bruno Latour, Cary Wolfe, Elaine Graham, N. Katherine Hayles, Benjamin H. Bratton, Donna Haraway, Peter Sloterdijk, Stefan Lorenz Sorgner, Evan Thompson, Francisco Varela, Humberto Maturana, Timothy Morton, and Douglas Kellner. Among the theorists are philosophers, such as Robert Pepperell, who have written about a "posthuman condition", which is often substituted for the term posthumanism.
Posthumanism differs from classical humanism by relegating humanity back to one of many natural species, thereby rejecting any claims founded on anthropocentric dominance. According to this view, humans have no inherent right to destroy nature or set themselves above it in ethical considerations a priori. Human knowledge, previously seen as the defining aspect of the world, is likewise reduced to a less controlling position. Human rights exist on a spectrum with animal rights and posthuman rights. The limitations and fallibility of human intelligence are acknowledged, although this does not imply abandoning the rational tradition of humanism.
Proponents of a posthuman discourse suggest that innovative advancements and emerging technologies have transcended the traditional model of the human, as proposed by Descartes among others associated with the philosophy of the Enlightenment period. Posthumanist views have also been identified in the works of Shakespeare. In contrast to humanism, the discourse of posthumanism seeks to redefine the boundaries surrounding the modern philosophical understanding of the human. Posthumanism represents an evolution of thought beyond contemporary social boundaries and is predicated on the seeking of truth within a postmodern context. In so doing, it rejects previous attempts to establish "anthropological universals" that are imbued with anthropocentric assumptions. Recently, critics have sought to describe the emergence of posthumanism as a critical moment in modernity, arguing for the origins of key posthuman ideas in modern fiction, in Nietzsche, or in a modernist response to the crisis of historicity.
Although Nietzsche's philosophy has been characterized as posthumanist, Foucault placed posthumanism within a context that differentiated humanism from Enlightenment thought. According to Foucault, the two existed in a state of tension: humanism sought to establish norms, while Enlightenment thought attempted to transcend all that is material, including the boundaries constructed by humanistic thought. Drawing on the Enlightenment's challenges to the boundaries of humanism, posthumanism rejects the various assumptions of human dogmas (anthropological, political, scientific) and takes the next step by attempting to change the nature of thought about what it means to be human. This requires not only decentering the human in multiple discourses (evolutionary, ecological and technological) but also examining those discourses to uncover inherent humanistic, anthropocentric, normative notions of humanness and the concept of the human.
Contemporary posthuman discourse
Posthumanistic discourse aims to open up spaces to examine what it means to be human and critically question the concept of "the human" in light of current cultural and historical contexts. In her book How We Became Posthuman, N. Katherine Hayles writes about the struggle between different versions of the posthuman as it continually co-evolves alongside intelligent machines. Such coevolution, according to some strands of the posthuman discourse, allows one to extend subjective understandings of real experiences beyond the boundaries of embodied existence. According to Hayles's view of the posthuman, often referred to as "technological posthumanism", visual perception and digital representations thus paradoxically become ever more salient. Even as one seeks to extend knowledge by deconstructing perceived boundaries, it is these same boundaries that make knowledge acquisition possible. The use of technology in contemporary society is thought to complicate this relationship.
Hayles discusses the translation of human bodies into information (as suggested by Hans Moravec) in order to illuminate how the boundaries of our embodied reality have been compromised in the current age and how narrow definitions of humanness no longer apply. Because of this, according to Hayles, posthumanism is characterized by a loss of subjectivity based on bodily boundaries. This strand of posthumanism, including the changing notion of subjectivity and the disruption of ideas concerning what it means to be human, is often associated with Donna Haraway's concept of the cyborg. However, Haraway has distanced herself from posthumanistic discourse due to other theorists' use of the term to promote utopian views of technological innovation to extend the human biological capacity (even though these notions would more correctly fall into the realm of transhumanism).
While posthumanism is a broad and complex ideology, it has relevant implications today and for the future. It attempts to redefine social structures without inherently human or even biological origins, but rather in terms of social and psychological systems where consciousness and communication could potentially exist as unique disembodied entities. Questions subsequently emerge with respect to the current use and the future of technology in shaping human existence, as do new concerns with regards to language, symbolism, subjectivity, phenomenology, ethics, justice and creativity.
Technological versus non-technological
Posthumanism can be divided into non-technological and technological forms.
Non-technological posthumanism
While posthumanization has links with the scholarly methodologies of posthumanism, it is a distinct phenomenon. The rise of explicit posthumanism as a scholarly approach is relatively recent, occurring since the late 1970s; however, some of the processes of posthumanization that it studies are ancient. For example, the dynamics of non-technological posthumanization have existed historically in all societies in which animals were incorporated into families as household pets or in which ghosts, monsters, angels, or semidivine heroes were considered to play some role in the world.
Such non-technological posthumanization has been manifested not only in mythological and literary works but also in the construction of temples, cemeteries, zoos, or other physical structures that were considered to be inhabited or used by quasi- or para-human beings who were not natural, living, biological human beings but who nevertheless played some role within a given society, to the extent that, according to philosopher Francesca Ferrando: "the notion of spirituality dramatically broadens our understanding of the posthuman, allowing us to investigate not only technical technologies (robotics, cybernetics, biotechnology, nanotechnology, among others), but also, technologies of existence."
Technological posthumanism
Some forms of technological posthumanization involve efforts to directly alter the social, psychological, or physical structures and behaviors of the human being through the development and application of technologies relating to genetic engineering or neurocybernetic augmentation; such forms of posthumanization are studied, e.g., by cyborg theory. Other forms of technological posthumanization indirectly "posthumanize" human society through the deployment of social robots or attempts to develop artificial general intelligences, sentient networks, or other entities that can collaborate and interact with human beings as members of posthumanized societies.
The dynamics of technological posthumanization have long been an important element of science fiction; genres such as cyberpunk take them as a central focus. In recent decades, technological posthumanization has also become the subject of increasing attention by scholars and policymakers. The expanding and accelerating forces of technological posthumanization have generated diverse and conflicting responses: some researchers view the processes of posthumanization as opening the door to a more meaningful and advanced transhumanist future for humanity, while bioconservative critics warn that such processes may lead to a fragmentation of human society, loss of meaning, and subjugation to the forces of technology.
Common features
Processes of technological and non-technological posthumanization both tend to result in a partial "de-anthropocentrization" of human society, as its circle of membership is expanded to include other types of entities and the position of human beings is decentered. A common theme of posthumanist study is the way in which processes of posthumanization challenge or blur simple binaries, such as those of "human versus non-human", "natural versus artificial", "alive versus non-alive", and "biological versus mechanical".
Relationship with transhumanism
Sociologist James Hughes comments that there is considerable confusion between the two terms. In the introduction to their book on post- and transhumanism, Robert Ranisch and Stefan Sorgner address the source of this confusion, stating that posthumanism is often used as an umbrella term that includes both transhumanism and critical posthumanism.
Although both subjects relate to the future of humanity, they differ in their view of anthropocentrism. Pramod Nayar, author of Posthumanism, states that posthumanism has two main branches: ontological and critical. Ontological posthumanism is synonymous with transhumanism. The subject is regarded as "an intensification of humanism". Transhumanist thought suggests that humans are not posthuman yet, but that human enhancement, often through technological advancement and application, is the passage to becoming posthuman. Transhumanism retains humanism's focus on Homo sapiens as the center of the world but also considers technology to be an integral aid to human progression. Critical posthumanism, however, is opposed to these views. Critical posthumanism "rejects both human exceptionalism (the idea that humans are unique creatures) and human instrumentalism (that humans have a right to control the natural world)". These contrasting views on the importance of human beings are the main distinctions between the two subjects.
Transhumanism is also more ingrained in popular culture than critical posthumanism, especially in science fiction. The term is referred to by Pramod Nayar as "the pop posthumanism of cinema and pop culture".
Criticism
Some critics have argued that all forms of posthumanism, including transhumanism, have more in common than their respective proponents realize. Linking these different approaches, Paul James suggests that "the key political problem is that, in effect, the position allows the human as a category of being to flow down the plughole of history".
However, some posthumanists in the humanities and the arts are critical of transhumanism (the brunt of James's criticism), in part because they argue that it incorporates and extends many of the values of Enlightenment humanism and classical liberalism, namely scientism, a critique advanced by performance philosopher Shannon Bell.
While many modern thinkers accept the ideologies described by posthumanism, some are more skeptical of the term. Haraway, the author of A Cyborg Manifesto, has outspokenly rejected the term, though she acknowledges a philosophical alignment with posthumanism. Haraway opts instead for the term companion species, referring to nonhuman entities with which humans coexist.
Questions of race, some argue, are suspiciously elided within the "turn" to posthumanism. Noting that the terms "post" and "human" are already loaded with racial meaning, critical theorist Zakiyyah Iman Jackson argues that the impulse to move "beyond" the human within posthumanism too often ignores "praxes of humanity and critiques produced by black people", including Frantz Fanon, Aimé Césaire, Hortense Spillers and Fred Moten. Interrogating the conceptual grounds in which such a mode of "beyond" is rendered legible and viable, Jackson argues that it is important to observe that "blackness conditions and constitutes the very nonhuman disruption and/or displacement" which posthumanists invite. In other words, given that race in general and blackness in particular constitute the very terms through which human-nonhuman distinctions are made, for example in enduring legacies of scientific racism, a gesture toward a "beyond" actually "returns us to a Eurocentric transcendentalism long challenged". Posthumanist scholarship, due to its characteristic rhetorical techniques, is also frequently subject to the same critiques commonly made of postmodernist scholarship in the 1980s and 1990s.
See also
Bioconservatism
Cyborg anthropology
Posthuman
Superhuman
Technological change
Technological transitions
Transhumanism
References
Works cited
Critical theory
Ontology
Philosophical theories
Philosophical schools and traditions
Postmodernism
Descriptive research

Descriptive research is used to describe characteristics of a population or phenomenon being studied. It does not answer questions about how, when, or why the characteristics occurred; rather, it addresses the "what" question (what are the characteristics of the population or situation being studied?). The characteristics used to describe the situation or population are usually some kind of categorical scheme, also known as descriptive categories. For example, the periodic table categorizes the elements. Scientists used knowledge about the nature of electrons, protons and neutrons to devise this categorical scheme. We now take the periodic table for granted, yet it took descriptive research to devise it. Descriptive research generally precedes explanatory research. For example, over time the periodic table's description of the elements allowed scientists to explain chemical reactions and make sound predictions when elements were combined.
Descriptive research cannot, however, describe what caused a situation. Thus, it cannot be used to establish a causal relationship, where one variable affects another. In other words, descriptive research can be said to have a low requirement for internal validity.
Description is used for frequencies, averages, and other statistical calculations. Often the best approach, prior to writing descriptive research, is to conduct a survey investigation. Qualitative research often has the aim of description, and researchers may follow up with examinations of why the observations exist and what the implications of the findings are.
Social science research
The conceptualizing of descriptive research (categorization and taxonomy) precedes the hypotheses of explanatory research. (For a discussion of how the underlying conceptualization of exploratory research, descriptive research and explanatory research fit together, see: Conceptual framework.)
Descriptive research can be statistical research. The main objective of this type of research is to describe the data and characteristics of what is being studied, by examining frequencies, averages, and other statistical calculations. Although such research can be highly accurate, it does not establish the causes behind a situation. Descriptive research is mainly done when a researcher wants to gain a better understanding of a topic: it analyses what exists rather than predicting what will be, exploring existing phenomena whose underlying facts are not yet known to the researcher.
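As a concrete, hypothetical illustration of the kind of output descriptive statistical research produces (the survey data below is invented for the example), a few lines of Python suffice to compute frequencies and averages without making any causal claims:

```python
# Illustrative sketch: frequencies and averages of invented survey data.
from statistics import mean, median, mode
from collections import Counter

responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
ages = [19, 23, 23, 31, 45, 52]

print(Counter(responses))                    # frequency of each category
print(mean(ages), median(ages), mode(ages))  # averages; no causal inference
```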
Descriptive science
Descriptive science is a category of science that involves descriptive research; that is, observing, recording, describing, and classifying phenomena. Descriptive research is sometimes contrasted with hypothesis-driven research, which is focused on testing a particular hypothesis by means of experimentation.
David A. Grimaldi and Michael S. Engel suggest that descriptive science in biology is currently undervalued and misunderstood:
"Descriptive" in science is a pejorative, almost always preceded by "merely," and typically applied to the array of classical -ologies and -omies: anatomy, archaeology, astronomy, embryology, morphology, paleontology, taxonomy, botany, cartography, stratigraphy, and the various disciplines of zoology, to name a few. [...] First, an organism, object, or substance is not described in a vacuum, but rather in comparison with other organisms, objects, and substances. [...] Second, descriptive science is not necessarily low-tech science, and high tech is not necessarily better. [...] Finally, a theory is only as good as what it explains and the evidence (i.e., descriptions) that supports it.
A negative attitude by scientists toward descriptive science is not limited to biological disciplines: Lord Rutherford's notorious quote, "All science is either physics or stamp collecting," displays a clear negative attitude about descriptive science, and it is known that he was dismissive of astronomy, which at the beginning of the 20th century was still gathering largely descriptive data about stars, nebulae, and galaxies, and was only beginning to develop a satisfactory integration of these observations within the framework of physical law, a cornerstone of the philosophy of physics.
Descriptive versus design sciences
Ilkka Niiniluoto has used the terms "descriptive sciences" and "design sciences" as an updated version of the distinction between basic and applied science. According to Niiniluoto, descriptive sciences are those that seek to describe reality, while design sciences seek useful knowledge for human activities.
See also
Methodology
Normative science
Procedural knowledge
Scientific method
References
External links
Descriptive Research from BYU linguistics department
Research
Descriptive statistics
Philosophy of science
Asabiyyah

'Asabiyyah (also rendered 'asabiyya; 'group feeling' or 'social cohesion') is a concept of social solidarity with an emphasis on unity, group consciousness, and a sense of shared purpose and social cohesion, originally used in the context of tribalism and clanism.
Asabiyya is neither necessarily nomadic nor based on blood relations; rather, it resembles a philosophy of classical republicanism. In the modern period, the term is generally analogous to solidarity. However, it often carries a negative connotation because it can suggest nationalism or partisanship, i.e., loyalty to one's group regardless of circumstances.
The concept was familiar in the pre-Islamic era, but became popularized in Ibn Khaldun's Muqaddimah, in which it is described as the fundamental bond of human society and the basic motive force of history, pure only in its nomadic form. Ibn Khaldun argued that asabiyya is cyclical and directly relevant to the rise and fall of civilizations: it is strongest at the start of a civilization, declines as the civilization advances, and then another, more compelling asabiyyah eventually takes its place to help establish a different civilization.
Overview
Ibn Khaldun describes asabiyya as the bond of cohesion among humans in a group-forming community. The bond exists at any level of civilization, from nomadic society to states and empires. Asabiyyah is strongest in the nomadic phase, and decreases as civilization advances. As this declines, another more compelling asabiyyah may take its place; thus, civilizations rise and fall, and history describes these cycles as they play out.
Ibn Khaldun argued that each dynasty (or civilization) has within itself the seeds of its own downfall. He explains that ruling houses tend to emerge on the peripheries of existing empires and use the much stronger asabiyya present in their areas to their advantage, in order to bring about a change in leadership. This implies that the new rulers are at first considered 'barbarians' in comparison to the previous ones. As they establish themselves at the center of their empire, they become increasingly lax, less coordinated, disciplined and watchful, and more concerned with maintaining their new power and lifestyle. Their asabiyya dissolves into factionalism and individualism, diminishing their capacity as a political unit. Conditions are thus created wherein a new dynasty can emerge at the periphery of their control, grow strong, and effect a change in leadership, continuing the cycle. Ibn Khaldun further states in the Muqaddimah that "dynasties have a natural life span like individuals", and that no dynasty generally lasts beyond three generations of about 40 years each.
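Ibn Khaldun offered no mathematics, but the cyclical dynamic he describes can be caricatured numerically. The toy model below is entirely a modern, hypothetical sketch, loosely in the spirit of the quantitative "historical dynamics" literature listed under further reading; every parameter and rule is invented for illustration, not drawn from the Muqaddimah.

```python
# Purely hypothetical toy model (not Ibn Khaldun's own formalism): a ruling
# house's asabiyya decays at the centre while a challenger's grows at the
# periphery; when the challenger overtakes the ruler, the cycle restarts.
def asabiyya_cycle(generations=9, span=40, growth=0.9, decay=0.45):
    """Print cohesion of a ruling house decaying, then being overtaken."""
    ruler, challenger = 1.0, 0.2
    for g in range(generations):
        year = g * span
        print(f"year {year:3d}: ruler {ruler:.2f}  challenger {challenger:.2f}")
        ruler *= (1 - decay)  # luxury and factionalism erode cohesion
        challenger = min(1.0, challenger + growth * challenger * (1 - challenger))
        if challenger > ruler:  # change of leadership; a new periphery forms
            ruler, challenger = challenger, 0.2

asabiyya_cycle()
```

With these invented parameters a ruling house is displaced roughly every two to three 40-year generations, echoing (but in no way validating) the "three generations" claim above.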
See also
Tribalism
Ethnocentrism
Secular Cycles
Historic recurrence
Guilt–shame–fear spectrum of cultures
Superpower collapse
References
Citations
Bibliography
Durkheim, Émile. [1893] 1997. The Division of Labor in Society. New York: The Free Press.
Gabrieli, F. 1930. Il concetto della 'asabiyyah nel pensiero storico di Ibn Khaldun, Atti della R. Accad. delle scienze di Torino, lxv
Ibn Khaldun. The Muqaddimah, translated by F. Rosenthal.
Further reading
Ahmed, Akbar S. 2003. Islam under siege: living dangerously in a post-honor world''. Cambridge: Polity.
Korotayev, Andrey. 2006. Secular Cycles and Millennial Trends in Africa. Moscow: URSS.
Turchin, Peter. 2003. Historical Dynamics: Why States Rise and Fall. Princeton, NJ: Princeton University Press.
External links
Asabiyya: Re-Interpreting Value Change in Globalized Societies
Sociological terminology
Islamic terminology
Arabic words and phrases
Cultural identity

Cultural identity is a part of a person's identity, or their self-conception and self-perception, and is related to nationality, ethnicity, religion, social class, generation, locality, gender, or any kind of social group that has its own distinct culture. In this way, cultural identity is characteristic both of the individual and of the culturally identical group of members sharing the same cultural identity or upbringing. Cultural identity is an unfixed process that is continually evolving within the discourses of social, cultural, and historical experiences. Some people undergo more cultural identity changes than others; those who change less often have a clearer cultural identity, meaning a dynamic yet stable integration of their culture.
There are three pieces that make up a person's cultural identity: cultural knowledge, category label, and social connections. Cultural knowledge refers to a person's connection to their identity through understanding their culture's core characteristics. Category label refers to a person's connection to their identity through indirect membership of said culture. Social connections refers to a person's connection to their identity through their social relationships. Cultural identity is developed through a series of steps. First, a person comes to understand a culture through being immersed in its values, beliefs, and practices. Second, the person identifies as a member of that culture depending on their rank within that community. Third, they develop relationships such as immediate family, close friends, coworkers, and neighbors.
Culture is a term that is highly complex and often contested, with academics recording about 160 variations in meaning. Underpinning the notion of culture is that it is dynamic and changes over time and in different contexts, resulting in many people today identifying with one or more cultures in many different ways.
It is a defining feature of a person's identity, contributing to how they see themselves and the groups with which they identify. A person's understanding of their own and other's identities develops from birth and is shaped by the values and attitudes prevalent at home and in the surrounding community.
Description
Various modern cultural studies and social theories have investigated cultural identity and understanding. In recent decades, a new form of identification has emerged that breaks down the understanding of the individual as a coherent whole subject into a collection of various cultural identifiers. These cultural identifiers may be the result of various conditions, including location, sex, race, history, nationality, language, sexuality, religious beliefs, ethnicity, aesthetics, and food. As one author writes:
When talking about identity, we generally define this word as the series of physical features that differentiate a person. Thus at birth, our parents declare us and give us a name with which they will identify us based on whether we are a boy or a girl. Identity is not only a right that declares the name, sex, time, and place that one is born; the word identity goes beyond what we define it. Identity is a function of elements that portrays one in a dynamic way, in constant evolution, throughout the stages of life identity develops based on personal experiences, tastes, and choices of a sexual and religious nature, as well as the social environment, these being some of the main parameters that influence and transform the day to day and allow us to discover a new part of ourselves.
The divisions between cultures can be very fine in some parts of the world, especially in rapidly changing cities where the population is ethnically diverse and social unity is based primarily on locational contiguity.
As a "historical reservoir," culture is an important factor in shaping identity. Since one of the main characteristics of a culture is its "historical reservoir," many if not all groups entertain revisions, either consciously or unconsciously, in their historical record in order to either bolster the strength of their cultural identity or to forge one which gives them precedent for actual reform or change.
Some critics of cultural identity argue that the preservation of cultural identity, being based upon difference, is a divisive force in society and that cosmopolitanism gives individuals a greater sense of shared citizenship. When considering practical association in international society, states may share an inherent part of their 'make up' that gives common ground and an alternative means of identifying with each other. Nations provide the framework for cultural identities called external cultural reality, which influences the unique internal cultural realities of the individuals within the nation.
There is a relationship between cultural identity and new media.
Rather than necessarily representing an individual's interaction within a certain group, cultural identity may be defined by the social network of people imitating and following the social norms as presented by the media. Accordingly, instead of learning behavior and knowledge from cultural/religious groups, individuals may be learning these social norms from the media to build on their cultural identity.
A range of cultural complexities structures the way individuals operate with the cultural realities in their lives. Nation is a large factor of the cultural complexity, as it constructs the foundation for an individual's identity, but it may contrast with one's cultural reality. Cultural identities are influenced by several different factors, such as one's religion, ancestry, skin color, language, class, education, profession, skill, family and political attitudes. These factors contribute to the development of one's identity.
History
The history of cultural identity develops out of the observations of a number of social scientists. A history of cultural identity is important because it outlines the understanding of how our identities provide a way to see ourselves in relation to the world in which we live. "Cultural identities...are the natural, and most fundamental, constitutive elements of individual and collective identity."
Franz Boas is an important figure in the creation of the idea of cultural identity. Boas is known for challenging ideas about culture. Boas promoted the importance of viewing a culture from within its own perspective and understanding, not from the outsider's view point. This was a somewhat radical perspective at the time. Additionally, Myron Lustig is credited with contributing the concept of cultural identity theory.
A number of contemporary theorists continue to contribute to the concept of cultural identity. For instance, contemporary work completed by Stuart Hall is considered essential to understand cultural identity. According to Hall, identity is defined by at least two specific actions, which are similarity and difference. Specifically, in settings of slavery and colonization, identity provides a connection to the past as well as disintegration from a shared origination.
Theorists' questions about identity include "whether identity is to be understood as something internal that persists through change or as something ascribed from without that changes according to circumstance". Whatever the case may be, Gleason advocates for "sensitivity to the intrinsic complexities of the subject matter with which it deals, and careful attention to the need for precision and consistency in its application". Cultural identity can also become a marker of difference that requires sensitivity.
Kuper presents concepts on cultural identity within the framework of a power dynamic. He writes, "The privileged lie and mislead, but the oppressed come gradually to appreciate their objective circumstances and formulate a new consciousness that will ultimately liberate them." The consciousness is a facet of their identity. Similarly, identity plays a role in mediating between a human being and the environment in which they exist.
The identity of a person is "a result of socialization and customs" that promotes the maintenance of distinct cultural identities from generation to generation. Additionally, identity can be considered that which forms cultures and results in "dictated appropriate behavior". Put another way, identity may dictate behavior that results in the reification of identity, with the individual as a "replicate in miniature of the larger social and cultural entity". Another way to consider cultural identity is that it is "the sum of material wealth and spiritual wealth created by human beings in the practice of social history".
Globalization is connected to influences in economics, politics, and society. Accordingly, globalization has an impact on cultural identity. As societies become even more connected, there are concerns that cultural identities will become homogenized through the increased level of connection and communication. However, there are alternative perspectives on this issue. For instance, Wright theorizes that "The spread of global culture and globalised ideas has led to many movements designed to embrace the uniqueness and diversity of an individual’s particular culture."
Cultural arena
It is also noted that an individual's "cultural arena," or place where one lives, impacts the culture that person abides by. The surroundings, environment, and people in these places play a role in how one feels about the culture they wish to adopt. Many immigrants find the need to change their culture in order to fit into the culture of most citizens in the country. This can conflict with an immigrant's current belief in their culture and might pose a problem, as the immigrant feels compelled to choose between the two presenting cultures.
Some might be able to adjust to the various cultures in the world by committing to two or more cultures. It is not required to stick to one culture. Many people socialize and interact with people in one culture in addition to another group of people in another culture. Thus, cultural identity is able to take many forms and can change depending on the cultural area. The impact of the cultural arena has changed with the advent of the Internet, bringing together groups of people with shared cultural interests who before would have been more likely to integrate into their real-world cultural arena. This adaptability is what allows people to feel a part of society and culture wherever they go.
Language
Language allows people in a group to communicate their values, beliefs, and customs, all of which contribute to creating a cultural identity. It was long believed that if children lose their language, they lose part or all of their cultural identity. When students who are non-native English speakers go to classes where they are required to speak only English, they may feel that their native language has no value. Some studies found that this leads to loss of their culture and language altogether, which can produce either a massive change in cultural identity or a struggle to understand who they are. Language also includes the way people speak with peers, family members, authority figures, and strangers, including the tone and familiarity involved. The learning process can also be affected by cultural identity via the understanding of specific words and the preference for specific words when learning and using a second language. Since many aspects of a person's cultural identity can be changed, such as citizenship or influence from outside cultures, language is a major component of cultural identity. However, more recent research suggests that language may not be a crucial part of a person's identity or cultural identity.
Education
Cultural identity is often not discussed in the classroom or learning environment where an instructor presides over the class. This often happens when the instructor attempts to discuss cultural identity and the issues that come with it, is met with disagreement, and cannot make forward progress in the conversation. Moreover, not talking about cultural identity can lead to issues such as inhibited educational growth, an underdeveloped sense of self, and weaker social competency. In these environments there are often many different cultures, and problems can occur due to differing worldviews that prevent students from thinking outwardly about their peers' values and backgrounds. If students are able to think outwardly, they can not only better connect with their peers but also further develop their own worldview. In addition, instructors should take into account the needs of students from different backgrounds in order to relay the material in a way that best engages them.
When students learn that knowledge and truth are relative to each person, that instructors do not know everything, and that their own personal experiences dictate what they believe, they can better contextualize new information using their own experiences while taking into account the different cultural experiences of others. This in turn increases the ability to think critically about and challenge new information, which benefits all students learning in a classroom setting. There are two ways instructors can better elicit this response from their students through active communication of cultural identity. The first is by having students engage in class discussion with their peers. Doing so creates community and allows students to share their knowledge as well as question their peers and instructors, thereby learning about each other's cultural identity and creating acceptance of differing worldviews in the classroom. The second way is by using active learning methods such as "forming small groups and analyzing case studies". Through engaging in active learning, students learn that their cultural identity is welcomed and accepted.
Cultural identity and immigrant experience
Identity development among immigrant groups has been studied across a multi-dimensional view of acculturation. Acculturation is the phenomenon that results when groups or individuals from different cultures come into continuous contact with one another and adopt certain values and practices that were not originally their own. Acculturation is unique from assimilation. Dina Birman and Edison Trickett (2001) conducted a qualitative study through informal interviews with first-generation Soviet Jewish refugee adolescents looking at the process of acculturation through three different dimensions: language competence, behavioral acculturation, and cultural identity. The results indicated that "acculturation appears to occur in a linear pattern over time for most dimensions of acculturation, with acculturation to the American culture increasing and acculturation to the Russian culture decreasing. However, Russian language competence for the parents did not diminish with length of residence in the country" (Birman & Trickett, 2001).
In a similar study, Phinney, Horenczyk, Liebkind, and Vedder (2001) focused on a model which concentrates on the interaction between immigrant characteristics and the responses of the majority society to understand the psychological effects of immigration. The researchers concluded that most studies find that being bicultural, the combination of a strong ethnic and a strong national identity, yields the best adaptation in the new country of residence. An article by LaFromboise, Coleman, and Gerton reviews the literature on the impact of being bicultural. It showed that it is possible to obtain competence within two cultures without losing one's sense of identity or having to identify with one culture over the other (LaFromboise et al., 1993). The importance of ethnic and national identity in the educational adaptation of immigrants indicates that a bicultural orientation is advantageous for school performance (Portes & Rumbaut, 1990). Educators can use their positions of power in beneficially impactful ways for immigrant students, by providing them with access to their native cultural support groups, language classes, after-school activities, and clubs in order to help them feel more connected to both native and national cultures. It is clear that the new country of residence can impact immigrants' identity development across multiple dimensions. Biculturalism can allow for a healthy adaptation to life and school. Faced with many new immigrant youth, a school district in Alberta, Canada, has gone as far as to partner with various agencies and professionals in an effort to aid the cultural adjustment of new Filipino immigrant youths. In the study cited, a combination of family workshops and teacher professional development aimed to improve the language learning and emotional development of these youths and families.
School Transitions
How great is the achievement loss associated with the transition to middle school and high school? John W. Alspaugh's research in the September/October 1998 Journal of Educational Research (vol. 92, no. 1, pp. 20–26) compared three groups of 16 school districts and found that the loss was greater where the transition was from sixth grade than from a K-8 system. It was also greater when students from multiple elementary schools merged into a single middle school. Students from both K-8 and middle schools lost achievement in the transition to high school, though the loss was greater for middle school students, and high school dropout rates were higher for districts with grades 6-8 middle schools than for those with K-8 elementary schools.
The Jean S. Phinney Three-Stage Model of Ethnic Identity Development is a widely accepted view of the formation of cultural identity. In this model cultural Identity is often developed through a three-stage process: unexamined cultural identity, cultural identity search, and cultural identity achievement.
Unexamined cultural identity: "a stage where one's cultural characteristics are taken for granted, and consequently there is little interest in exploring cultural issues." This, for example, is the stage one is in throughout childhood, when one doesn't distinguish between the cultural characteristics of one's household and those of others. Usually, a person in this stage accepts the ideas about culture they find from their parents, the media, community, and others.
An example of thought in this stage: "I don't have a culture I'm just an American." "My parents tell me about where they lived, but what do I care? I've never lived there."
Cultural identity search: "is the process of exploration and questioning about one's culture in order to learn more about it and to understand the implications of membership in that culture." During this stage a person will begin to question why they hold their beliefs and compare it to the beliefs of other cultures. For some this stage may arise from a turning point in their life or from a growing awareness of other cultures. This stage is characterized by growing awareness in social and political forums and a desire to learn more about culture. This can be expressed by asking family members questions about heritage, visiting museums, reading of relevant cultural sources, enrolling in school courses, or attendance at cultural events. This stage might have an emotional component as well.
An example of thought in this stage: "I want to know what we do and how our culture is different from others." "There are a lot of non-Japanese people around me, and it gets pretty confusing to try and decide who I am."
Cultural identity achievement: "is characterized by a clear, confident acceptance of oneself and an internalization of one's cultural identity." In this stage, people often allow the acceptance of their cultural identity to play a role in their future choices, such as how to raise children, how to deal with stereotypes and any discrimination, and how to approach negative perceptions. This usually leads to an increase in self-confidence and positive psychological adjustment.
The role of the internet
There is a set of phenomena that occur in conjunction between virtual culture – understood as the modes and norms of behavior associated with the internet and the online world – and youth culture. While we can speak of a duality between the virtual (online) and real sphere (face-to-face relations), for youth, this frontier is implicit and permeable. On occasions – to the annoyance of parents and teachers – these spheres are even superposed, meaning that young people may be in the real world without ceasing to be connected.
In the present techno-cultural context, the relationship between the real world and the virtual world cannot be understood as a link between two independent and separate worlds, possibly coinciding at a point, but as a Moebius strip where there exists no inside and outside and where it is impossible to identify limits between the two. For new generations, to an ever-greater extent, digital life merges with their home life as yet another element of nature. In this naturalizing of digital life, the learning processes from that environment are frequently mentioned, not just when respondents are explicitly asked but because the subject of the internet comes up spontaneously among those polled. The ideas of active learning, of googling 'when you don't know', of recourse to tutorials for learning a program or a game, or the expression 'I learnt English better and in a more entertaining way by playing' are examples often cited as to why the internet is the place most frequented by the young people polled.
The internet is becoming an extension of the expressive dimension of the youth condition. There, youth talk about their lives and concerns, design the content that they make available to others, and assess others' reactions to it in the form of optimized and electronically mediated social approval. Many of today's youth go through processes of affirmation in this way, and this is often how they grow dependent on peer approval. When connected, youth speak of their daily routines and lives. With each post, image or video they upload, they have the possibility of asking themselves who they are and of trying out profiles differing from those they assume in the 'real' world. Compared with past generations, these connections are mediated far less through personal, face-to-face means, and the influx of new technology and access has created new fields of research on its effects on teens and young adults. Youth thus negotiate their identity and create senses of belonging, putting the acceptance and censure of others to the test, an essential mark of the process of identity construction.
Youth ask themselves about what they think of themselves, how they see themselves personally and, especially, how others see them. On the basis of these questions, youth make decisions which, through a long process of trial and error, shape their identity. This experimentation is also a form through which they can think about their insertion, membership and sociability in the 'real' world.
From other perspectives, the question arises on what impact the internet has had on youth through accessing this sort of 'identity laboratory' and what role it plays in the shaping of youth identity. On the one hand, the internet enables young people to explore and perform various roles and personifications while on the other, the virtual forums – some of them highly attractive, vivid and absorbing (e.g. video games or virtual games of personification) – could present a risk to the construction of a stable and viable personal identity.
See also
References
Sources
Gad Barzilai, Communities and Law: Politics and Cultures of Legal Identities University of Michigan Press, 2003.
Tan, S.-h. (2005). Challenging citizenship: group membership and cultural identity in a global age. Aldershot, Hants, England: Ashgate.
Bunschoten, R., Binet, H., & Hoshino, T. (2001). Urban flotsam: stirring the city : Chora. Rotterdam: 010 Publishers.
Mandelbaum, M. (2000). The new European diasporas: national minorities and conflict in Eastern Europe. New York: Council on Foreign Relations Press
Houtman, G. (1999). Mental culture in Burmese crisis politics: Aung San Suu Kyi and the National League for Democracy. Tokyo: Institute for the Study of Languages and Cultures of Asia and Africa, Tokyo University of Foreign Studies. (library.cornell.edu).
Sagasti, F. R., & Alcalde, G. (1999). Development cooperation in a fractured global order: an arduous transition. Ottawa: International Development Research Centre.
Crahan, M. E., & Vourvoulias-Bush, A. (1997). The city and the world: New York's global future. New York: Council on Foreign relations.
Hall, S., & Du Gay, P. (1996). Questions of cultural identity. London: Sage.
Cable, V. (1994). The world's new fissures: identities in crisis. London: Demos.
Berkson, I. B. (1920).Theories of Americanization a critical study, with special reference to the Jewish group. New York City: Teachers College, Columbia University.
Mora, Necha. (2008).
Further reading
Anderson, Benedict (1983). Imagined Communities. London: Verso.
Balibar, Renée & Laporte, Dominique (1974). Le français national: Politique et pratique de la langue nationale sous la Révolution. Paris: Hachette.
de Certeau, Michel; Julia, Dominique; & Revel, Jacques (1975). Une politique de la langue: La Révolution française et les patois. Paris: Gallimard.
Evangelista, M. (2003). "Culture, Identity, and Conflict: The Influence of Gender". In Conflict and Reconstruction in Multiethnic Societies. Washington, D.C.: The National Academies Press.
Fishman, Joshua A. (1973). Language and Nationalism: Two Integrative Essays. Rowley, MA: Newbury House.
Gellner, Ernest (1983). Nations and Nationalism. Oxford: Basil Blackwell.
Gordon, David C. (1978). The French Language and National Identity (1930–1975). The Hague: Mouton.
Milstein, T. & Castro-Sotomayor, J. (2020). Routledge Handbook of Ecocultural Identity. London, UK: Routledge. https://doi.org/10.4324/9781351068840
Robyns, Clem (1995). "Defending the national identity". In Andreas Poltermann (Ed.), Literaturkanon, Medienereignis, Kultureller Text. Berlin: Erich Schmidt Verlag.
Sparrow, Lise M. (2014). "Beyond multicultural man: Complexities of identity". In Molefi Kete Asante, Yoshitaka Miike, & Jing Yin (Eds.), The global intercultural communication reader (2nd ed., pp. 393–414). New York, NY: Routledge.
Stewart, Edward C., & Bennet, Milton J. (1991). American cultural patterns: A cross-cultural perspective (Rev. ed.). Yarmouth, ME: Intercultural Press.
Woolf, Stuart. "Europe and the Nation-State". EUI Working Papers in History 91/11. Florence: European University Institute.
Anthropology
Cultural geography
Identity
Cross-cultural psychology
Technology and society

Technology, society and life or technology and culture refers to the inter-dependency, co-dependence, co-influence, and co-production of technology and society upon one another. Evidence for this synergy has been found since humanity first started using simple tools. The inter-relationship has continued as modern technologies such as the printing press and computers have helped shape society. The first scientific approach to this relationship occurred with the development of tektology, the "science of organization", in early twentieth century Imperial Russia. In modern academia, the interdisciplinary study of the mutual impacts of science, technology, and society is called science and technology studies.
The simplest form of technology is the development and use of basic tools. The prehistoric discovery of how to control fire and the later Neolithic Revolution increased the available sources of food, and the invention of the wheel helped humans to travel in and control their environment. Developments in historic times, such as the printing press, the telephone, and the Internet, have lessened physical barriers to communication and allowed humans to interact freely on a global scale.
Technology has developed advanced economies, such as the modern global economy, and has led to the rise of a leisure class. Many technological processes produce by-products known as pollution, and deplete natural resources to the detriment of Earth's environment. Innovations influence the values of society and raise new questions in the ethics of technology. Examples include the rise of the notion of efficiency in terms of human productivity, and the challenges of bioethics.
Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism, anarcho-primitivism, and similar reactionary movements criticize the pervasiveness of technology, arguing that it harms the environment and alienates people. However, proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition.
Pre-historical
The importance of stone tools, in use from circa 2.5 million years ago, is considered fundamental to human development in the hunting hypothesis.
Primatologist Richard Wrangham theorizes that the control of fire by early humans and the associated development of cooking was the spark that radically changed human evolution. Texts such as Guns, Germs, and Steel suggest that early advances in plant agriculture and husbandry fundamentally shifted the way that collective groups of individuals, and eventually societies, developed.
Modern examples and effects
Technology has taken a large role in society and day-to-day life. As societies learn more about the development of a technology, they become better able to take advantage of it. Once an innovation reaches a certain point of adoption after it has been introduced and promoted, the technology becomes part of the society. The use of technology in education provides students with technology literacy, information literacy, the capacity for life-long learning, and other skills necessary for the 21st-century workplace. Digital technology has entered nearly every process and activity of the social system and has, in effect, constructed another worldwide communication system alongside those that preceded it.
A 1982 study by The New York Times described a technology assessment study by the Institute for the Future, "peering into the future of an electronic world." The study focused on the emerging videotex industry, formed by the marriage of two older technologies: communications and computing. It estimated that 40 percent of American households would have two-way videotex service by the end of the century. By comparison, it took television 16 years to penetrate 90 percent of households from the time commercial service began.
The creation of computers brought an entirely better approach to transmitting and storing data. Digital technology became commonly used for downloading music and watching movies at home, whether on DVDs or purchased online. Digital music recordings differ from traditional recording media in that they are easily reproducible, portable, and effectively free to copy.
Around the globe, many primary schools, universities and colleges have implemented educational technology. According to the statistics, in the early 1990s roughly 2–3% of schools, on average, used the Internet; by the end of the 1990s the figure had risen rapidly to about 60%, and by 2008 nearly 100% of schools used the Internet for educational purposes. According to ISTE researchers, technological improvements can lead to numerous achievements in classrooms: e-learning systems, student collaboration in project-based learning, and the building of technological skills for the future all contribute to student motivation.
Although these previous examples show only a few of the positive aspects of technology in society, there are negative side effects as well. Within this virtual realm, social media platforms such as Instagram, Facebook, and Snapchat have altered the way Generation Y culture understands the world and, thus, how its members view themselves. In recent years, there has been more research on the development of social media depression in users of sites like these. "Facebook depression" occurs when users are so affected by their friends' posts and lives that their own jealousy depletes their sense of self-worth. They compare themselves to the posts made by their peers and feel unworthy or monotonous because they feel like their lives are not nearly as exciting as the lives of others.
Technology also has serious effects on young people's health. The overuse of technology is said to be associated with sleep deprivation, which is linked to obesity and poor academic performance among adolescents.
Economics and technological development
In ancient history, economics began when spontaneous exchange of goods and services was replaced over time by deliberate trade structures. Makers of arrowheads, for example, might have realized they could do better by concentrating on making arrowheads and bartering for other needs. Regardless of the goods and services bartered, some amount of technology was involved, if no more than in the making of shell and bead jewelry. Even the shaman's potions and sacred objects can be said to have involved some technology. So, from the very beginnings, technology can be said to have spurred the development of more elaborate economies. Technology is seen as a primary source of economic development.
Technological advancement and economic growth are related to each other: the level of technology helps determine economic growth, and it is the technological process which keeps the economy moving.
In the modern world, superior technologies, resources, geography, and history give rise to robust economies; and in a well-functioning, robust economy, economic excess naturally flows into greater use of technology. Moreover, because technology is such an inseparable part of human society, especially in its economic aspects, funding sources for (new) technological endeavors are virtually illimitable. However, while in the beginning, technological investment involved little more than the time, efforts, and skills of one or a few men, today, such investment may involve the collective labor and skills of many millions.
Most recently, because of the COVID-19 pandemic, the proportion of firms employing advanced digital technology in their operations expanded dramatically. It was found that firms that adopted technology were better prepared to deal with the pandemic's disruptions. Adaptation strategies in the form of remote working, 3D printing, and the use of big data analytics and AI to plan activities to adapt to the pandemic were able to ensure positive job growth.
Funding
Consequently, the sources of funding for large technological efforts have dramatically narrowed, since few have ready access to the collective labor of a whole society, or even a large part. It is conventional to divide up funding sources into governmental (involving whole, or nearly whole, social enterprises) and private (involving more limited, but generally more sharply focused) business or individual enterprises.
Government funding for new technology
The government is a major contributor to the development of new technology in many ways. In the United States alone, many government agencies specifically invest billions of dollars in new technology.
In 1980, the UK government invested just over six million pounds in a four-year program, later extended to six years, called the Microelectronics Education Programme (MEP), which was intended to give every school in Britain at least one computer, software, training materials, and extensive teacher training. Similar programs have been instituted by governments around the world.
Technology has frequently been driven by the military, with many modern applications developed for the military before they were adapted for civilian use. However, this has always been a two-way flow, with industry often developing and adopting a technology only later adopted by the military.
Entire government agencies are specifically dedicated to research, such as America's National Science Foundation, the United Kingdom's scientific research institutes, and America's Small Business Innovation Research program. Many other government agencies dedicate a major portion of their budget to research and development.
Private funding
Research and development is one of the smallest areas of investments made by corporations toward new and innovative technology.
Many foundations and other nonprofit organizations contribute to the development of technology. In the OECD, about two-thirds of research and development in scientific and technical fields is carried out by industry, and about 20 percent and 10 percent, respectively, by universities and government. But in poorer countries such as Portugal and Mexico the industry contribution is significantly less. The U.S. government spends more than other countries on military research and development, although the proportion has fallen from about 30 percent in the 1980s to less than 10 percent.
The founding of Kickstarter in 2009 allowed individuals to receive funding via crowdsourcing for many technology-related products, including both new physical creations and documentaries, films, and web series that focus on technology management. This circumvents the corporate or government oversight most inventors and artists struggle against, but leaves accountability for the project entirely with the individual receiving the funds.
Other economic considerations
Appropriate technology, sometimes called "intermediate" technology, more of an economics concern, refers to compromises between central and expensive technologies of developed nations and those that developing nations find most effective to deploy given an excess of labour and scarcity of cash.
Persuasion technology: In economics, definitions or assumptions of progress or growth are often related to one or more assumptions about technology's economic influence. Challenging prevailing assumptions about technology and its usefulness has led to alternative ideas like uneconomic growth or measuring well-being. These, and economics itself, can often be described as technologies, specifically, as persuasion technology.
Technocapitalism
Technological diffusion
Technology acceptance model
Technology life cycle
Technology transfer
Relation to science
The relationship between science and technology can be complex. Science may drive technological development by generating demand for new instruments to address a scientific question, or by illustrating technical possibilities previously unconsidered. An environment that encourages science will also produce scientists and engineers, and technical schools, which in turn encourage the innovation and entrepreneurship capable of taking advantage of existing science. In fact, it is recognized that "innovators, like scientists, do require access to technical information and ideas" and "must know enough to recognize useful knowledge when they see it." Science spillover also contributes to greater technological diffusion. A strong policy of supporting basic science gives a country access to a strong knowledge base that leaves it "ready to exploit unforeseen developments in technology" when needed in times of crisis.
For most of human history, technological improvements were arrived at by chance, trial and error, or spontaneous inspiration. Stokes referred to these innovators as "improvers of technology ... who knew no science and would not have been helped by it if they had." This idea is supported by Diamond, who further indicated that these individuals are "more likely to achieve a breakthrough if [they do] not hold the currently dominant theory in too high regard." Research and development directed towards immediate technical application is a relatively recent occurrence, arising with the Industrial Revolution and becoming commonplace in the 20th century. In addition, there are examples of economies that did not emphasize science research yet were technological leaders. For example, the United States relied on the scientific output of Europe in the early 20th century, though it was regarded as a leader in innovation. Another example is the technological advancement of Japan in the latter part of the same century, which emphasized more applied science (directly applicable to technology).
Though the link between science and technology still needs clarification, it is clear that a society requires sufficient building blocks to encourage this link: a nation without emphasis on science is likely to eventually stagnate technologically and risk losing competitive advantage. The most critical areas of focus for policymakers are discouraging excessive protections on job security (which reduce workforce mobility), encouraging the reliable availability of sufficient low-cost capital for investment in R&D through favorable economic and tax policies, and supporting higher education in the sciences to produce scientists and engineers.
Sociological factors and effects
Values
The implementation of technology influences the values of a society by changing expectations and realities. The implementation of technology is also influenced by values. There are (at least) three major, interrelated values that inform, and are informed by, technological innovations:
Mechanistic world view: Viewing the universe as a collection of parts (like a machine) that can be individually analyzed and understood. This is a form of reductionism that is rare nowadays. However, the "neo-mechanistic world view" holds that nothing in the universe is beyond the understanding of the human intellect. Also, while all things are greater than the sum of their parts (e.g., even if we consider nothing more than the information involved in their combination), in principle even this excess must eventually be understood by human intelligence; that is, no divine or vital principle or essence is involved.
Efficiency: A value, originally applied only to machines, but now applied to all aspects of society, so that each element is expected to attain a higher and higher percentage of its maximal possible performance, output, or ability.
Social progress: The belief that there is such a thing as social progress, and that, in the main, it is beneficent. Before the Industrial Revolution, and the subsequent explosion of technology, almost all societies believed in a cyclical theory of social movement and, indeed, of all history and the universe. This was, obviously, based on the cyclicity of the seasons, and an agricultural economy's and society's strong ties to that cyclicity. Since much of the world remains close to its agricultural roots, it is still far more amenable to cyclicity than to progress in history. This may be seen, for example, in Prabhat Rainjan Sarkar's modern social cycles theory. For a more westernized version of social cyclicity, see Generations: The History of America's Future, 1584 to 2069 by Neil Howe and William Strauss (Harper Perennial, reprint edition, 1992), and subsequent books by these authors.
Institutions and groups
Technology often enables organizational and bureaucratic group structures that otherwise and heretofore were simply not possible. Examples of this might include:
The rise of very large organizations: e.g., governments, the military, health and social welfare institutions, supranational corporations.
The commercialization of leisure: sports events, products, etc. (McGinn)
The almost instantaneous dispersal of information (especially news) and entertainment around the world.
International
Technology enables greater knowledge of international issues, values, and cultures. Mostly because of mass transportation and mass media, the world seems a much smaller place, through the following:
Globalization of ideas
Embeddedness of values
Population growth and control
Environment
Technology can provide understanding of and appreciation for the world around us, enable sustainability and improve environmental conditions, but it can also degrade the environment and facilitate unsustainability.
Some polities may conclude that certain technologies' environmental detriments and other risks outweigh their benefits, especially if or once substitute technologies have been or can be invented, leading to directed technological phase-outs such as the fossil fuel phase-out and the nuclear fission power phase-out.
Most modern technological processes produce unwanted byproducts in addition to the desired products, which are known as waste and pollution. While material waste is often re-used in industrial processes, many processes lead to a release into the environment with negative environmental side effects, such as pollution and lack of sustainability.
Development and technologies' implications
Some technologies are designed specifically with the environment in mind, but most are designed first for financial or economic effects, such as the free market's profit motive. The effects of a specific technology are often dependent not only on how it is used (its usage context) but are also predetermined by the technology's design or characteristics, as in the theory of "the medium is the message", which relates to media technologies in particular. In many cases, such predetermined or built-in implications may vary with contextual conditions such as human biology, international relations and socioeconomics. However, many technologies may be harmful to the environment only when used in specific contexts or for specific purposes that do not necessarily result from the nature of the technology.
Values
Historically, from the perspective of economic agent-centered responsibility, a higher valuation of healthy environments and more efficient productive processes – as of 2021 still largely theoretical and informal – may result from an increase in the wealth of society. Once people are able to provide for their basic needs, they can not only afford more environmentally destructive products and services, but may also put morality-motivated individual effort into valuing less tangible goods such as clean air and water, provided information about products, alternatives, consequences and services is adequate.
From the perspective of systems science and cybernetics, the economic actors and sectors of an economy make decisions based on a range of system-internal factors and structures; other architectures, or system-level configurations of the existing designs, would lead to other outcomes, and such alternatives are considered possible in the sense that they could be modeled, tested, assessed in advance, developed and studied.
Negative effects on the environment
The effects of technology on the environment are both obvious and subtle. The more obvious effects include the depletion of nonrenewable natural resources (such as petroleum, coal, ores), and the added pollution of air, water, and land. The more subtle effects may include long-term effects (e.g. global warming, deforestation, natural habitat destruction, coastal wetland loss.)
Pollution and energy requirements
Each wave of technology creates a set of waste previously unknown by humans: toxic waste, radioactive waste, electronic waste, plastic waste, space waste.
Electronic waste creates direct environmental impacts through producing and maintaining the infrastructure necessary for using technology, and indirect impacts by breaking down barriers to global interaction through the use of information and communications technology. Certain uses of information technology and infrastructure maintenance consume energy that contributes to global warming; this includes software designs such as international cryptocurrencies and most hardware powered by nonrenewable sources.
One of the main problems is the lack of societal decision-making processes – in the contemporary economy and politics – that would lead to the sufficient and expedient implementation of existing and potential ways to remove, recycle and prevent these pollutants on a large scale.
Digital technologies, however, are important in achieving the green transition and specifically, the SDGs and European Green Deal's environmental targets. Emerging digital technologies, if correctly applied, have the potential to play a critical role in addressing environmental issues. A few examples are: smart city mobility, precision agriculture, sustainable supply chains, environmental monitoring, and catastrophe prediction.
Construction and shaping
Choice
Society also controls technology through the choices it makes. These choices not only include consumer demands; they also include:
the channels of distribution: how products go from raw materials through consumption to disposal;
the cultural beliefs regarding style, freedom of choice, consumerism, materialism, etc.;
the economic values we place on the environment, individual wealth, government control, capitalism, etc.
According to Williams and Edge, the construction and shaping of technology includes the concept of choice (and not necessarily conscious choice). Choice is inherent in both the design of individual artifacts and systems, and in the making of those artifacts and systems.
The idea here is that a single technology may not emerge from the unfolding of a predetermined logic or a single determinant; technology could be a garden of forking paths, with different paths potentially leading to different technological outcomes. This is a position that has been developed in detail by Judy Wajcman. Choices could therefore have differing implications for society and for particular social groups.
Autonomous technology
In one line of thought, technology develops autonomously, in other words, technology seems to feed on itself, moving forward with a force irresistible by humans. To these individuals, technology is "inherently dynamic and self-augmenting."
Jacques Ellul is one proponent of the irresistibleness of technology to humans. He espouses the idea that humanity cannot resist the temptation of expanding its knowledge and technological abilities. However, he does not believe that this seeming autonomy of technology is inherent; rather, the autonomy is perceived because humans do not adequately consider the responsibility that is inherent in technological processes.
Langdon Winner critiques the idea that technological evolution is essentially beyond the control of individuals or society in his book Autonomous Technology. He argues instead that the apparent autonomy of technology is a result of "technological somnambulism," the tendency of people to uncritically and unreflectively embrace and utilize new technologies without regard for their broader social and political effects.
In 1980, Mike Cooley published a critique of the automation and computerisation of engineering work under the title Architect or Bee? The Human/Technology Relationship. The title alludes to a comparison made by Karl Marx on the creative achievements of human imaginative power. According to Cooley, "Scientific and technological developments have invariably proved to be double-edged. They produced the beauty of Venice and the hideousness of Chernobyl; the caring therapies of Röntgen's X-rays and the destruction of Hiroshima."
Government
Individuals rely on governmental assistance to control the side effects and negative consequences of technology.
Supposed independence of government. An assumption commonly made about government is that its governance role is neutral or independent. However, some argue that governing is a political process, so government will be swayed by political winds of influence. In addition, because government provides much of the funding for technological research and development, it has a vested interest in certain outcomes. Others point out that the world's biggest ecological disasters, such as the Aral Sea, Chernobyl, and Lake Karachay, have been caused by government projects, which are not accountable to consumers.
Liability. One means for controlling technology is to place responsibility for the harm with the agent causing the harm. Government can allow more or less legal liability to fall to the organizations or individuals responsible for damages.
Legislation. A source of controversy is the role of industry versus that of government in maintaining a clean environment. While it is generally agreed that industry needs to be held responsible when pollution harms other people, there is disagreement over whether this should be prevented by legislation or civil courts, and whether ecological systems as such should be protected from harm by governments.
Recently, the social shaping of technology has had new influence in the fields of e-science and e-social science in the United Kingdom, which has made centers focusing on the social shaping of science and technology a central part of their funding programs.
Further reading
Bereano, P. (1977). Technology as a Social and Political Phenomenon. Wiley & Sons.
Dickson, D. (1977). Politics of Alternative Technology. Universe Publisher.
Easton, T. (2011). Taking Sides: Clashing Views in Science, Technology, and Society. McGraw-Hill/Dushkin.
Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment. Gabriola Island, British Columbia: New Society Publishers. 464 pp.
Korotayev, Andrey; Malkov, Artemy; & Khaltourina, Daria. Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth.
MacKenzie, D., and Wajcman, J. (1999). The Social Shaping of Technology. McGraw Hill Education.
Mesthene, E.G. (1970). Technological Change: Its Impact on Man and Society. Harvard University Press.
Mumford, L. (2010). Technics and Civilization. University of Chicago Press.
Postman, N. (1993). Technopoly: The Surrender of Culture to Technology. Vintage.
Sclove, R.E. (1995). Democracy and Technology. The Guilford Press.
Senor, Dan, and Singer, Saul (2009). Start-up Nation: The Story of Israel's Economic Miracle. New York: Hachette Book Group.
Shaw, Jeffrey M. (2014). Illusions of Freedom: Thomas Merton and Jacques Ellul on Technology and the Human Condition. Eugene, OR: Wipf and Stock.
Sicilia, David B.; Wittner, David G. (2021). Strands of Modernization: The Circulation of Technology and Business Practices in East Asia, 1850–1920. University of Toronto Press.
Volti, Rudi (2017). Society and Technological Change. New York: Worth.
External links
Science, Technology, and Society: An International Journal
Scientists for Global Responsibility
Technology and Society Books and Journal Articles
Technology and Society Book Reviews
The Loka Institute
The New Atlantis: A Journal of Technology and Society
Union of Concerned Scientists
Interdisciplinary historical research
Social information processing
Sociological theories
Technological change
Technology
Technology systems
Social class

A social class or social stratum is a grouping of people into a set of hierarchical social categories, the most common being the working class, middle class, and upper class. Membership of a social class can for example be dependent on education, wealth, occupation, income, and belonging to a particular subculture or social network.
Class is a subject of analysis for sociologists, political scientists, anthropologists and social historians. The term has a wide range of sometimes conflicting meanings, and there is no broad consensus on a definition of class. Some people argue that due to social mobility, class boundaries do not exist. In common parlance, the term social class is usually synonymous with socioeconomic class, defined as "people having the same social, economic, cultural, political or educational status", e.g. the working class, "an emerging professional class" etc. However, academics distinguish social class from socioeconomic status, using the former to refer to one's relatively stable cultural background and the latter to refer to one's current social and economic situation which is consequently more changeable over time.
The precise measurements of what determines social class in society have varied over time. Karl Marx defined class by one's relationship to the means of production (their relations of production). His understanding of classes in modern capitalist society is that the proletariat work but do not own the means of production, and the bourgeoisie, those who invest and live off the surplus generated by the proletariat's operation of the means of production, do not work at all. This contrasts with the view of the sociologist Max Weber, who contrasted class, determined by economic position, with social status (Stand), which is determined by social prestige rather than simply relations of production. The term class is etymologically derived from the Latin classis, which was used by census takers to categorize citizens by wealth in order to determine military service obligations.
In the late 18th century, the term class began to replace classifications such as estates, rank and orders as the primary means of organizing society into hierarchical divisions. This corresponded to a general decrease in significance ascribed to hereditary characteristics and increase in the significance of wealth and income as indicators of position in the social hierarchy.
The existence of social classes is considered normal in many societies, both historic and modern, to varying degrees.
History
Ancient Egypt
The existence of a class system dates back to the times of Ancient Egypt, where the position of the elite was also characterized by literacy. Wealthier people were at the top of the social order, with common people and slaves at the bottom. However, the class system was not rigid; a man of humble origins could ascend to a high post.
The ancient Egyptians viewed men and women, including people from all social classes, as essentially equal under the law, and even the lowliest peasant was entitled to petition the vizier and his court for redress.
Farmers made up the bulk of the population, but agricultural produce was owned directly by the state, temple, or noble family that owned the land. Farmers were also subject to a labor tax and were required to work on irrigation or construction projects in a corvée system. Artists and craftsmen were of higher status than farmers, but they were also under state control, working in the shops attached to the temples and paid directly from the state treasury. Scribes and officials formed the upper class in ancient Egypt, known as the "white kilt class" in reference to the bleached linen garments that served as a mark of their rank. The upper class prominently displayed their social status in art and literature. Below the nobility were the priests, physicians, and engineers with specialized training in their field. It is unclear whether slavery as understood today existed in ancient Egypt; opinions differ among authors.
Although slaves were mostly used as indentured servants, they were able to buy and sell their servitude, work their way to freedom or nobility, and were usually treated by doctors in the workplace.
Elsewhere
In Ancient Greece, classes emerged as the clan system was declining: classes replaced clan society when it became too small to sustain the needs of an increasing population. The division of labor was also essential for the growth of classes.
Historically, social class and behavior were laid down in law. For example, the permitted mode of dress in some times and places was strictly regulated, with sumptuous dressing reserved for the high ranks of society and aristocracy, and sumptuary laws stipulating the dress and jewelry appropriate for a person's social rank and station. In Europe, these laws became increasingly commonplace during the Middle Ages. However, these laws were prone to change with societal change, and in many cases the distinctions could almost disappear, as when the distinction between a patrician and a plebeian was nearly erased during the late Roman Republic.
Jean-Jacques Rousseau had a large influence over the political ideals of the French Revolution because of his views of inequality and classes. Rousseau saw humans as "naturally pure and good," meaning that humans from birth were seen as innocent and any evilness was learned. He believed that social problems arise through the development of society and suppress the innate pureness of humankind. He also believed that private property is the main reason for social issues in society, because private property creates inequality through the property's value. Even though his theory predicted that without private property there would be widespread equality, Rousseau accepted that there will always be social inequality because of how society is viewed and run.
Later Enlightenment thinkers viewed inequality as valuable and crucial to society's development and prosperity. They also acknowledged that private property will ultimately cause inequality, because specific resources that are privately owned can be stored and their owners can profit from the scarcity of the resource. This can create competition between the classes, which these thinkers saw as necessary. It also creates stratification, keeping a distinct difference between lower, poorer classes and the higher, wealthier classes.
India, Nepal, North Korea, Sri Lanka and some Indigenous peoples maintain social classes today.
In class societies, class conflict has tended to recur or to be ongoing, depending on the sociological and anthropological perspective. Class societies have not always existed; there have been widely different types of class communities, including, for example, societies based on age rather than capital. During colonialism, social relations were dismantled by force, which gave rise to societies based on the social categories of waged labor, private property, and capital.
Class society
Class society or class-based society is an organizing principle of society in which ownership of property, the means of production, and wealth is the determining factor in the distribution of power: those with more property and wealth are stratified higher in the society, and those without access to the means of production and without wealth are stratified lower. In a class society, at least implicitly, people are divided into distinct social strata, commonly referred to as social classes or castes. The nature of class society is a matter of sociological research. Class societies exist all over the globe in both industrialized and developing nations. Class stratification is theorized to come directly from capitalism. In terms of public opinion, nine out of ten people in a Swedish survey considered it correct to say that they live in a class society.
Comparative sociological research
One may use comparative methods to study class societies, using, for example, comparison of Gini coefficients, de facto educational opportunities, unemployment, and culture.
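Of these measures, the Gini coefficient is the most readily computed. A minimal sketch in Python follows; the household figures are invented for illustration and are not drawn from any survey. It uses the standard rank formula on ascending-sorted data, G = 2·Σ(i·xᵢ)/(n·Σx) − (n+1)/n:

```python
def gini(values):
    """Gini coefficient of a list of incomes or wealth holdings.

    Returns 0.0 for perfect equality; values approach 1.0 as one
    holder comes to dominate the whole distribution.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Sum of rank * value over the ascending-sorted series (1-indexed).
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

# Hypothetical household incomes for two societies (illustrative only).
equalish = [30_000, 32_000, 35_000, 38_000, 40_000]
skewed = [10_000, 12_000, 15_000, 20_000, 500_000]
print(gini(equalish))  # ~0.06: incomes are similar
print(gini(skewed))    # ~0.71: one household dominates
```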
Effect on the population
Societies with large class differences have a greater proportion of people who suffer from mental health issues such as anxiety and depression symptoms, a relationship demonstrated by a series of scientific studies. Statistics support this assertion in life expectancy and overall health: for example, the life expectancies of the poorer, less-well-educated inhabitants of one Stockholm suburb and of the highly educated, more affluent inhabitants living near Danderyd differ by 18 years.
Similar data are available for New York on life expectancy, average income per capita, income distribution, median income mobility for people who grew up poor, and the share of residents with a bachelor's degree or higher.
In class societies, the lower classes systematically receive lower-quality education and care. There are more explicit effects where those within the higher class actively demonize parts of the lower-class population.
Theoretical models
Definitions of social classes reflect a number of sociological perspectives, informed by anthropology, economics, psychology and sociology. The major perspectives historically have been Marxism and structural functionalism. The common stratum model of class divides society into a simple hierarchy of working class, middle class and upper class. Within academia, two broad schools of definitions emerge: those aligned with 20th-century sociological stratum models of class society and those aligned with the 19th-century historical materialist economic models of the Marxists and anarchists.
Another distinction can be drawn between analytical concepts of social class, such as the Marxist and Weberian traditions, and the more empirical traditions such as the socioeconomic status approach, which notes the correlation of income, education and wealth with social outcomes without necessarily implying a particular theory of social structure.
Marxist
For Marx, class is a combination of objective and subjective factors. Objectively, a class shares a common relationship to the means of production. The class society itself is understood as the aggregated phenomenon to the "interlinked movement", which generates the quasi-objective concept of capital. Subjectively, the members will necessarily have some perception ("class consciousness") of their similarity and common interest. Class consciousness is not simply an awareness of one's own class interest but is also a set of shared views regarding how society should be organized legally, culturally, socially and politically. These class relations are reproduced through time.
In Marxist theory, the class structure of the capitalist mode of production is characterized by the conflict between two main classes: the bourgeoisie, the capitalists who own the means of production and the much larger proletariat (or "working class") who must sell their own labour power (wage labour). This is the fundamental economic structure of work and property, a state of inequality that is normalized and reproduced through cultural ideology.
For Marxists, every person in the process of production has separate social relationships and issues. Along with this, every person is placed into groups with similar interests and values that can differ drastically from group to group. Class is special in that it does not relate specifically to a singular person, but to a specific role.
Marxists explain the history of "civilized" societies in terms of a war of classes between those who control production and those who produce the goods or services in society. In the Marxist view of capitalism, this is a conflict between capitalists (bourgeoisie) and wage-workers (the proletariat). For Marxists, class antagonism is rooted in the situation that control over social production necessarily entails control over the class which produces goods—in capitalism this is the exploitation of workers by the bourgeoisie.
Furthermore, "in countries where modern civilisation has become fully developed, a new class of petty bourgeois has been formed". "An industrial army of workmen, under the command of a capitalist, requires, like a real army, officers (managers) and sergeants (foremen, over-lookers) who, while the work is being done, command in the name of the capitalist".
Marx makes the argument that, as the bourgeoisie reach a point of wealth accumulation, they hold enough power as the dominant class to shape political institutions and society according to their own interests. Marx then goes on to claim that the non-elite class, owing to their large numbers, have the power to overthrow the elite and create an equal society.
In The Communist Manifesto, Marx himself argued that it was the goal of the proletariat itself to displace the capitalist system with socialism, changing the social relationships underpinning the class system and then developing into a future communist society in which: "the free development of each is the condition for the free development of all". This would mark the beginning of a classless society in which human needs rather than profit would be motive for production. In a society with democratic control and production for use, there would be no class, no state and no need for financial and banking institutions and money.
Later Marxist theorists have taken this binary class system and expanded it to include contradictory class locations: the idea that a person can be employed in many different class locations that fall between the two classes of proletariat and bourgeoisie. Erik Olin Wright stated that class definitions are more diverse and elaborate when one accounts for identification with multiple classes, familial ties to people in a different class, or temporary leadership roles.
Weberian
Max Weber formulated a three-component theory of stratification that saw social class as emerging from an interplay between "class", "status" and "power". Weber believed that class position was determined by a person's relationship to the means of production, while status or "Stand" emerged from estimations of honor or prestige.
Weber views class as a group of people who have common goals and opportunities that are available to them. This means that what separates each class from each other is their value in the marketplace through their own goods and services. This creates a divide between the classes through the assets that they have such as property and expertise.
Weber derived many of his key concepts on social stratification by examining the social structure of many countries. He noted that contrary to Marx's theories, stratification was based on more than simply ownership of capital. Weber pointed out that some members of the aristocracy lack economic wealth yet might nevertheless have political power. Likewise in Europe, many wealthy Jewish families lacked prestige and honor because they were considered members of a "pariah group".
Class: A person's economic position in a society. Weber differs from Marx in that he does not see this as the supreme factor in stratification. Weber noted how managers of corporations or industries control firms they do not own.
Status: A person's prestige, social honour or popularity in a society. Weber noted that political power was not rooted in capital value solely, but also in one's status. Poets and saints, for example, can possess immense influence on society with often little economic worth.
Power: A person's ability to get their way despite the resistance of others. For example, individuals in state jobs, such as an employee of the Federal Bureau of Investigation, or a member of the United States Congress, may hold little property or status, but they still hold immense power.
Bourdieu
For Bourdieu, the place in the social strata for any person is vaguer than the equivalent in Weberian sociology. Bourdieu introduced an array of concepts he refers to as types of capital. The first of these is economic capital, in the form of assets convertible to money and secured as private property. This type of capital is separate from the other, culturally constituted types of capital that Bourdieu introduces: personal cultural capital (formal education, knowledge); objective cultural capital (books, art); and institutionalized cultural capital (honours and titles).
Great British Class Survey
On 2 April 2013, the results of a survey conducted by BBC Lab UK developed in collaboration with academic experts and slated to be published in the journal Sociology were published online. The results released were based on a survey of 160,000 residents of the United Kingdom most of whom lived in England and described themselves as "white". Class was defined and measured according to the amount and kind of economic, cultural and social resources reported. Economic capital was defined as income and assets; cultural capital as amount and type of cultural interests and activities; and social capital as the quantity and social status of their friends, family and personal and business contacts. This theoretical framework was developed by Pierre Bourdieu who first published his theory of social distinction in 1979.
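As a toy illustration of this multi-capital approach, one might reduce a survey response to three separate capital scores. This is a hedged sketch only: the fields, normalising constants and example values below are invented for illustration and are not those of the actual BBC survey instrument.

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    income: float             # economic capital: annual income
    assets: float             # economic capital: savings and property
    cultural_activities: int  # cultural capital: count of reported interests
    contact_status: float     # social capital: mean status score of contacts

def capital_profile(r: Respondent) -> dict:
    """Reduce one survey response to three capital scores in [0, 1].

    The normalising constants are arbitrary placeholders, not the
    weights used by the Great British Class Survey.
    """
    economic = min((r.income + 0.05 * r.assets) / 100_000, 1.0)
    cultural = min(r.cultural_activities / 20, 1.0)
    social = min(r.contact_status / 10, 1.0)
    return {"economic": economic, "cultural": cultural, "social": social}

print(capital_profile(Respondent(28_000, 5_000, 4, 3.2)))
# {'economic': 0.2825, 'cultural': 0.2, 'social': 0.32}
```

Keeping the three scores separate, rather than collapsing them into a single index, mirrors the survey's premise that economic, cultural and social resources can diverge for the same person.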
Three-level economic class model
Today, concepts of social class often assume three general economic categories: a very wealthy and powerful upper class that owns and controls the means of production; a middle class of professional workers, small business owners and low-level managers; and a lower class, who rely on low-paying jobs for their livelihood and experience poverty.
Upper class
The upper class is the social class composed of those who are rich, well-born, powerful, or a combination of those. They usually wield the greatest political power. In some countries, wealth alone is sufficient to allow entry into the upper class. In others, only people who are born or marry into certain aristocratic bloodlines are considered members of the upper class, and those who gain great wealth through commercial activity are looked down upon by the aristocracy as nouveau riche. In the United Kingdom, for example, the upper classes are the aristocracy and royalty, with wealth playing a less important role in class status. Many aristocratic peerages or titles have seats attached to them, with the holder of the title (e.g. Earl of Bristol) and his family being the custodians of the house, but not the owners. Many of these require high expenditures, so wealth is typically needed. Many aristocratic peerages and their homes are parts of estates, owned and run by the title holder with moneys generated by the land, rents or other sources of wealth. However, in the United States, where there is no aristocracy or royalty, upper-class status is ascribed in the media to the extremely wealthy, the so-called "super-rich", beyond Americans of ancestral wealth or patricians of European ancestry, though even in the United States there is some tendency for those with old family wealth to look down on those who have accrued their money through business: the struggle between new money and old money.
The upper class is generally contained within the richest one or two percent of the population. Members of the upper class are often born into it and are distinguished by immense wealth which is passed from generation to generation in the form of estates. According to some newer social and political theories, the upper class consists of the wealthiest decile of society, which holds nearly 87% of society's wealth.
Middle class
See also: Middle-class squeeze
The middle class is the most contested of the three categories, the broad group of people in contemporary society who fall socio-economically between the lower and upper classes. One example of the contest of this term is that in the United States "middle class" is applied very broadly and includes people who would elsewhere be considered working class. Middle-class workers are sometimes called "white-collar workers".
Theorists such as Ralf Dahrendorf have noted the tendency toward an enlarged middle class in modern Western societies, particularly in relation to the necessity of an educated work force in technological economies. Perspectives concerning globalization and neocolonialism, such as dependency theory, suggest this is due to the shift of low-level labour to developing nations and the Third World.
The middle class is the group of people with jobs that pay significantly more than the poverty line; examples include factory workers, salespeople, teachers, cooks and nurses. Some scholars have recently assumed that the size of the middle class is the same in every society: for example, in paradox-of-interest theory, the middle class comprises those in the 6th–9th decile groups, which hold nearly 12% of society's wealth. Decile shares of this kind are simple to reproduce, as the sketch below shows.
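The 87% and 12% decile figures cited in this section can be recomputed from any wealth distribution. A minimal sketch, using randomly generated stand-in data rather than real survey figures:

```python
import random

def decile_shares(wealth):
    """Sort a wealth list, split it into ten equal-size groups, and
    return each decile's share of total wealth (poorest first)."""
    xs = sorted(wealth)
    n = len(xs)
    total = sum(xs)
    size = n // 10  # assumes n is a multiple of 10, for simplicity
    return [sum(xs[i * size:(i + 1) * size]) / total for i in range(10)]

# 100 hypothetical households drawn from a heavy-tailed distribution.
random.seed(1)
households = [random.paretovariate(1.2) for _ in range(100)]
shares = decile_shares(households)
# With so heavy a tail, the top decile typically holds well over half.
print(f"top decile share: {shares[-1]:.0%}")
print(f"6th-9th decile share: {sum(shares[5:9]):.0%}")
```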
Lower class
Lower class (occasionally described as working class) are those employed in low-paying wage jobs with very little economic security. The term "lower class" also refers to persons with low income.
The working class is sometimes separated into those who are employed but lacking financial security (the "working poor") and an underclass—those who are long-term unemployed and/or homeless, especially those receiving welfare from the state. The latter is today considered analogous to the Marxist term "lumpenproletariat". However, during the time of Marx's writing the lumpenproletariat referred to those in dire poverty; such as the homeless. Members of the working class are sometimes called blue-collar workers.
Consequences of class position
A person's socioeconomic class has wide-ranging effects. It can determine the schools they are able to attend, their health, the jobs open to them, when they exit the labour market, whom they may marry and their treatment by police and the courts.
Angus Deaton and Anne Case have analyzed the mortality rates of white, middle-aged Americans between the ages of 45 and 54 in relation to class. There has been a growing number of suicides and deaths by substance abuse in this particular group of middle-class Americans, which has also recorded an increase in reports of chronic pain and poor general health. From these observations, Deaton and Case concluded that the constant stress these white, middle-aged Americans feel fighting poverty and wavering between the middle and lower classes takes a toll on them and affects their whole bodies.
Social classifications can also determine the sporting activities that such classes take part in. It is suggested that those of an upper social class are more likely to take part in sporting activities, whereas those of a lower social background are less likely to participate in sport. However, upper-class people tend to not take part in certain sports that have been commonly known to be linked with the lower class.
Social privilege
Education
A person's social class has a significant effect on their educational opportunities. Not only are upper-class parents able to send their children to exclusive schools that are perceived to be better, but in many places, state-supported schools for children of the upper class are of a much higher quality than those the state provides for children of the lower classes. This lack of good schools is one factor that perpetuates the class divide across generations.
In the UK, the educational consequences of class position have been discussed by scholars inspired by the cultural studies framework of the CCCS and/or, especially regarding working-class girls, feminist theory. On working-class boys, Paul Willis' 1977 book Learning to Labour: How Working Class Kids Get Working Class Jobs is seen within the British Cultural Studies field as a classic discussion of their antipathy to the acquisition of knowledge. Beverley Skeggs described Learning to Labour as a study on the "irony" of "how the process of cultural and economic reproduction is made possible by 'the lads'' celebration of the hard, macho world of work."
Health and nutrition
A person's social class often affects their physical health, their ability to receive adequate medical care and nutrition and their life expectancy.
Lower-class people experience a wide array of health problems as a result of their economic status. They are unable to use health care as often and when they do it is of lower quality, even though they generally tend to experience a much higher rate of health issues. Lower-class families have higher rates of infant mortality, cancer, cardiovascular disease and disabling physical injuries. Additionally, poor people tend to work in much more hazardous conditions, yet generally have much less (if any) health insurance provided for them, as compared to middle- and upper-class workers.
Employment
The conditions at a person's job vary greatly depending on class. Those in the upper-middle class and middle class enjoy greater freedoms in their occupations. They are usually more respected, enjoy more diversity and are able to exhibit some authority. Those in lower classes tend to feel more alienated and have lower work satisfaction overall. The physical conditions of the workplace differ greatly between classes. While middle-class workers may "suffer alienating conditions" or "lack of job satisfaction", blue-collar workers are more apt to suffer alienating, often routine, work with obvious physical health hazards, injury and even death.
In the UK, a 2015 government study by the Social Mobility Commission suggested the existence of a "glass floor" in British society preventing those who are less able, but who come from wealthier backgrounds, from slipping down the social ladder. The report found that less able, better-off children are 35% more likely to become high earners than bright poor children.
Class conflict
Class conflict, frequently referred to as class struggle, is the tension or antagonism which exists in society due to competing socioeconomic interests and desires between people of different classes.
For Marx, the history of class society was a history of class conflict. He pointed to the successful rise of the bourgeoisie and the necessity of revolutionary violence—a heightened form of class conflict—in securing the bourgeois rights that supported the capitalist economy.
Marx believed that the exploitation and poverty inherent in capitalism were a pre-existing form of class conflict. Marx believed that wage labourers would need to revolt to bring about a more equitable distribution of wealth and political power.
Classless society
A "classless" society is one in which no one is born into a social class. Distinctions of wealth, income, education, culture or social network might arise and would only be determined by individual experience and achievement in such a society.
Since these distinctions are difficult to avoid, advocates of a classless society (such as anarchists and communists) propose various means to achieve and maintain it and attach varying degrees of importance to it as an end in their overall programs/philosophy.
Relationship between ethnicity and class
Race and other large-scale groupings can also influence class standing. The association of particular ethnic groups with class statuses is common in many societies, and is linked with race as well. Class and ethnicity can impact a person's or community's socioeconomic standing, which in turn influences everything including job availability and the quality of available health and education. The labels ascribed to an individual change the way others perceive them, with multiple labels associated with stigma combining to worsen the social consequences of being labelled.
As a result of conquest or internal ethnic differentiation, a ruling class is often ethnically homogenous and particular races or ethnic groups in some societies are legally or customarily restricted to occupying particular class positions. Which ethnicities are considered as belonging to high or low classes varies from society to society.
In modern societies, strict legal links between ethnicity and class have been drawn, such as the caste system in Africa, apartheid, the position of the Burakumin in Japanese society and the casta system in Latin America.
See also
Caste
Class stratification
Drift hypothesis
Elite theory
Elitism
Four occupations
Health equity
Hostile architecture
Inca society
Mass society
National Statistics Socio-economic Classification
Passing (sociology)
Post-industrial society
Psychology of social class
Ranked society
Raznochintsy
Welfare state
Notes
References
Bibliography
Evertsson, Marie & Magnusson, Charlotta (eds.), Ojämlikhetens dimensioner [The Dimensions of Inequality]
Israel, Joachim, Om konsten att lyfta sig själv i håret och behålla barnet i badvattnet: kritiska synpunkter på samhällsvetenskapens vetenskapsteori [On the Art of Lifting Oneself by the Hair and Keeping the Baby in the Bathwater: Critical Views on the Philosophy of Science of the Social Sciences]
Wilkinson, Richard G. & Pickett, Kate, The Inner Level: How More Equal Societies Reduce Stress, Restore Sanity and Improve Everyone's Well-being
Further reading
Archer, Louise et al. Higher Education and Social Class: Issues of Exclusion and Inclusion (RoutledgeFalmer, 2003)
Aronowitz, Stanley, How Class Works: Power and Social Movement, Yale University Press, 2003.
Beckert, Sven, and Julia B. Rosenbaum, eds. The American Bourgeoisie: Distinction and Identity in the Nineteenth Century (Palgrave Macmillan; 2011) 284 pages; Scholarly studies on the habits, manners, networks, institutions, and public roles of the American middle class with a focus on cities in the North.
Benschop, Albert. Classes – Transformational Class Analysis (Amsterdam: Spinhuis; 1993/2012).
Bertaux, Daniel & Thompson, Paul; Pathways to Social Class: A Qualitative Approach to Social Mobility (Clarendon Press, 1997)
Bisson, Thomas N.; Cultures of Power: Lordship, Status, and Process in Twelfth-Century Europe (University of Pennsylvania Press, 1995)
Blau, Peter & Duncan Otis D.; The American Occupational Structure (1967) classic study of structure and mobility
Brady, David "Rethinking the Sociological Measurement of Poverty" Social Forces Vol. 81 No. 3, (March 2003), pp. 715–751 (abstract online in Project Muse).
Broom, Leonard & Jones, F. Lancaster; Opportunity and Attainment in Australia (1977)
Cohen, Lizabeth; Consumer's Republic, (Knopf, 2003). (Historical analysis of the working out of class in the United States).
Connell, R.W. and Irving, T.H., 1992. Class Structure in Australian History: Poverty and Progress. Longman Cheshire.
Dahrendorf, Ralf; Class and Class Conflict in Industrial Society (Stanford University Press, 1959). (Good study of Marx's concept.)
Dargin, Justin The Birth of Russia's Energy Class, Asia Times (2007) (good study of contemporary class formation in Russia, post communism)
Day, Gary; Class, (Routledge, 2001)
Domhoff, G. William, Who Rules America? Power, Politics, and Social Change, Englewood Cliffs, NJ : Prentice-Hall, 1967. (Domhoff's companion site to the book at the University of California, Santa Cruz)
Eichar, Douglas M.; Occupation and Class Consciousness in America (Greenwood Press, 1989)
Fantasia, Rick; Levine, Rhonda F.; McNall, Scott G., eds.; Bringing Class Back in Contemporary and Historical Perspectives (Westview Press, 1991)
Featherman, David L. & Hauser Robert M.; Opportunity and Change (1978).
Fotopoulos, Takis, Class Divisions Today: The Inclusive Democracy approach, Democracy & Nature, Vol. 6, No. 2, (July 2000)
Fussell, Paul; Class (a painfully accurate guide through the American status system), (1983)
Giddens, Anthony; The Class Structure of the Advanced Societies, (London: Hutchinson, 1981).
Giddens, Anthony & Mackenzie, Gavin (Eds.), Social Class and the Division of Labour. Essays in Honour of Ilya Neustadt (Cambridge: Cambridge University Press, 1982).
Goldthorpe, John H. & Erikson Robert; The Constant Flux: A Study of Class Mobility in Industrial Society (1992)
Grusky, David B. ed.; Social Stratification: Class, Race, and Gender in Sociological Perspective (2001) scholarly articles
Hazelrigg, Lawrence E. & Lopreato, Joseph; Class, Conflict, and Mobility: Theories and Studies of Class Structure (1972).
Hymowitz, Kay; Marriage and Caste in America: Separate and Unequal Families in a Post-Marital Age (2006)
Kaelble, Hartmut; Social Mobility in the Nineteenth and Twentieth Centuries: Europe and America in Comparative Perspective (1985)
Jakopovich, Daniel, The Concept of Class, Cambridge Studies in Social Research, No. 14, Social Science Research Group, University of Cambridge, 2014
Jens Hoff, "The Concept of Class and Public Employees". Acta Sociologica, vol. 28, no. 3, July 1985, pp. 207–226.
Mahalingam, Ramaswami; "Essentialism, Culture, and Power: Representations of Social Class" Journal of Social Issues, Vol. 59, (2003), pp. 733+ on India
Mahony, Pat & Zmroczek, Christine; Class Matters: 'Working-Class' Women's Perspectives on Social Class (Taylor & Francis, 1997)
Manza, Jeff & Brooks, Clem; Social Cleavages and Political Change: Voter Alignments and U.S. Party Coalitions (Oxford University Press, 1999).
Manza, Jeff; "Political Sociological Models of the U.S. New Deal" Annual Review of Sociology, (2000) pp. 297+
Marmot, Michael; The Status Syndrome: How Social Standing Affects Our Health and Longevity (2004)
Marx, Karl & Engels, Frederick; The Communist Manifesto, (1848). (The key statement of class conflict as the driver of historical change).
Merriman, John M.; Consciousness and Class Experience in Nineteenth-Century Europe (Holmes & Meier Publishers, 1979)
Ostrander, Susan A.; Women of the Upper Class (Temple University Press, 1984).
Owensby, Brian P.; Intimate Ironies: Modernity and the Making of Middle-Class Lives in Brazil (Stanford University, 1999).
Pakulski, Jan & Waters, Malcolm; The Death of Class (Sage, 1996). (rejection of the relevance of class for modern societies)
Payne, Geoff; The Social Mobility of Women: Beyond Male Mobility Models (1990)
Savage, Mike; Class Analysis and Social Transformation (London: Open University Press, 2000).
Stahl, Garth; "Identity, Neoliberalism and Aspiration: Educating White Working-Class Boys" (London, Routledge, 2015).
Sennett, Richard & Cobb, Jonathan; The Hidden Injuries of Class, (Vintage, 1972) (classic study of the subjective experience of class).
Siegelbaum, Lewis H. & Suny, Ronald, eds.; Making Workers Soviet: Power, Class, and Identity (Cornell University Press, 1994). Russia 1870–1940
Walkowitz, Daniel J.; Working with Class: Social Workers and the Politics of Middle-Class Identity (University of North Carolina Press, 1999).
Weber, Max. "Class, Status and Party", in e.g. Gerth, Hans and C. Wright Mills, From Max Weber: Essays in Sociology, (Oxford University Press, 1958). (Weber's key statement of the multiple nature of stratification).
Weinburg, Mark; "The Social Analysis of Three Early 19th century French liberals: Say, Comte, and Dunoyer" , Journal of Libertarian Studies, Vol. 2, No. 1, pp. 45–63, (1978).
Wood, Ellen Meiksins; The Retreat from Class: A New 'True' Socialism, (Schocken Books, 1986) and (Verso Classics, January 1999) reprint with new introduction.
Wood, Ellen Meiksins; "Labor, the State, and Class Struggle", Monthly Review, Vol. 49, No. 3, (1997).
Wouters, Cas; "The Integration of Social Classes". Journal of Social History, Volume 29, Issue 1 (1995), pp. 107+. (on social manners)
Wright, Erik Olin; The Debate on Classes (Verso, 1990). (neo-Marxist)
Wright, Erik Olin; Class Counts: Comparative Studies in Class Analysis (Cambridge University Press, 1997)
Wright, Erik Olin ed. Approaches to Class Analysis (2005). (scholarly articles)
Zmroczek, Christine & Mahony, Pat (Eds.), Women and Social Class: International Feminist Perspectives. (London: UCL Press 1999)
The lower your social class, the 'wiser' you are, suggests new study. Science. 20 December 2017.
External links
Domhoff, G. William, "The Class Domination Theory of Power", University of California, Santa Cruz
Graphic: How Class Works. New York Times, 2005.
Art history

Art history is, briefly, the history of art, or the study of specific types of objects created in the past.
Traditionally, the discipline of art history emphasized painting, drawing, sculpture, architecture, ceramics and decorative arts; yet today, art history examines broader aspects of visual culture, including the various visual and conceptual outcomes related to an ever-evolving definition of art. Art history encompasses the study of objects created by different cultures around the world and throughout history that convey meaning or importance, or serve a purpose, primarily through visual representations.
As a discipline, art history is distinguished from art criticism, which is concerned with establishing a relative artistic value for individual works with respect to others of comparable style or sanctioning an entire style or movement; and art theory or "philosophy of art", which is concerned with the fundamental nature of art. One branch of this area of study is aesthetics, which includes investigating the enigma of the sublime and determining the essence of beauty. Technically, art history is not these things, because the art historian uses historical method to answer the questions: How did the artist come to create the work?, Who were the patrons?, Who were their teachers?, Who was the audience?, Who were their disciples?, What historical forces shaped the artist's oeuvre and how did he or she and the creation, in turn, affect the course of artistic, political and social events? It is, however, questionable whether many questions of this kind can be answered satisfactorily without also considering basic questions about the nature of art. The current disciplinary gap between art history and the philosophy of art (aesthetics) often hinders this inquiry.
Methodologies
Art history is an interdisciplinary practice that analyzes the various factors—cultural, political, religious, economic or artistic—which contribute to visual appearance of a work of art.
Art historians employ a number of methods in their research into the ontology and history of objects.
Art historians often examine work in the context of its time. At best, this is done in a manner which respects its creator's motivations and imperatives; with consideration of the desires and prejudices of its patrons and sponsors; with a comparative analysis of themes and approaches of the creator's colleagues and teachers; and with consideration of iconography and symbolism. In short, this approach examines the work of art in the context of the world within which it was created.
Art historians also often examine work through an analysis of form; that is, the creator's use of line, shape, color, texture and composition. This approach examines how the artist uses a two-dimensional picture plane or the three dimensions of sculptural or architectural space to create their art. The way these individual elements are employed results in representational or non-representational art. Is the artist imitating an object or image found in nature? If so, the work is representational. The closer the work hews to perfect imitation, the more realistic it is. Is the artist not imitating, but instead relying on symbolism, or striving in an important way to capture nature's essence rather than copy it directly? If so, the work is non-representational, also called abstract. Realism and abstraction exist on a continuum. Impressionism is an example of a representational style that was not directly imitative, but strove to create an "impression" of nature. If the work is not representational and is an expression of the artist's feelings, longings and aspirations, or is a search for ideals of beauty and form, the work is non-representational or a work of expressionism.
An iconographical analysis is one which focuses on particular design elements of an object. Through a close reading of such elements, it is possible to trace their lineage, and with it draw conclusions regarding the origins and trajectory of these motifs. In turn, it is possible to make any number of observations regarding the social, cultural, economic and aesthetic values of those responsible for producing the object.
Many art historians use critical theory to frame their inquiries into objects. Theory is most often used when dealing with more recent objects, those from the late 19th century onward. Critical theory in art history is often borrowed from literary scholars and it involves the application of a non-artistic analytical framework to the study of art objects. Feminist, Marxist, critical race, queer and postcolonial theories are all well established in the discipline. As in literary studies, there is an interest among scholars in nature and the environment, but the direction that this will take in the discipline has yet to be determined.
Timeline of prominent methods
Pliny the Elder and ancient precedents
The earliest surviving writings on art that can be classified as art history are the passages in Pliny the Elder's Natural History (c. AD 77–79) concerning the development of Greek sculpture and painting. From them it is possible to trace the ideas of Xenokrates of Sicyon, a Greek sculptor who was perhaps the first art historian. Pliny's work, while mainly an encyclopaedia of the sciences, has thus been influential from the Renaissance onwards. (Passages about techniques used by the painter Apelles (c. 332–329 BC) have been especially well-known.) Similar, though independent, developments occurred in 6th-century China, where a canon of worthy artists was established by writers in the scholar-official class. These writers, being necessarily proficient in calligraphy, were artists themselves. The artists are described in the Six Principles of Painting formulated by Xie He.
Vasari and artists' biographies
While personal reminiscences of art and artists have long been written and read (see Lorenzo Ghiberti's Commentarii for the best early example), it was Giorgio Vasari, the Tuscan painter, sculptor and author of the Lives of the Most Excellent Painters, Sculptors, and Architects, who wrote the first true history of art. He emphasized art's progression and development, which was a milestone in this field. His was a personal and a historical account, featuring biographies of individual Italian artists, many of whom were his contemporaries and personal acquaintances. The most renowned of these was Michelangelo.
Vasari's ideas about art were enormously influential, and served as a model for many, including in the north of Europe Karel van Mander's Schilder-boeck and Joachim von Sandrart's Teutsche Akademie. Vasari's approach held sway until the 18th century, when criticism was leveled at his biographical account of history.
Winckelmann and art criticism
Scholars such as Johann Joachim Winckelmann (1717–1768) criticized Vasari's "cult" of artistic personality, and they argued that the real emphasis in the study of art should be the views of the learned beholder and not the viewpoint of the artist. Winckelmann's writings thus were the beginnings of art criticism. His two most notable works that introduced the concept of art criticism were Gedanken über die Nachahmung der griechischen Werke in der Malerei und Bildhauerkunst, published in 1755, shortly before he left for Rome (Fuseli published an English translation in 1765 under the title Reflections on the Painting and Sculpture of the Greeks), and Geschichte der Kunst des Alterthums (History of Art in Antiquity), published in 1764 (this is the first occurrence of the phrase 'history of art' in the title of a book). Winckelmann critiqued the artistic excesses of Baroque and Rococo forms, and was instrumental in reforming taste in favor of the more sober Neoclassicism. Jacob Burckhardt (1818–1897), one of the founders of art history, noted that Winckelmann was 'the first to distinguish between the periods of ancient art and to link the history of style with world history'. From Winckelmann until the mid-20th century, the field of art history was dominated by German-speaking academics. Winckelmann's work thus marked the entry of art history into the high-philosophical discourse of German culture.
Winckelmann was read avidly by Johann Wolfgang von Goethe and Friedrich Schiller, both of whom began to write on the history of art, and his account of the Laocoön group occasioned a response by Lessing. The emergence of art as a major subject of philosophical speculation was solidified by the appearance of Immanuel Kant's Critique of Judgment in 1790, and was furthered by Hegel's Lectures on Aesthetics. Hegel's philosophy served as the direct inspiration for Karl Schnaase's work. Schnaase's Niederländische Briefe established the theoretical foundations for art history as an autonomous discipline, and his Geschichte der bildenden Künste, one of the first historical surveys of the history of art from antiquity to the Renaissance, facilitated the teaching of art history in German-speaking universities. Schnaase's survey was published contemporaneously with a similar work by Franz Theodor Kugler.
Wölfflin and stylistic analysis
Heinrich Wölfflin (1864–1945), who studied under Burckhardt in Basel, is the "father" of modern art history. Wölfflin taught at the universities of Berlin, Basel, Munich, and Zurich. A number of his students went on to distinguished careers in art history, including Jakob Rosenberg. He introduced a scientific approach to the history of art, focusing on three concepts. Firstly, he attempted to study art using psychology, particularly by applying the work of Wilhelm Wundt. He argued, among other things, that art and architecture are good if they resemble the human body. For example, houses were good if their façades looked like faces. Secondly, he introduced the idea of studying art through comparison. By comparing individual paintings to each other, he was able to make distinctions of style. His book Renaissance and Baroque developed this idea, and was the first to show how these stylistic periods differed from one another. In contrast to Giorgio Vasari, Wölfflin was uninterested in the biographies of artists. In fact, he proposed the creation of an "art history without names." Finally, he studied art based on ideas of nationhood. He was particularly interested in whether there was an inherently "Italian" and an inherently "German" style. This last interest was most fully articulated in his monograph on the German artist Albrecht Dürer.
Riegl, Wickhoff, and the Vienna School
Contemporaneous with Wölfflin's career, a major school of art-historical thought developed at the University of Vienna. The first generation of the Vienna School was dominated by Alois Riegl and Franz Wickhoff, both students of Moritz Thausing, and was characterized by a tendency to reassess neglected or disparaged periods in the history of art. Riegl and Wickhoff both wrote extensively on the art of late antiquity, which before them had been considered as a period of decline from the classical ideal. Riegl also contributed to the revaluation of the Baroque.
The next generation of professors at Vienna included Max Dvořák, Julius von Schlosser, Hans Tietze, Karl Maria Swoboda, and Josef Strzygowski. A number of the most important twentieth-century art historians, including Ernst Gombrich, received their degrees at Vienna at this time. The term "Second Vienna School" (or "New Vienna School") usually refers to the following generation of Viennese scholars, including Hans Sedlmayr, Otto Pächt, and Guido Kaschnitz von Weinberg. These scholars began in the 1930s to return to the work of the first generation, particularly to Riegl and his concept of Kunstwollen, and attempted to develop it into a full-blown art-historical methodology. Sedlmayr, in particular, rejected the minute study of iconography, patronage, and other approaches grounded in historical context, preferring instead to concentrate on the aesthetic qualities of a work of art. As a result, the Second Vienna School gained a reputation for unrestrained and irresponsible formalism, and was furthermore colored by Sedlmayr's overt racism and membership in the Nazi party. This latter tendency was, however, by no means shared by all members of the school; Pächt, for example, was himself Jewish, and was forced to leave Vienna in the 1930s.
Panofsky and iconography
Our 21st-century understanding of the symbolic content of art comes from a group of scholars who gathered in Hamburg in the 1920s. The most prominent among them were Erwin Panofsky, Aby Warburg, Fritz Saxl and Gertrud Bing. Together they developed much of the vocabulary that continues to be used in the 21st century by art historians. "Iconography", with roots meaning "symbols from writing", refers to subject matter of art derived from written sources, especially scripture and mythology. "Iconology" is a broader term that refers to all symbolism, whether derived from a specific text or not. Today art historians sometimes use these terms interchangeably.
Panofsky, in his early work, also developed the theories of Riegl, but became eventually more preoccupied with iconography, and in particular with the transmission of themes related to classical antiquity in the Middle Ages and Renaissance. In this respect his interests coincided with those of Warburg, the son of a wealthy family who had assembled a library in Hamburg, devoted to the study of the classical tradition in later art and culture. Under Saxl's auspices, this library was developed into a research institute, affiliated with the University of Hamburg, where Panofsky taught.
Warburg died in 1929, and in the 1930s Saxl and Panofsky, both Jewish, were forced to leave Hamburg. Saxl settled in London, bringing Warburg's library with him and establishing the Warburg Institute. Panofsky settled in Princeton at the Institute for Advanced Study. In this respect they were part of an extraordinary influx of German art historians into the English-speaking academy in the 1930s. These scholars were largely responsible for establishing art history as a legitimate field of study in the English-speaking world, and the influence of Panofsky's methodology, in particular, determined the course of American art history for a generation.
Freud and psychoanalysis
Heinrich Wölfflin was not the only scholar to invoke psychological theories in the study of art. An unexpected turn in the history of art criticism came in 1910 when psychoanalyst Sigmund Freud published a book on the artist Leonardo da Vinci, in which he used Leonardo's paintings to interrogate the artist's psyche and sexual orientation. Freud inferred from his analysis that Leonardo was probably homosexual. In 1914 Freud published a psychoanalytical interpretation of Michelangelo's Moses (Der Moses des Michelangelo). He published this work shortly after reading Vasari's Lives. For unknown reasons, he originally published the article anonymously.
Though the use of posthumous material to perform psychoanalysis is controversial among art historians, especially since the sexual mores of Michelangelo's and Leonardo's time differed from those of Freud's, it is often attempted.
Jung and archetypes
Carl Jung also applied psychoanalytic theory to art. Jung was a Swiss psychiatrist, an influential thinker, and founder of analytical psychology. Jung's approach to psychology emphasized understanding the psyche through exploring the worlds of dreams, art, mythology, world religion and philosophy. Much of his life's work was spent exploring Eastern and Western philosophy, alchemy, astrology, sociology, as well as literature and the arts. His most notable contributions include his concept of the psychological archetype, the collective unconscious, and his theory of synchronicity. Jung believed that many experiences perceived as coincidence were not merely due to chance but, instead, suggested the manifestation of parallel events or circumstances reflecting this governing dynamic. He argued that a collective unconscious and archetypal imagery were detectable in art. His ideas were particularly popular among American Abstract expressionists in the 1940s and 1950s. His work inspired the surrealist concept of drawing imagery from dreams and the unconscious.
Jung emphasized the importance of balance and harmony. He cautioned that modern humans rely too heavily on science and logic and would benefit from integrating spirituality and appreciation of the unconscious realm. His work not only triggered analytical work by art historians but became an integral part of art-making. Jackson Pollock, for example, famously created a series of drawings to accompany his sessions with his Jungian analyst, Joseph Henderson. Henderson, who later published the drawings in a text devoted to Pollock's sessions, realized how powerful the drawings were as a therapeutic tool.
The legacy of psychoanalysis and analytical psychology in art history has been profound, and extends beyond Freud and Jung. The prominent feminist art historian Griselda Pollock, for example, draws upon psychoanalysis both in her reading of contemporary art and in her rereading of modernist art. Through Griselda Pollock's reading of French feminist psychoanalysis, in particular the writings of Julia Kristeva and Bracha L. Ettinger, as well as Rosalind Krauss's readings of Jacques Lacan and Jean-François Lyotard and Catherine de Zegher's curatorial rereading of art, feminist theory written in the fields of French feminism and psychoanalysis has strongly informed the reframing of both men and women artists in art history.
Marx and ideology
During the mid-20th century, art historians embraced social history by using critical approaches. The goal was to show how art interacts with power structures in society. One such critical approach was Marxism. Marxist art history attempted to show how art was tied to specific classes, how images contain information about the economy, and how images can make the status quo seem natural (ideology).
Marcel Duchamp and the Dada movement jump-started the anti-art style. German artists, upset by the First World War, wanted to create artworks that were nonconforming and aimed to destroy traditional art styles. These movements helped other artists to create pieces that were not viewed as traditional art; some of the styles that branched off from the anti-art movement are Neo-Dadaism, Surrealism, and Constructivism. These styles and artists refused to surrender to traditional ways of art, and this way of thinking resonated with revolutionary political movements such as the Russian Revolution and communist ideals.
Isaak Brodsky's 1932 painting Shock Workers from Dnieprostroi shows his political involvement in art; the work can be analysed to show the internal troubles Soviet Russia was experiencing at the time. Perhaps the best-known Marxist critic was Clement Greenberg, who came to prominence during the late 1930s with his essay "Avant-Garde and Kitsch". In the essay Greenberg claimed that the avant-garde arose in order to defend aesthetic standards from the decline of taste involved in consumer society, seeing kitsch and art as opposites. Greenberg further claimed that avant-garde and Modernist art was a means to resist the leveling of culture produced by capitalist propaganda. He appropriated the German word "kitsch" to describe this consumerism, although its connotations have since changed to a more affirmative notion of leftover materials of capitalist culture. Greenberg is now well known for examining and criticizing the formal properties of modern art.
Meyer Schapiro is one of the best-remembered Marxist art historians of the mid-20th century. After his graduation from Columbia University in 1924, he returned to his alma mater to teach Byzantine, Early Christian, and medieval art along with art-historical theory. Although he wrote about numerous time periods and themes in art, he is best remembered for his commentary on sculpture from the late Middle Ages and early Renaissance.
Arnold Hauser wrote the first Marxist survey of Western art, entitled The Social History of Art. He attempted to show how class consciousness was reflected in major art periods. The book was controversial when published in 1951 because of its generalizations about entire eras, an approach critics have called "vulgar Marxism".
Marxist art history was refined by scholars such as T. J. Clark, David Kunzle, Theodor W. Adorno, and Max Horkheimer. T. J. Clark was the first art historian writing from a Marxist perspective to abandon vulgar Marxism. He wrote Marxist art histories of several impressionist and realist artists, including Gustave Courbet and Édouard Manet. These books focused closely on the political and economic climates in which the art was created.
Feminist art history
Linda Nochlin's essay "Why Have There Been No Great Women Artists?" helped to ignite feminist art history during the 1970s and remains one of the most widely read essays about female artists. This was then followed by a 1972 College Art Association Panel, chaired by Nochlin, entitled "Eroticism and the Image of Woman in Nineteenth-Century Art". Within a decade, scores of papers, articles, and essays sustained a growing momentum, fueled by the Second-wave feminist movement, of critical discourse surrounding women's interactions with the arts as both artists and subjects. In her pioneering essay, Nochlin applies a feminist critical framework to show systematic exclusion of women from art training, arguing that exclusion from practicing art as well as the canonical history of art was the consequence of cultural conditions which curtailed and restricted women from art producing fields. The few who did succeed were treated as anomalies and did not provide a model for subsequent success. Griselda Pollock is another prominent feminist art historian, whose use of psychoanalytic theory is described above.
While feminist art history can focus on any time period and location, much attention has been given to the Modern era. Some of this scholarship centers on the feminist art movement, which referred specifically to the experience of women. Often, feminist art history offers a critical "re-reading" of the Western art canon, such as Carol Duncan's re-interpretation of Les Demoiselles d'Avignon. Two pioneers of the field are Mary Garrard and Norma Broude. Their anthologies Feminism and Art History: Questioning the Litany, The Expanding Discourse: Feminism and Art History, and Reclaiming Feminist Agency: Feminist Art History After Postmodernism are substantial efforts to bring feminist perspectives into the discourse of art history. The pair also co-founded the Feminist Art History Conference.
Barthes and semiotics
As opposed to iconography which seeks to identify meaning, semiotics is concerned with how meaning is created. Roland Barthes's connoted and denoted meanings are paramount to this examination. In any particular work of art, an interpretation depends on the identification of denoted meaning—the recognition of a visual sign, and the connoted meaning—the instant cultural associations that come with recognition. The main concern of the semiotic art historian is to come up with ways to navigate and interpret connoted meaning.
Semiotic art history seeks to uncover the codified meaning or meanings in an aesthetic object by examining its connectedness to a collective consciousness. Art historians do not commonly commit to any one particular brand of semiotics but rather construct an amalgamated version which they incorporate into their collection of analytical tools. For example, Meyer Schapiro borrowed Saussure's differential meaning in an effort to read signs as they exist within a system. According to Schapiro, to understand the meaning of frontality in a specific pictorial context, it must be differentiated from, or viewed in relation to, alternate possibilities such as a profile, or a three-quarter view. Schapiro combined this method with the work of Charles Sanders Peirce, whose object, sign, and interpretant provided a structure for his approach. Alex Potts demonstrates the application of Peirce's concepts to visual representation by examining them in relation to the Mona Lisa. To see the Mona Lisa, for example, as something beyond its materiality is to identify it as a sign. It is then recognized as referring to an object outside of itself: a woman, or Mona Lisa. The image does not seem to denote religious meaning and can therefore be assumed to be a portrait. This interpretation leads to a chain of possible interpretations: who was the sitter in relation to Leonardo da Vinci? What significance did she have to him? Or, maybe she is an icon for all of womankind. This chain of interpretation, or "unlimited semiosis", is endless; the art historian's job is to place boundaries on possible interpretations as much as it is to reveal new possibilities.
Semiotics operates under the theory that an image can only be understood from the viewer's perspective. The artist is supplanted by the viewer as the purveyor of meaning, even to the extent that an interpretation is still valid regardless of whether the creator had intended it. Rosalind Krauss espoused this concept in her essay "In the Name of Picasso." She denounced the artist's monopoly on meaning and insisted that meaning can only be derived after the work has been removed from its historical and social context. Mieke Bal argued similarly that meaning does not even exist until the image is observed by the viewer. It is only after acknowledging this that meaning can become opened up to other possibilities such as feminism or psychoanalysis.
Museum studies and collecting
Aspects of the subject which have come to the fore in recent decades include interest in the patronage and consumption of art, including the economics of the art market, the role of collectors, the intentions and aspirations of those commissioning works, and the reactions of contemporary and later viewers and owners. Museum studies, including the history of museum collecting and display, is now a specialized field of study, as is the history of collecting.
New materialism
Scientific advances have made possible much more accurate investigation of the materials and techniques used to create works, especially infra-red and x-ray photographic techniques which have allowed many underdrawings of paintings to be seen again, including figures that had been removed from the piece. Proper analysis of pigments used in paint is now possible, which has upset many attributions. Dendrochronology for panel paintings and radio-carbon dating for old objects in organic materials have allowed scientific methods of dating objects to confirm or upset dates derived from stylistic analysis or documentary evidence. The development of good color photography, now held digitally and available on the internet or by other means, has transformed the study of many types of art, especially those covering objects existing in large numbers which are widely dispersed among collections, such as illuminated manuscripts and Persian miniatures, and many types of archaeological artworks.
Concurrent to those technological advances, art historians have shown increasing interest in new theoretical approaches to the nature of artworks as objects. Thing theory, actor–network theory, and object-oriented ontology have played an increasing role in art historical literature.
Nationalist art history
The making of art, the academic history of art, and the history of art museums are closely intertwined with the rise of nationalism. Art created in the modern era, in fact, has often been an attempt to generate feelings of national superiority or love of one's country. Russian art is an especially good example of this, as the Russian avant-garde and later Soviet art were attempts to define that country's identity.
Napoleon Bonaparte was also well known for commissioning works that emphasized the strength of France with him as ruler.
Western Romanticism provided a new appreciation for one's home country, or new home country. Caspar David Friedrich's Monk by the Sea (1808 or 1810) sets a sublime scene representing the overwhelming beauty and strength of the German shoreline at the Baltic Sea. In the infancy of the American colonies, the people believed it was their destiny to explore the Western, "untamed", wilderness. Artists trained in the Hudson River School in New York took on the task of presenting the unknown land as both picturesque and sublime.
Most art historians working today identify their specialty as the art of a particular culture, time period, or movement, such as 19th-century German art or contemporary Chinese art. A focus on nationhood has deep roots in the discipline. Indeed, Vasari's Lives of the Most Excellent Painters, Sculptors, and Architects is an attempt to show the superiority of Florentine artistic culture, and Heinrich Wölfflin's writings (especially his monograph on Albrecht Dürer) attempt to distinguish Italian from German styles of art.
Many of the largest and most well-funded art museums of the world, such as the Louvre, the Victoria and Albert Museum, and the National Gallery of Art in Washington are state-owned. Most countries, indeed, have a national gallery, with an explicit mission of preserving the cultural patrimony owned by the government—regardless of what cultures created the art—and an often implicit mission to bolster that country's own cultural heritage. The National Gallery of Art thus showcases art made in the United States, but also owns objects from across the world.
Divisions by period
The discipline of art history is traditionally divided into specializations or concentrations based on eras and regions, with further sub-division based on media.
Western art, for example, can be divided into the following periods: Ancient Classical, Medieval, Renaissance, Baroque, Neoclassical, Romanticism, Modern, and Contemporary.
Professional organizations
In the United States, the most important art history organization is the College Art Association. It organizes an annual conference and publishes the Art Bulletin and Art Journal. Similar organizations exist in other parts of the world, as well as for specializations such as architectural history and Renaissance art history. In the UK, for example, the Association of Art Historians is the premier organization, and it publishes a journal titled Art History.
See also
Bildwissenschaft
Dictionary of Art Historians, a database of notable art historians maintained by Duke University
Fine art
Rock art
Theosophy and visual arts
Notes and references
Sources
Further reading
Listed by date
Wölfflin, H. (1915, trans. 1932). Principles of Art History: The Problem of the Development of Style in Later Art. [New York]: Dover Publications.
Hauser, A. (1959). The philosophy of art history. New York: Knopf.
Arntzen, E., & Rainwater, R. (1980). Guide to the literature of art history. Chicago: American Library Association.
Holly, M. A. (1984). Panofsky and the foundations of art history. Ithaca, New York: Cornell University Press.
Johnson, W. M. (1988). Art history: its use and abuse. Toronto: University of Toronto Press.
Carrier, D. (1991). Principles of art history writing. University Park, Pa: Pennsylvania State University Press.
Kemal, Salim, and Ivan Gaskell (1991). The Language of Art History. Cambridge University Press.
Fitzpatrick, Virginia L. N. V. D. (1992). Art History: A Contextual Inquiry Course. Point of view series. Reston, Virginia: National Art Education Association.
Minor, Vernon Hyde. (1994). Critical Theory of Art History. Englewood Cliffs, New Jersey: Prentice Hall.
Adams, L. (1996). The methodologies of art: an introduction. New York: IconEditions.
Frazier, N. (1999). The Penguin concise dictionary of art history. New York: Penguin Reference.
Pollock, G., (1999). Differencing the Canon. Routledge.
Harrison, Charles, Paul Wood, and Jason Gaiger. (2000). Art in Theory 1648–1815: An Anthology of Changing Ideas. Malden, MA: Blackwell.
Minor, Vernon Hyde. (2001). Art history's history. 2nd ed. Upper Saddle River, New Jersey: Prentice Hall.
Robinson, Hilary. (2001). Feminism – Art – Theory: An Anthology, 1968–2000. Malden, Massachusetts: Blackwell.
Clark, T. J. (2001). Farewell to an Idea: Episodes from a History of Modernism. New Haven: Yale University Press.
Buchloh, Benjamin. (2001). Neo-Avantgarde and Culture Industry. Cambridge, Massachusetts: MIT Press.
Mansfield, Elizabeth (2002). Art History and Its Institutions: Foundations of a Discipline. Routledge.
Murray, Chris. (2003). Key Writers on Art. 2 vols, Routledge Key Guides. London: Routledge.
Harrison, Charles, and Paul Wood. (2003). Art in Theory, 1900–2000: An Anthology of Changing Ideas. 2nd ed. Malden, Massachusetts: Blackwell.
Shiner, Larry. (2003). The Invention of Art: A Cultural History. Chicago: University of Chicago Press.
Pollock, Griselda (ed.) (2006). Psychoanalysis and the Image. Oxford: Blackwell.
Emison, Patricia (2008). The Shaping of Art History. University Park: Pennsylvania State University Press.
Charlene Spretnak (2014), The Spiritual Dynamic in Modern Art: Art History Reconsidered, 1800 to the Present.
Gauvin Alexander Bailey (2014) The Spiritual Rococo: Décor and Divinity from the Salons of Paris to the Missions of Patagonia. Farnham: Ashgate.
Kleiner, F. S. (2018). Gardner's Art Through the Ages: A Global History. 16th edition. Boston: Cengage Learning.
John-Paul Stonard (2021) Creation. Art Since the Beginning. London and New York: Bloomsbury
External links
Art History Resources on the Web, in-depth directory of web links, divided by period
Ancient Society

Ancient Society is an 1877 book by the American anthropologist Lewis H. Morgan. Building on the data about kinship and social organization presented in his 1871 Systems of Consanguinity and Affinity of the Human Family, Morgan develops his theory of the three stages of human progress, i.e., from Savagery through Barbarism to Civilization. Contemporary European social theorists such as Karl Marx and Friedrich Engels were influenced by Morgan's work on social structure and material culture, as shown by Engels' The Origin of the Family, Private Property, and the State (1884).
The concept of progress
The dominant idea of Morgan's thought is that of progress. He conceived it as a career of social states arranged in a scale on which man has worked his way up from the bottom. Progress is historically true of the entire human family, but not uniformly. Different branches of the family have evidenced human advancement to different conditions. He thought the scale had universal application or substantially the same in kind, with deviations from uniformity ... produced by special causes. Morgan hopes therefore to discern the principal stages of human development.
Morgan arrived at the idea of a society's progress in part through analogy to individual development. It is an ascent to human supremacy on the earth. The prime analogate is an individual working his way up in society. Morgan, who was well read in the classics, relies on the Roman cursus honorum, the rise through the ranks which became the basis of the English ideas of a career and of working one's way up; to this he blends in the rationalist idea of a scala, or ladder, of life. The idea of growth or development is also borrowed from individuals. He proposed that a society has a life like that of an individual, which develops and grows.
He gives the analogy an anthropological twist and introduces the comparative method coming into vogue in other disciplines. Morgan names units called ethna, by which he means inventions, discoveries and domestic institutions. The ethna are compared and judged higher or lower on the scale, pair by pair. Morgan's ethna appear to comprise at least some of Edward Burnett Tylor's cultural objects. Morgan mentions Tylor a number of times in the book.
Morgan's standard of higher or lower is not clearly expressed. By higher he appears to mean whatever contributes better to control over the environment, victory over competitors, and spread of population. He does not mention Charles Darwin's theory of evolution, but Darwin referred to Morgan's work in his own.
The lines of progress
The substitution of better ethna for previous ones follows several lines of progress. Morgan admits to a deficit in knowledge of language development, which he does not think important. The little knowledge he shares can be found in Chapter 3. His brief scheme is in fact speculative only. Many Sino-Tibetan and Tai–Kadai languages, which may appear to non-speakers to be "monosyllabic", use tone to distinguish morphemes: one syllable spoken in different tones has different meanings. No language today is considered more primitive than any other. The early stages of language are totally unknown and must have disappeared in remote prehistory. Gestural language is still considered the original form of symbolic communication.
The ethnical periods
Morgan rejects the Ages of Stone, of Bronze, of Iron, the Three-Age System of pre-history, as being insufficient characterizations of progress. This theory had been explicated by J. J. A. Worsaae in his The Primeval Antiquities of Denmark, published in English in 1849. Worsaae had built his work on the foundation of evidence-based chronology laid by Christian Jürgensen Thomsen, whose Guideline to Scandinavian Antiquity (Ledetraad til Nordisk Oldkyndighed, 1836) was not published in English until 1848. The two works were highly influential among researchers in Great Britain and North America.
Morgan believed the prehistoric stages as defined by the Danish were difficult to distinguish, as they overlapped and refer only to material types of implements or tools. In addition, Morgan thought they did not fit the evidence he was finding among Native American societies in North America, in which he had closely studied social structure as an indicator of stages of civilization. Since Morgan, the European three-age system has prevailed in anthropology and archeology, but the age characteristics have been enlarged to include many of the additional factors which Morgan described. Morgan's Savagery and Barbarism are roughly equivalent to Braidwood's food gathering and food production.
Based on the lines of progress, he distinguishes ethnical periods, which each have a distinct culture and a particular mode of life and do not overlap in a region. He does admit to exceptions and a difficulty of determining precise borders between periods. Scientific archaeology was being developed at this time; Morgan did not have the techniques of stratigraphy or scientific dating available, but based his arguments on linguistic and historical speculation.
Chronological dating
Christian Jürgensen Thomsen and J. J. A. Worsaae are credited with the foundation of scientific archaeology, as they worked to have controlled excavations in which artifacts could be evaluated by what was found together: the beginning of stratigraphy. This evidence-based system was the start of chronological dating in archeology.
From savagery to civilization
John Wesley Powell credited Ancient Society as "the most noteworthy attempt hitherto made to distinguish and define culture-stages". Powell theorized that savages advanced into civilization with the help of racial and cultural mixing. Therefore, Powell reasoned, civilized people could help savages by mixing blood rather than spilling blood. Powell also contended that "human evolution has none of the characteristics of animal evolution". He opposed the survival-of-the-fittest theory because, in his mind, humans did not advance their living conditions in order to succeed in the struggle for existence. Instead, he mused that the "human endeavor to secure happiness" was the driving force of civilization.
References
External links
Full text of Ancient Society
Nordic model

The Nordic model comprises the economic and social policies as well as typical cultural practices common in the Nordic countries (Denmark, Finland, Iceland, Norway, and Sweden). This includes a comprehensive welfare state and multi-level collective bargaining based on the economic foundations of social corporatism, and a commitment to private ownership within a market-based mixed economy, with Norway being a partial exception due to a large number of state-owned enterprises and state ownership in publicly listed firms.
Although there are significant differences among the Nordic countries, they all share some common traits. The three Scandinavian countries are constitutional monarchies, while Finland and Iceland have been republics since the 20th century. All the Nordic countries are, however, described as highly democratic, and all have a unicameral legislature and use proportional representation in their electoral systems. They all support a universalist welfare state aimed specifically at enhancing individual autonomy and promoting social mobility; a sizable percentage of the population is employed by the public sector (roughly 30% of the work force, in areas such as healthcare, education, and government); and a corporatist system involves a tripartite arrangement in which representatives of labour and employers negotiate wages and labour market policy is mediated by the government, with a high percentage of the workforce unionized. As of 2020, all of the Nordic countries rank highly on the inequality-adjusted HDI and the Global Peace Index, as well as being ranked in the top 10 on the World Happiness Report.
The Nordic model was originally developed in the 1930s under the leadership of social democrats, although centrist and right-wing political parties, as well as labour unions, also contributed to the Nordic model's development. The Nordic model began to gain attention after World War II and has transformed in some ways over the last few decades, including increased deregulation and expanding privatization of public services. However, it is still distinguished from other models by the strong emphasis on public services and social investment.
Overview and aspects
The Nordic model has been characterized as follows:
An elaborate social safety net, in addition to public services such as free education and universal healthcare in a largely tax-funded system.
Strong property rights, contract enforcement and overall ease of doing business.
Public pension plans.
High levels of democracy as seen in the Freedom in the World survey and Democracy Index.
Free trade combined with collective risk sharing (welfare social programmes and labour market institutions) which has provided a form of protection against the risks associated with economic openness.
Little product market regulation. Nordic countries rank very high in product market freedom according to OECD rankings.
Low levels of corruption. In Transparency International's 2022 Corruption Perceptions Index, Denmark, Finland, Norway and Sweden were ranked among the top 10 least corrupt of the 180 countries evaluated.
A partnership between employers, trade unions and the government, whereby these social partners negotiate the terms regulating the workplace among themselves, rather than having the terms imposed by law. Sweden has decentralised wage co-ordination, while Finland is ranked the least flexible. Changing economic conditions have given rise to fear among workers, as well as resistance by trade unions, with regard to reforms.
High trade union density and collective bargaining coverage. In 2019, trade union density was 90.7% in Iceland, 67.0% in Denmark, 65.2% in Sweden, 58.8% in Finland, and 50.4% in Norway; in comparison, trade union density was 16.3% in Germany and 9.9% in the United States. Additionally, in 2018, collective bargaining coverage was 90% in Iceland, 88.8% in Finland (2017), 88% in Sweden, 82% in Denmark, and 69% in Norway; in comparison collective bargaining coverage was 54% in Germany and 11.7% in the United States. The lower union density in Norway is mainly explained by the absence of a Ghent system since 1938. In contrast, Denmark, Finland and Sweden all have union-run unemployment funds.
The Nordic countries received the highest ranking for protecting workers rights on the International Trade Union Confederation 2014 Global Rights Index, with Denmark being the only nation to receive a perfect score.
Very high public spending, with Sweden at 56.6% of GDP, Denmark at 51.7%, and Finland at 48.6%. Public expenditure for health and education is significantly higher in Denmark, Norway, and Sweden in comparison to the OECD average.
Overall tax burdens as a percentage of GDP are high, with Denmark at 45.9% and both Finland and Sweden at 44.1%. The Nordic countries have relatively flat tax rates, meaning that even those with medium and low incomes are taxed at relatively high levels.
The United Nations World Happiness Reports show that the happiest nations are concentrated in Northern Europe. The Nordics ranked highest on the metrics of real GDP per capita, healthy life expectancy, having someone to count on, perceived freedom to make life choices, generosity and freedom from corruption. The Nordic countries place in the top 10 of the World Happiness Report 2018, with Finland and Norway taking the top spots.
Economic system
The Nordic model is underpinned by a mixed-market capitalist economic system that features high degrees of private ownership, with the exception of Norway which includes a large number of state-owned enterprises and state ownership in publicly listed firms.
The Nordic model is described as a system of competitive capitalism combined with a large percentage of the population employed by the public sector, which amounts to roughly 30% of the work force, in areas such as healthcare and higher education. In Norway, Finland, and Sweden, many companies and industries are state-run or state-owned, including utilities, mail, rail transport, airlines, electrical power, fossil fuels, chemicals, steel, electronics, machinery, aerospace, shipbuilding, and arms. In 2013, The Economist described the Nordic countries as "stout free-traders who resist the temptation to intervene even to protect iconic companies", while also looking for ways to temper capitalism's harsher effects, and declared that they "are probably the best-governed in the world". Some economists have referred to the Nordic economic model as a form of "cuddly capitalism", with low levels of inequality, generous welfare states, and reduced concentration of top incomes, contrasting it with the more "cut-throat capitalism" of the United States, which has high levels of inequality and a larger concentration of top incomes, among other social inequalities.
As a result of the Swedish financial crisis of 1990–1994, Sweden implemented economic reforms focused on deregulation and the strengthening of competition laws. Despite this, Sweden still has the highest government spending-to-GDP ratio of all the Nordic countries. Unlike Denmark and Iceland, it retains national-level sectoral bargaining, with over 650 national-level bargaining agreements; unlike Norway and Iceland, it retains the Ghent system, and consequently has the second-highest rate of unionization in the world. Despite being one of the most equal OECD nations, from 1985 to the 2010s Sweden saw the largest growth in income inequality among OECD economies. Another effect of the 1990s reforms was the substantial growth of mutual fund savings, which largely began with the government subsidizing mutual fund savings through the so-called Allemansfonder program in the 1980s; today 4 out of 5 people aged 18–74 have fund savings.
Norway's particularities
The state of Norway has ownership stakes in many of the country's largest publicly listed companies, owning 37% of the Oslo stock market and operating the country's largest non-listed companies, including Equinor and Statkraft. In January 2013, The Economist reported that "after the second world war the government nationalised all German business interests in Norway and ended up owning 44% of Norsk Hydro's shares. The formula of controlling business through shares rather than regulation seemed to work well, so the government used it wherever possible. 'We invented the Chinese way of doing things before the Chinese', says Torger Reve of the Norwegian Business School." The government also operates a sovereign wealth fund, the Government Pension Fund of Norway, whose partial objective is to prepare Norway for a post-oil future but "unusually among oil-producing nations, it is also a big advocate of human rights – and a powerful one, thanks to its control of the Nobel peace prize."
Norway is the only major economy in northern Europe where younger generations are getting richer, with a 13% increase in disposable income in 2018, bucking the trend seen in other northern European nations of Millennials becoming poorer than the generations that came before.
Social democracy
Social democrats have played a pivotal role in shaping the Nordic model, with policies enacted by social democrats being central to fostering the social cohesion of the Nordic countries. Among political scientists and sociologists, the term social democracy has become widespread to describe the Nordic model, due to the influence of social democratic party governance in Sweden and Norway, in contrast to other classifications such as liberal or Christian democratic. According to sociologist Lane Kenworthy, the meaning of social democracy in this context refers to a variant of capitalism based on the predominance of private property and market allocation mechanisms, alongside a set of policies for promoting economic security and opportunity within the framework of a capitalist economy, as opposed to a political ideology that aims to replace capitalism.
While many countries have been categorized as social democratic, the Nordic countries have been the only ones to be consistently categorized as such. In a review by Emanuele Ferragina and Martin Seeleib-Kaiser of works about the different models of welfare states, apart from Belgium and the Netherlands, categorized as "medium-high socialism", the Scandinavian countries analyzed (Denmark, Norway, and Sweden) were the only ones categorized by sociologist Gøsta Esping-Andersen as "high socialism", defined as combining socialist attributes and values (equality and universalism) with the social democratic model, which is characterized by "a high level of decommodification and a low degree of stratification. Social policies are perceived as 'politics against the market.'" They summarized the social democratic model as being based on "the principle of universalism, granting access to benefits and services based on citizenship. Such a welfare state is said to provide a relatively high degree of autonomy, limiting the reliance on family and market."
According to Johan Strang, since the 1990s politicians, researchers and the media have shifted to explaining the Nordic model with cultural rather than political factors. These cultural explanations benefited neoliberalism, whose rise coincided with this shift. By the 2010s, however, politics was re-entering the conversation on the Nordic model.
Lutheran influence
Some academics have theorized that Lutheranism, the dominant traditional religion of the Nordic countries, had an effect on the development of social democracy there. Schröder posits that Lutheranism promoted the idea of a nationwide community of believers and led to increased state involvement in economic and social life, allowing for nationwide welfare solidarity and economic co-ordination. Esa Mangeloja says that the revival movements helped to pave the way for the modern Finnish welfare state; during that process, the church lost some of its most important social responsibilities (health care, education, and social work) as these tasks were assumed by the secular Finnish state. Pauli Kettunen presents the Nordic model as the outcome of a mythical "Lutheran peasant enlightenment", portraying it as the product of a sort of "secularized Lutheranism"; however, mainstream academic discourse on the subject focuses on "historical specificity", with the centralized structure of the Lutheran church being but one aspect of the cultural values and state structures that led to the development of the welfare state in Scandinavia.
Labour market policy
The Nordic countries share active labour market policies as part of a social corporatist economic model intended to reduce conflict between labour and the interests of capital. This corporatist system is most extensive in Norway and Sweden, where employer federations and labour representatives bargain at the national level mediated by the government. Labour market interventions are aimed at providing job retraining and relocation.
The Nordic labour market is flexible, with laws making it easy for employers to hire and shed workers or introduce labour-saving technology. To mitigate the negative effects on workers, government labour market policies are designed to provide generous social welfare, job retraining and relocation services, limiting any conflicts between capital and labour that might arise from this process.
Nordic welfare model
The Nordic welfare model refers to the welfare policies of the Nordic countries, which also tie into their labour market policies. The Nordic model of welfare is distinguished from other types of welfare states by its emphasis on maximising labour force participation, promoting gender equality, egalitarian and extensive benefit levels, the large magnitude of income redistribution, and liberal use of expansionary fiscal policy.
While there are differences among the Nordic countries, they all share a broad commitment to social cohesion, a universal character of welfare provision that safeguards individualism by protecting vulnerable individuals and groups in society, and the maximisation of public participation in social decision-making. The model is characterized by flexibility and openness to innovation in the provision of welfare. The Nordic welfare systems are mainly funded through taxation.
Despite the common values, the Nordic countries take different approaches to the practical administration of the welfare state. Denmark features a high degree of private sector provision of public services and welfare, alongside an assimilationist immigration policy. Iceland's welfare model is based on a "welfare-to-work" (see workfare) model, while in Finland the voluntary sector plays a significant role in providing care for the elderly. Norway relies most extensively on public provision of welfare.
Gender equality
When it comes to gender equality, the Nordic countries hold one of the smallest gaps in gender employment inequality of all OECD countries, with less than 8 points in all Nordic countries according to International Labour Organization standards. They have been at the forefront of implementing policies that promote gender equality; the Scandinavian governments were some of the first to make it unlawful for companies to dismiss women on grounds of marriage or motherhood. Mothers in Nordic countries are more likely to be working mothers than in any other region, and families enjoy pioneering legislation on parental leave policies that compensate parents, including fathers, for moving from work to home to care for their child. Although the specifics of workplace gender equality policies vary from country to country, there is a widespread focus in Nordic countries on "continuous full-time employment" for both men and women, as well as for single parents, in full recognition that some of the most salient gender gaps arise from parenthood. Aside from receiving incentives to take shareable parental leave, Nordic families benefit from subsidized early childhood education and care, and from out-of-school-hours activities for children enrolled in full-time education.
The Nordic countries have been at the forefront of championing gender equality, as shown historically by substantial increases in women's employment. Between 1965 and 1990, Sweden's employment rate for working-age women (15–64) went from 52.8% to 81.0%. In 2016, nearly three out of every four working-age women in the Nordic countries were taking part in paid work. Nevertheless, women are still the main users of shareable parental leave (fathers use less than 30% of their paid parental-leave days), foreign women are under-represented, and Finland still has a notable gender pay gap; the average woman's salary is 83% of that of a man, not accounting for confounding factors such as career choice.
Poverty reduction
The Nordic model has been successful at significantly reducing poverty. In 2011, poverty rates before taking into account the effects of taxes and transfers stood at 24.7% in Denmark, 31.9% in Finland, 21.6% in Iceland, 25.6% in Norway, and 26.5% in Sweden. After accounting for taxes and transfers, the poverty rates for the same year became 6%, 7.5%, 5.7%, 7.7% and 9.7% respectively, for an average reduction of 18.7 p.p. Compared to the United States, which has a pre-tax poverty level of 28.3% and a post-tax level of 17.4%, for a reduction of 10.9 p.p., the effects of taxes and transfers on poverty in all the Nordic countries are substantially bigger. However, compared to France (27 p.p. reduction) and Germany (24.2 p.p. reduction), the average reduction achieved by taxes and transfers in the Nordic countries is smaller.
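The percentage-point figures above are simple differences between the pre- and post-transfer rates. As an illustration, a minimal Python sketch, using only the 2011 figures quoted in this paragraph (the variable names are illustrative, not from any cited study), reproduces the arithmetic:

    # Poverty rates (%) before and after taxes and transfers, 2011 figures quoted above.
    rates = {
        "Denmark": (24.7, 6.0),
        "Finland": (31.9, 7.5),
        "Iceland": (21.6, 5.7),
        "Norway": (25.6, 7.7),
        "Sweden": (26.5, 9.7),
        "United States": (28.3, 17.4),
    }

    # Reduction in percentage points (p.p.) achieved by taxes and transfers.
    for country, (pre, post) in rates.items():
        print(f"{country}: {pre - post:.1f} p.p. reduction")

    # Average reduction across the five Nordic countries -> 18.7 p.p.
    nordic = [pre - post for c, (pre, post) in rates.items() if c != "United States"]
    print(f"Nordic average: {sum(nordic) / len(nordic):.1f} p.p.")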
History
The term 'peasant republic' is sometimes applied to certain communities in Scandinavia during the Viking Age and High Middle Ages, especially in Sweden, where royal power seems initially to have been somewhat weak, and in areas of modern-day Sweden not yet under the rule of the Swedish king, as well as in Iceland, where the Icelandic Commonwealth serves as an example of an unusually large and sophisticated peasant republic building on the same democratic traditions. Some historians have also argued that Gotland was a peasant republic before the attack by the Danes in 1361. Central to the old Scandinavian democratic traditions were the assemblies called the Thing or Moot.
The Nordic model traces its foundation to the "grand compromise" between workers and employers spearheaded by farmer and worker parties in the 1930s. Following a long period of economic crisis and class struggle, the "grand compromise" served as the foundation for the post-World War II Nordic model of welfare and labour market organization. The key characteristics of the Nordic model were the centralized coordination of wage negotiation between employers and labour organizations, termed a social partnership, as well as providing a peaceful means to address class conflict between capital and labour.
Magnus Bergli Rasmussen has challenged the view that farmers played an important role in ushering in the Nordic welfare states. His 2022 study found that farmers had strong incentives to resist welfare state expansion and that farmer MPs consistently opposed generous welfare policies.
Although often linked to social democratic governance, the Nordic model's parentage also stems from a mixture of mainly social democratic, centrist, and right-wing political parties, especially in Finland and Iceland, along with the social trust that emerged from the "great compromise" between capital and labour. The influence of each of these factors on each Nordic country varied as social democratic parties played a larger role in the formation of the Nordic model in Sweden and Norway, whereas in Iceland and Finland, right-wing political parties played a much more significant role in shaping their countries' social models. However, even in Iceland and Finland, strong labour unions contributed to the development of universal welfare.
Social security and collective wage bargaining policies were rolled back following economic imbalances in the 1980s and the financial crises of the 1990s, which led to more restrictive budgetary policies that were most pronounced in Sweden and Iceland. Nonetheless, welfare expenditure remained high in these countries compared to the European average.
Denmark
Social welfare reforms emerged from the Kanslergade Agreement of 1933 as part of a compromise package to save the Danish economy. Denmark was the first Nordic country to join the European Union in the 1970s, reflecting the different political approaches to it among the Nordic countries.
Finland
The early 1990s recession affected the Nordic countries and caused a deep crisis in Finland, coming amid the dissolution of the Soviet Union and the collapse of trade with the Eastern Bloc. As in Sweden, Finland's universalistic welfare state based on the Nordic model was weakened and no longer rested on the social-democratic middle ground, with several social welfare policies permanently dismantled; Finland, however, was hit even harder than Sweden. During the crisis, Finland looked to the European Union, which it was more committed and open to joining than Sweden, and especially Norway, while Denmark had already joined in the 1970s. Finland is, to date, the only Nordic country to have become a Eurozone member state, fully adopting the euro as its official currency in 2002.
Iceland
According to analyst Harpa Njálsdóttir, Iceland in the late 2010s moved away from the Nordic model towards the economic liberal model of workfare. She also noted that with the large changes having been made to the social security system, "70% of elderly people now live well below national subsistence criteria, while about 70% of those who live alone and in bad conditions are women." Despite this, as of 2021, Iceland has the lowest poverty rate in the OECD of only 4.9%.
Norway
Norway's "grand compromise" emerged as a response to the crisis of the early 1930s between the trade union confederation and Norwegian Employers' Association, agreeing on national standards in labour–capital relations and creating the foundation for social harmony throughout the period of compromises. For a period between the 1980s and the 1990s, Norway underwent more neoliberal reforms and marketization than Sweden during the same time frame, while still holding to the traditional foundations of the "social democratic compromise" that was specific to Western capitalism from 1945 to 1973.
Norway was the Nordic country least willing to join the European Union. While Finland and Sweden suffered greatly from the 1990s recession, Norway began to earn substantial revenue from its oil. As of 2007, the Norwegian state maintained large ownership positions in key industrial sectors, among them petroleum, natural gas, minerals, lumber, seafood and fresh water. The petroleum industry accounts for around a quarter of the country's gross domestic product.
Sweden
In Sweden, the grand compromise was pushed forward by the Saltsjöbaden Agreement, signed by employer and trade union associations at the seaside retreat of Saltsjöbaden in 1938. This agreement provided the foundation for Scandinavian industrial relations throughout Europe's Golden Age of Capitalism. The Swedish model of capitalism developed under the auspices of the Swedish Social Democratic Party, which assumed power in 1932 and retained uninterrupted power until 1976. Initially differing very little from other industrialized capitalist countries, the state's role in providing comprehensive welfare and infrastructure expanded after the Second World War, reaching a broadly social democratic consensus in the 1950s that became known as the social liberal paradigm, followed by the neoliberal paradigm in the 1980s and 1990s. According to Phillip O'Hara, "Sweden eventually became part of the Great Capitalist Restoration of the 1980s and 1990s. In all the industrial democracies and beyond, this recent era has seen the retrenchment of the welfare state by reduced social spending in real terms, tax cuts, deregulation and privatization, and a weakening of the influence of organized labor."
In the 1950s, Olof Palme and the prime minister Tage Erlander formulated the basis of Swedish social democracy and what would become known as the "Swedish model", drawing inspiration from the reformist socialism of party founder Hjalmar Branting, who stated that socialism "would not be created by brutalized...slaves [but by] the best positioned workers, those who have gradually obtained a normal workday, protective legislation, minimum wages." Arguing against those to their left, the party favored moderation and sought to help workers in the here and now, following the Fabian argument that its policies were steps on the road to socialism, which would come about not through violent revolution but through the social corporative model of welfare capitalism. This model was seen as progressive in providing institutional legitimacy to the labour movement, recognizing the class conflict between the bourgeoisie and the proletariat while framing a class compromise within it. The Swedish model was characterized by a strong labour movement as well as inclusive, publicly funded and often publicly administered welfare institutions.
By the early 1980s, the Swedish model began to suffer from international imbalances, declining competitiveness and capital flight. Two polar opposite solutions emerged to restructure the Swedish economy: the first a transition to socialism by socializing the ownership of industry, the second the creation of favorable conditions for the formation of private capital by embracing neoliberalism. The Swedish model was first challenged in 1976 by the Meidner Plan, promoted by the Swedish Trade Union Confederation and trade unions, which aimed at the gradual socialization of Swedish companies through wage earner funds. The Meidner Plan aimed to collectivize capital formation in two generations by having the wage earner funds own predominant stakes in Swedish corporations on behalf of workers. This proposal was supported by Palme and the Social Democratic party leadership, but after Palme's assassination it failed to garner enough support and was defeated by the conservatives in the 1991 Swedish general election.
Upon returning to power in 1982, the Social Democratic party inherited a slowing economy resulting from the end of the post-war boom. The Social Democrats adopted monetarist and neoliberal policies, deregulating the banking industry and liberalizing currency in the 1980s. The economic crisis of the 1990s saw greater austerity measures, deregulation, and the privatization of public services. Into the 21st century, the crisis greatly affected Sweden and its universalistic welfare state, although not as hard as it hit Finland. Sweden remained more Eurosceptic than Finland, and its struggles affected all the other Nordic countries, as Sweden was seen as "the guiding star of the north"; with Sweden fading, the other Nordic countries also felt that they were losing their political identities. When the Nordic model was then gradually rediscovered, cultural explanations were sought for the special features of the Nordic countries.
Reception
The Nordic model has been positively received by some American politicians and political commentators. Jerry Mander has likened the Nordic model to a kind of "hybrid" system featuring a blend of capitalist economics with socialist values, representing an alternative to American-style capitalism. Vermont Senator Bernie Sanders has pointed to Scandinavia and the Nordic model as something the United States can learn from, in particular with respect to the benefits and social protections the Nordic model affords workers and its provision of universal healthcare. Scandinavian political scientist Daniel Schatz argued that Sanders is wrong, saying that "the success of Nordic countries like Sweden – as measured by relatively high living standards accompanied by low poverty, with government-funded education through university, universal health coverage, generous parental-leave policies and long life spans – precedes the contemporary welfare state", adding that "Research has suggested that the Northern European success story has its roots in cultural rather than economic factors. The Scandinavian countries ... historically developed remarkably high levels of social trust, a robust work ethic and considerable social cohesion".
According to Luciano Pellicani, the social and political measures adopted in countries like Sweden and Denmark are the same ones that some other European left-wing politicians theorised as a way to combine justice and freedom, referring to liberal socialism and movements like Giustizia e Libertà and the Fabian Society. According to Naomi Klein, former Soviet leader Mikhail Gorbachev sought to move the Soviet Union in a similar direction to the Nordic system, combining free markets with a social safety net while still retaining public ownership of key sectors of the economy – ingredients that he believed would transform the Soviet Union into "a socialist beacon for all mankind."
The Nordic model has also been positively received by various social scientists and economists. American professor of sociology and political science Lane Kenworthy advocates for the United States to make a gradual transition toward a social democracy similar to those of the Nordic countries, defining social democracy as follows: "The idea behind social democracy was to make capitalism better. There is disagreement about how exactly to do that, and others might think the proposals in my book aren't true social democracy. But I think of it as a commitment to use government to make life better for people in a capitalist economy. To a large extent, that consists of using public insurance programs – government transfers and services."
Nobel Prize-winning economist Joseph Stiglitz says that there is higher social mobility in the Scandinavian countries than in the United States and posits that Scandinavia is now the land of opportunity that the United States once was. American author Ann Jones, who lived in Norway for four years, posits that "the Nordic countries give their populations freedom from the market by using capitalism as a tool to benefit everyone" whereas in the United States "neoliberal politics puts the foxes in charge of the henhouse, and capitalists have used the wealth generated by their enterprises (as well as financial and political manipulations) to capture the state and pluck the chickens."
Economist Jeffrey Sachs is a proponent of the Nordic model, having pointed out that the Nordic model is "the proof that modern capitalism can be combined with decency, fairness, trust, honesty, and environmental sustainability." The Nordic combination of extensive public provision of welfare and a culture of individualism has been described by Lars Trägårdh of Ersta Sköndal University College as "statist individualism." A 2016 survey by the think tank Israel Democracy Institute found that nearly 60 percent of Israeli Jews preferred a "Scandinavian model" economy, with high taxes and a robust welfare state.
Criticism
Socialist economists Pranab Bardhan and John Roemer criticize Nordic-style social democracy for its questionable effectiveness in promoting relative egalitarianism as well as its sustainability. They posit that Nordic social democracy requires a strong labour movement to sustain the heavy redistribution required, arguing that it is idealistic to think similar levels of redistribution can be accomplished in countries with weaker labour movements. They note that even in the Scandinavian countries social democracy has been in decline since the weakening of the labour movement in the early 1990s, arguing that the sustainability of social democracy is limited. Roemer and Bardhan posit that establishing a market-based socialist economy by changing enterprise ownership would be more effective than social democratic redistribution at promoting egalitarian outcomes, particularly in countries with weak labour movements.
Historian Guðmundur Jónsson said that it would be historically inaccurate to include Iceland in one aspect of the Nordic model, that of consensus democracy. Addressing the time period from 1950 to 2000, Jónsson writes that "Icelandic democracy is better described as more adversarial than consensual in style and practice. The labour market was rife with conflict and strikes more frequent than in Europe, resulting in strained government–trade union relationship. Secondly, Iceland did not share the Nordic tradition of power-sharing or corporatism as regards labour market policies or macro-economic policy management, primarily because of the weakness of Social Democrats and the Left in general. Thirdly, the legislative process did not show a strong tendency towards consensus-building between government and opposition with regard to government seeking consultation or support for key legislation. Fourthly, the political style in legislative procedures and public debate in general tended to be adversarial rather than consensual in nature."
In a 2017 study, economists James Heckman and Rasmus Landersø compared American and Danish social mobility and found that social mobility is not as high as figures might suggest in the Nordic countries, although they did find that Denmark ranks higher in income mobility. When looking exclusively at wages (before taxes and transfers), Danish and American social mobility are very similar; it is only after taxes and transfers are taken into account that Danish social mobility improves, indicating that Danish economic redistribution policies are the key drivers of greater mobility. Additionally, Denmark's greater investment in public education did not improve educational mobility significantly, meaning children of non-college-educated parents are still unlikely to receive a college education, although this public investment did result in improved cognitive skills among poor Danish children compared to their American peers. There was also evidence that generous welfare policies could discourage the pursuit of higher education by decreasing the economic advantage of jobs requiring a college education and increasing the welfare available to workers with less education.
Some welfare and gender researchers based in the Nordic countries suggest that these states have often been over-privileged when different European societies are assessed in terms of how far they have achieved gender equality. They posit that such assessments often utilise international comparisons adopting conventional economic, political, educational, and well-being measures. By contrast, they suggest that if one takes a broader perspective on well-being, incorporating social issues such as those associated with bodily integrity or bodily citizenship, then some major forms of men's domination still stubbornly persist in the Nordic countries, e.g. in business, violence to women, sexual violence to children, the military, academia, and religion.
While praising the Nordic model as a "clear and compelling contrast to the neoliberal ideology that has strafed the rest of the world with inequality, ill-health and needless poverty," economic anthropologist Jason Hickel sharply criticizes the "ecological disaster" that accompanies it, noting that data shows the Nordic countries "have some of the highest levels of resource use and CO2 emissions in the world, in consumption based terms, drastically overshooting safe planetary boundaries," and rank towards the bottom of the Sustainable Development Index. He argues that the model needs to be updated for the Anthropocene, and reduce overconsumption while retaining the positive elements of progressive social democracy including universal healthcare and education, paid vacations and reasonable working hours, which have resulted in much better health outcomes and poverty reduction compared to overtly neoliberal countries like the United States, in order to "stand as a beacon for the rest of the world in the 21st century."
Swedish economist John Gustavsson, writing for American conservative magazine The Dispatch, criticized the Nordic model for its high taxation rates, including on the middle class and poor people.
Political scientist Michael Cottakis has noted the rise of right-wing populist and anti-immigration sentiment in the Nordic countries, arguing that these countries, in particular Sweden, have failed to handle immigration effectively.
Misconceptions
George Lakey, author of Viking Economics, says that Americans generally misunderstand the nature of the Nordic model, commenting: "Americans imagine that 'welfare state' means the U.S. welfare system on steroids. Actually, the Nordics scrapped their American-style welfare system at least 60 years ago, and substituted universal services, which means everyone – rich and poor – gets free higher education, free medical services, free eldercare, etc."
In a speech at Harvard's Kennedy School of Government, Lars Løkke Rasmussen, the centre-right Danish prime minister from the conservative-liberal Venstre party, addressed the American misconception that the Nordic model is a form of socialism, which is conflated with any form of planned economy, stating: "I know that some people in the US associate the Nordic model with some sort of socialism. Therefore, I would like to make one thing clear. Denmark is far from a socialist planned economy. Denmark is a market economy."
See also
Dirigisme, a socioeconomic model associated with France
Folkhemmet
Liberal socialism
Market socialism
Nefco
Polder model
Rehn–Meidner model
Rhenish model, a socioeconomic model associated with Germany
Social democracy
Social market economy
Welfare in Finland
Welfare in Sweden
Lists
Human Development Index
Legatum Prosperity Index
List of countries by GDP per capita
List of countries by income equality
List of countries by life expectancy
List of countries by share of income of the richest one percent
List of countries by wealth per adult
List of international rankings
Press Freedom Index
Social Progress Index
Where-to-be-born Index
References
Further reading
Kjellberg, Anders (2022). The Nordic Model of Industrial Relations. Lund: Department of Sociology.
Kjellberg, Anders (2023). The Nordic Model of Industrial Relations: Comparing Denmark, Finland, Norway and Sweden. Department of Sociology, Lund University and Max Planck Institute for the Study of Societies, Cologne.
Livingston, Michael A. (2021). Dreamworld or Dystopia? The Nordic Model and Its Influence in the 21st Century. Cambridge: Cambridge University Press.
External links
"The Nordic Way". . Davos: World Economic Forum. January 2011. Retrieved 3 December 2019.
Thorsen, Dag Einar; Brandal, Nik; Bratberg, Øivind (8 April 2013). "Utopia sustained: The Nordic model of social democracy". Fabian Society. Retrieved 3 December 2019.
"The secret of their success". The Economist. 2 February 2013. Retrieved 3 December 2019.
Sanders, Bernie (26 July 2013). "What Can We Learn From Denmark?". The Huffington Post. Retrieved 3 December 2019.
Isaacs, Julia (25 September 2013). "What Is Scandinavia Doing Right?". The New York Times. Retrieved 3 December 2019.
Stahl, Rune Møller; Mulvad, Andreas Møller (4 August 2015). "What Makes Scandinavia Different?". Jacobin. Retrieved 3 December 2019.
"The Nordic Model: Local Government, Global Competitiveness in Denmark, Finland and Sweden". KommuneKredit. August 2017. Retrieved 3 October 2020.
Goodman, Peter S. (11 July 2019). "The Nordic Model May Be the Best Cushion Against Capitalism. Can It Survive Immigration?". The New York Times. Retrieved 3 October 2020.
"Om Norden" (in Swedish). Föreningen Norden. Retrieved 3 December 2019.
"The Nordic Model". Nordics. Aarhus University. Retrieved 3 October 2020.
Capitalism
Corporatism
Economic policy in Europe
Economic systems
Mixed economies
Nordic politics
Political-economic models
Social democracy
Welfare in Europe
Theory of generations | Theory of generations (or sociology of generations) is a theory posed by Karl Mannheim in his 1928 essay, "Das Problem der Generationen," and translated into English in 1952 as "The Problem of Generations." This essay has been described as "the most systematic and fully developed" and even "the seminal theoretical treatment of generations as a sociological phenomenon". According to Mannheim, people are significantly influenced by the socio-historical environment (in particular, notable events that involve them actively) of their youth; giving rise, on the basis of shared experience, to social cohorts that in their turn influence events that shape future generations. Because of the historical context in which Mannheim wrote, some critics contend that the theory of generations centers on Western ideas and lacks a broader cultural understanding. Others argue that the theory of generations should be global in scope, due to the increasingly globalized nature of contemporary society.
Theory
Mannheim defined a generation (note that some have suggested the term cohort is more correct) as a group of individuals of similar age whose members have experienced a noteworthy historical event within a set period of time, distinguishing social generations from kinship (family, blood-related) generations.
According to Mannheim, the social consciousness and perspective of youth reaching maturity in a particular time and place (what he termed "generational location") is significantly influenced by the major historical events of that era (thus becoming a "generation in actuality"). A key point, however, is that this major historical event has to occur and has to involve the individuals while they are young (thus shaping their lives, as later experiences will tend to receive meaning from those early ones); mere chronological contemporaneity is not enough to produce a common generational consciousness. Mannheim in fact stressed that not every generation will develop an original and distinctive consciousness. Whether a generation succeeds in developing one depends significantly on the pace of social change (the "tempo of change").
Mannheim also notes that social change can occur gradually, without the need for major historical events, though such events are more likely in times of accelerated social and cultural change. He further noted that the members of a generation are internally stratified (by location, culture, class, etc.), so they may view events from different angles and are not totally homogeneous. Even within a "generation in actuality", there may be differing forms of response to the particular historical situation, stratifying it into a number of "generational units" (or "social generations").
Application
Mannheim's theory of generations has been applied to explain how important historical, cultural, and political events of the late 1950s and the early 1960s educated youth about the inequalities in American society, such as through their involvement along with other generations in the Civil Rights Movement, and gave rise to a belief that those inequalities needed to be changed by individual and collective action. This pushed an influential minority of young people in the United States toward social movement activity. On the other hand, the generation which came of age in the later part of the 1960s and 1970s was much less engaged in social movement activity because, according to the theory of generations, the events of that era were more conducive to a political orientation stressing individual fulfillment instead of participation in social movements questioning the status quo.
Other notable applications of Mannheim's theory that illustrate the dynamics of generational change include:
The effects of the Great Depression in the U.S. on young people's orientations toward work and politics
How the Nazi regime in Germany affected young Germans' political attitudes
Collective memories of important historical events that happen during late adolescence or early adulthood
Changing patterns of civic engagement in the U.S.
The effects of coming of age during the second-wave feminist movement in the U.S. on feminist identity
Explaining the rise of same-sex marriage in the United States
The effects of the Chinese Cultural Revolution on youth political activism
Social generation studies have mainly focused on the youth experience from the perspective of Western society. "Social generations theory lacks ample consideration of youth outside of the West. Increased empirical attention to non-Western cases corrects the tendency of youth studies to 'other' non-Western youth and provides a more in-depth understanding of the dynamics of reflexive life management." The constraints and opportunities affecting youths' experiences within particular sociopolitical contexts require research to be done in a wide array of spaces to better reflect the theory and its implications for youths' experiences. Recent works discuss the difficulty of treating generational structures as global processes and proceed to design "glocal" structures instead.
See also
Generation
Strauss–Howe generational theory
Sociology of aging
Sociology of knowledge
References
1923 in science
Cultural generations
Sociological theories
Viking raid warfare and tactics | The term "Viking Age" refers to the period roughly from 790s to the late 11th century in Europe, though the Norse raided Scotland's western isles well into the 12th century. In this era, Viking activity started with raids on Christian lands in England and eventually expanded to mainland Europe, including parts of present-day Belarus, Russia and Ukraine.
While maritime battles were very rare, Viking bands proved very successful at raiding coastal towns and monasteries thanks to their efficient warships, intimidating war tactics, skillful hand-to-hand combat, and fearlessness. What started as Viking raids on small towns transformed into the establishment of important agricultural spaces and commercial trading hubs across Europe through rudimentary colonization. The Vikings' tactics in warfare gave them an enormous advantage in successfully raiding (and later colonising), despite their small population in comparison to that of their enemies.
Culture of war
Vikings, according to Clare Downham in Viking Kings of Britain and Ireland, are "people of Scandinavian culture who were active outside Scandinavia ... Danes, Norwegians, Swedish, Hiberno-Scandinavians, Anglo-Scandinavians, or the inhabitants of any Scandinavian colony who affiliated themselves more strongly with the culture of the colonizer than with that of the indigenous population."
The tactics and warfare of the Vikings were driven in part by their cultural beliefs, themselves rooted in Norse culture and religion and vividly recalled in the later Icelandic sagas, written in the 13th–14th centuries after the Christianisation of the Nordic world.
In the early Viking Age, during the late 8th century and most of the 9th, Norse society consisted of minor kingdoms with limited central authority and organization, leading to communities ruled according to laws made and pronounced by local assemblies called things. Lacking any kind of public executive apparatus (e.g. police), the enforcement of laws and verdicts fell upon the individuals involved in a dispute. As a natural consequence, violence was a common feature of the Norse legal environment. This use of violence as an instrument in disputes was not limited to a man himself but extended to his kin. Personal reputation and honour were important values among Norsemen, and so actionable slander was also a legal category, in addition to physical and material injuries. Mere insults could shame a man's honour, and Norsemen were legally allowed to react violently. With this prevalence of violence came the expectation of fearlessness.
Norsemen believed that the time of death for any individual is predetermined, but that nothing else in life is. Considering this, Norsemen believed there to be two possibilities in life: "success with its attendant fame; or death." The necessity of defending honour with violence, the belief that the time of death was preordained, adventurousness and fearlessness were core values of the Viking Age. These principal values and convictions were displayed in the tactics of Viking raids and warfare.
As in most societies with limited mechanisms for projecting central power, Norse society also shared traits of bonding through mutual gift-giving to ensure alliances and loyalty. One of the reasons many Norse went on such expeditions was the opportunity to gather loot and wealth by trading and raiding. This wealth was then brought back to Scandinavia and used for political gain; e.g. Olaf Tryggvason and Olaf Haraldsson both led successful raiding campaigns that served their later claims to kingship. This was one reason that monasteries and churches were often targeted, due to their wealth in relics and luxury goods like precious metals, fine cloths, and books such as the Codex Aureus, which was stolen by a Viking and later sold to an Anglo-Saxon couple (a note written inside the book after its recovery reads: "I ealdorman Alfred and Wærburh my wife obtained these books from the heathen army with our pure money, that was with pure gold, and that we did for the love of God and for the benefit of our souls and because we did not wish these holy books to remain longer in heathen possession").
Raids
The Vikings regularly attacked coastal regions, which were difficult to defend, and by the mid-9th century were also using rivers and stolen horses to raid deeper inland.
The Norse were born into a seafaring culture. With the Atlantic Ocean to the west and the Baltic and North Sea bordering southern Scandinavia, seafaring proved to be an important means of communication for Scandinavians, and a vital instrument for the Vikings.
Despite reports since the 5th century of the presence of seafaring Germanic peoples both in the Black Sea and in Frisia, and archaeological evidence of earlier contact with the British Isles, the Viking Age proper is characterized by extensive raiding, entering history through records in various annals and chronicles written by its victims. The Annals of St. Bertin and the Annals of Fulda contain West and East Frankish records (respectively) of Viking attacks, as does Regino of Prum's Chronicle, which was written as a history of the Carolingian Empire in its final years.
These raids continued for the entirety of the Viking Age: Vikings would target monasteries along the coast, raid towns for their booty, and set fires in their wake. While there is evidence that Viking arson attacks did occur, more recent scholarship has cast doubt on quite how severe the physical damage (as opposed to the psychological impact) truly was. Regino of Prum's Chronicle records that the palace of Aachen was burned to the ground, but there is no archaeological evidence of destruction on such a scale at the site. These attacks caused widespread fear, so much so that the Vikings were thought by some monks to be a punishment from God. There is also the complication of a lack of direct written sources about these raids from the Viking perspective, which leads to biased views of the raiders from the Christians who were being attacked in their churches and lands.
Initially, the Vikings limited their attacks to "hit-and-run" raids. However, they soon expanded their operations. In the years 814–820, Danish Vikings repeatedly sacked the regions of northwestern France via the Seine River and repeatedly sacked monasteries in the Bay of Biscay via the Loire River. Eventually, the Vikings settled in these areas and turned to farming. This was mainly due to Rollo, a Viking leader who seized what is now Normandy in 879 and was formally granted the Lower Seine by Charles the Simple of West Francia in 911. This became a precursor to the Viking expansion that established important trade posts and agrarian settlements deep in Frankish territory, English territory, and much of what is now European Russian territory. The Vikings had taken control of most of the Anglo-Saxon kingdoms, with the notable exception of Wessex, by the 870s, following the arrival in 865 of the Great Heathen Army, which swept Anglo-Saxon rulers from power. This army focused not on raiding but on conquering and settling in Anglo-Saxon Britain, being composed of small bands already active in Britain and Ireland that worked together for a period of time to accomplish their goals.
The Vikings were also able to establish an extended period of economic and political rule over much of Ireland, England, and Scotland during the Norse Ivarr Dynasty, which started in the late 9th century and lasted until 1094. In Ireland, coastal fortifications known as longphorts were established in many places after initial raids, and they developed into trading posts and settlements over time. Quite a few modern towns in Ireland were founded in this way, including Dublin, Limerick and Waterford.
Warships
Much of the Vikings' success was due to the technical superiority of their shipbuilding. Their ships proved to be very fast. They were not designed for battle at sea, a form of warfare the Vikings very rarely engaged in, but these long, narrow ships could accommodate 50–60 seamen who powered the ship by rowing, as well as a complement of warriors, and so were able to carry sizeable forces at speed and land them wherever advantageous. Due to their shallow draft, Viking ships could land directly on sandy beaches rather than docking in well-fortified harbours. Viking ships made it possible to land practically anywhere on a coast and to navigate rivers in Britain and on the Continent, with raids reported far up rivers such as the Elbe, the Weser, the Rhine, the Seine, the Loire, the Thames, and many more. Vikings also navigated the extensive network of rivers in Eastern Europe, though there they would more often engage in trade than in raiding.
Depending on local resources, the ships were mainly built from strong oak, some partly of pine, but all with riven (split) and then hewn planks that preserved the wood grain unbroken, resulting in light but very strong and flexible strakes. Steering was accomplished with a single rudder at the stern. The relatively short mast allowed fast rigging and unrigging; built for speed when the winds were favourable, it could often pass easily under bridges erected across rivers. These masts were designed to maneuver under the fortified bridges that Charles the Bald of West Francia erected between 848 and 877. The boats had a shallow draft of around a metre. Viking longships were built with speed and flexibility in mind, which allowed Norse builders to craft strong yet elegant ships. Close to 24 metres long and five metres wide, the Gokstad ship is often cited as an example of a typical Viking ship.
Variants of these longships were built with a deeper hull for transporting goods, but what they added in hull depth and durability they sacrificed in speed and mobility. These cargo ships were built to be sturdy and solid, unlike the drakkar warships, which were built to be fast. There is mention of the knörr being used as a warship in poems written by skalds; specifically, the poem "Lausavisor" by Vígfúss Víga-Glúmsson describes a knörr being used as a battleship.
Seafaring military strategies
The fast design of Viking ships was essential to their hit-and-run raids. For instance, when Frisia was sacked in the early 9th century, Charlemagne mobilized his troops as soon as he heard of the raid but found no Vikings by the time he arrived. Their ships gave the Vikings an element of surprise. Travelling in small bands, they could easily go undetected, swiftly enter a village or monastery, pillage and collect booty, and leave before reinforcements arrived. Vikings understood the advantages of the longships' mobility and used them to great extent.
Viking fleets of over a hundred ships did occur, but these fleets usually banded together only for a single, temporary purpose, being composed of smaller fleets each led by its own chieftain, or of different Norse bands. Such fleets were most often seen in the raids on Francia between 841 and 892, and can be attributed to the fact that it was during this time that the Frankish aristocracy began paying off Vikings and buying mercenaries in return for protection from Viking raids. Thus appeared the rudimentary structures of Viking armies.
Viking ships would rarely try to ram other ships in the open sea, as their construction did not allow for it. Vikings did attack ships, not with the intent of destroying them, but rather to board and seize them. Vikings raided for economic rather than political or territorial gains, and so were eager to enrich themselves through ransom, extortion, and slave trading. A noteworthy example of ransom or tribute being paid to end a conflict is the 882 siege of Asselt, which ended with emperor Charles the Fat paying the Vikings a tribute of gold and silver, as well as granting them land and allowing them to sail back to Scandinavia with an alleged two hundred captives.
While naval Viking battles were not as common as battles on land, they did occur. As the Vikings had little to fear from other European countries invading the inhospitable regions of Scandinavia, most naval battles were fought amongst Vikings themselves, "Dane against Norwegian, Swede against Norwegian, Swede against Dane." Most Viking-on-Viking naval battles were little more than infantry battles on a floating platform. Viking fleets would lash their boats together, their prows facing the enemy. When they got close enough, the fighters would throw ballast stones and spears and shoot their longbows. Archers would be positioned in the back of the ships, protected by a shield wall formation constructed at the front of the ship. Depending on the size of the defending fleet, some would attack from smaller craft to flank the bigger ships.
Battle tactics on land
These very small fleets proved brutally effective at scaring locals and made it difficult for English and Frankish territories to counter such alien tactics. Sprague compares these tactics to those of contemporary Western special forces soldiers, who "attack in small units with specific objectives." Later, in the 860s, the formation of the Great Heathen Army brought about a more organized type of warfare for the Vikings: large squads of raiders banded together to attack towns and cities, landing from fleets comprising hundreds of ships.
Viking raiders would anchor their largest warships before storming a beach. "It has been suggested that Sö 352 depicts an anchor and rope ... It is perhaps more plausibly an anchor-stone ...". However, it was more common practice for Vikings to beach their regular warships on land, where their battle tactics relied on elements of surprise. "Vikings were notorious for laying ambushes and using woods to lay in wait for armies approaching along established roads." If confronted by organized defending forces during raids, Vikings would form a wedge, with their best men at its front. They would throw spears and rush this wedge through enemy lines, where they could engage in hand-to-hand combat, which was their forte. Some survivors of sea battles were pressed into guarding the ships during land skirmishes.
Sagas of the Viking Age often mention berserkers. These fabled Viking warriors were said to have received spiritual, magical powers from the god of war Odin that made them impervious to injuries on the battlefield. While such stories are exaggerated, the term berserker is rooted in truths about Viking warriors who were able to enter an intense, trance-like state in which they would "engage in reckless fighting." These warriors were greatly feared by Christians in Frankish and English regions, who viewed such men as satanic. The reason for the raids as a whole is unknown, but some have suggested that the increase in trade created a growth in piracy.
Viking tactics were unconventional by wider European standards at the time, and this element of "otherness" brought with it a tactical advantage. They also attacked holy sites far more regularly than Frankish and other Christian armies did, and they never arranged battle times. Deceit, stealth, and ruthlessness were not seen as cowardly. During raids, the Vikings targeted religious sites because of their vulnerability, often killing the clergy at these sites or taking them prisoner, to be either ransomed or enslaved. Norsemen who sailed back to Scandinavia after raiding brought their loot home as a symbol of pride and power: "The Viking chieftains Sigfrid and Gorm 'sent ships loaded with treasure and captives back to their country' in 882". Additionally, "overwintering" was a widely used form of short-term occupation by Viking warbands, in which they would descend on "monasteries, towns and royal estates" after the harvests had been gathered and then use the sites as fortified hubs from which they launched raids deeper inland. The Franks rarely, if ever, campaigned during the winter, even under Charlemagne. Once the Vikings had "dug in", as it were, it was incredibly difficult to raise armies to root them out, owing to the resource-intensive nature of mustering and maintaining an army, especially when living off the land was not an option. Occupying warbands would then withdraw in early spring, before the weather turned against them and armies could be raised again.
Warriors could be as young as 12 years old. Various basic physical tests were required to join the Viking forces, but these tests were considered easy to pass.
Additionally, during inland raiding campaigns, the loot from a given target would be stored in a warband's ship that would then sail further upriver while the raiding party proceeded by land to a rendezvous point. By doing this, Vikings could ensure the safety of their plunder from counter-raids, as well as drastically increase the amount they could carry.
Common weapons
Spear
The most common weapon in the Viking arsenal was the spear. Spears were inexpensive and effective weapons, and could also be used when hunting. In the late Roman Iron Age (ending c. 500 CE), the Norse were renowned for their preference for, and prowess with, the light spear. The wooden shaft of the Viking spear was between two and three metres long. There were two types of spear: one made for throwing, the other generally used for thrusting. The shafts were similar, but the heads of throwing spears were roughly thirty centimetres long while those of thrusting spears were close to sixty. Spears were sometimes used as projectile weapons in the occasional naval fight, as well as during raids onshore and in battle. The spear was popular because it was inexpensive and had a longer reach than the sword, making it, contrary to popular belief, the most common battlefield weapon all over the world.
Archery
Another common weapon in the Viking arsenal was the bow. "In combat, archers formed up behind a line of spearmen who defended against a mounted attack."
Bows
One bow found in an Irish grave was of yew, with a rounded rectangular cross-section flattened toward the tips, which had been heat-bent toward the belly side. Other bows, either complete or in pieces, were made of yew and elm, as found in Hedeby.
Arrows
Viking arrows have been found in pieces; the ancient arrow shafts were made from light, straight woods such as ash, pine, poplar and spruce. Three feathers were used for fletching. "The Viking's long arrows are meant to be drawn to the ear for instinctive shooting, meaning that the archer does not sight on or even look at his arrow."
Axe
The axe overtook the spear as the most common weapon during the turbulent Migration Age, which saw much internal raiding and warfare in Scandinavia. It served as the first "siege weapon" for raiding enemy farmhouses, where a spear or a sword could do little damage. The axe was commonly used for all kinds of farm labour and logging, as well as in construction and shipbuilding, and was eventually adapted for use in Viking raids. Axes varied in size, from small handheld axes that could be used both in raids and in farming, to Danish axes that were well over a metre in length. The popularity of the axe is often misunderstood in modern culture: the battle-axe was not seen as a weapon superior to the spear, and historical evidence shows that its use was rather limited. These axes had a wooden shaft and a large, curved iron blade. They required less swinging power than one might expect, as the heads, while large, usually weighed only 0.8–0.9 kg; they were thus light, fast weapons that did not depend on gravity and momentum to do most of the work. The axe had a point at each tip of the blade where the curve tapered off, allowing it to hook an opponent while also doubling as a thrusting weapon.

King Magnus of Norway inherited his axe from his father, the patron saint Olav Haraldsson. He named this axe Hel, after the Norse goddess of death (a name Christians associated with the word Hell). The axe of Magnus is still portrayed in the Norwegian coat of arms.
Sword
Viking swords were pattern-welded and most commonly decorated with copper inlays and icons, with a fuller running down the centre of the blade to reduce its weight. A few single-edged swords around a metre in length have been unearthed, but the swords most commonly found in Viking graves are double-edged, with blades measuring around 90 cm long and 15 cm wide.

Swords were common in battles and raids throughout the Viking Age, typically as a secondary weapon drawn when fighting had fallen out of formation or the primary weapon was damaged. These double-edged swords were designed for slashing and cutting rather than thrusting, so the edge was carefully sharpened while the tip was often left relatively dull.
A sword was considered a personal object among the Vikings. Warriors named their swords, feeling that objects that guarded their lives deserved identities of their own. Depending on its make, a sword was often associated with prestige and value, reflecting the importance of honour in the Viking Age. While the Vikings used their own swords in battle, they also prized Frankish battle swords for their acclaimed craftsmanship.
Weapons often served more than one purpose. If two people were in disagreement, one would often challenge his offender to a duel of honour intended to resolve the issue. The challenge took place either on a small island or in a marked-off area: a square with sides of nine to twelve feet, with an animal hide placed inside it. Each man was allowed three shields and a shield-bearer, a helper who carried and replaced shields for the combatant during the fight. The person who had been challenged was entitled to the first blow at the shields; the opponent could parry the blow and counter with his own strike, only one strike at a time being allowed. Once all of a combatant's shields had been destroyed, he had to defend himself as best he could with his sword. This continued until someone was injured; if blood fell on the animal hide, that man was required to pay three marks of silver to be set free and have his honour restored.
Defensive equipment
Few intact helmets have been recovered from Viking burial sites (often just fragments of metal), yet contemporary depictions of Viking warriors do show them wearing helmets. This has led some historians, such as Anne Pedersen, to suspect that most warriors wore leather helmets rather than metal ones, though leather would have offered little protection and there is little evidence for it. Another piece of defensive equipment was the shield. The shield was round and easy to manoeuvre, although on horseback it left the legs exposed. Shields were made of wooden boards held together by a rim of either leather or thin iron fittings, and they also appear to have been covered in thin leather, which prevented them from splintering. In addition, the weapons of their enemies sometimes became stuck in the shield, giving the Viking an opportunity to kill them. A shield's hand grip was hidden behind an iron boss, and shields measured about one metre in diameter.
Fragments of chain mail have been uncovered in particularly wealthy Viking graves; in the 9th and 10th centuries such armour would have been extremely expensive, given the material, time, and labour required to manufacture it.
See also
Viking raids in the Rhineland
Ushkuiniks – Novgorod's privateers who inherited the Vikings' style of warfare
Mangayaw – similar seasonal naval raids for prestige and loot among Austronesian societies in the Philippines
The Danish National Museum on expeditions and raids
References
Sources
Abels, Richard. "Alfred the Great and Æthelred II 'the Unready': The Viking Wars in England, c. 850–1016." United States Naval Academy Press, 2009. https://www.academia.edu/30747712/Alfred_the_Great_and_%C3%86thelred_II_the_Unready_the_Viking_Wars_in_England_c_850_1016
Bruun, Per. "The Viking Ship." Journal of Coastal Research 13.4 (1997): 1282–89.
Brink, Stefan, and Neil Price (eds). The Viking World. London: Routledge, 2008.
Coupland, Simon. "Holy Ground? The Plundering and Burning of Churches by Vikings and Franks in the Ninth Century." Viator: Medieval and Renaissance Studies 45 (2014): 73–97.
DeVries, Kelly Robert, and Robert Douglas Smith. Medieval Military Technology, 2nd edition. Toronto: University of Toronto Press, 2012.
Fasulo, David F. "Medieval Scandinavia: Overview of Viking Shipbuilding." Great Neck Publishing, 2011.
Fasulo, David F. "Medieval Scandinavia: Overview of Viking Warfare." Great Neck Publishing, 2011.
Maclean, Simon (trans.). Regino of Prüm's Chronicle. 2009.
Nelson, Janet (trans.). The Annals of St. Bertin. 2000.
Pedersen, Anne. "Viking Weaponry." In Brink, Stefan, and Neil Price (eds), The Viking World. London: Routledge, 2008.
Reuter, Timothy (trans.). The Annals of Fulda. 1992.
Short, William Rhuel. Icelanders in the Viking Age: The People of the Sagas. Jefferson, NC: McFarland, 2010.
Sprague, Martina. Norse Warfare: The Unconventional Battle Strategies of the Ancient Vikings. New York: Hippocrene, 2007.
Taylor, Simon, Gareth Williams, B. E. Crawford, and Beverly Ballin Smith. West Over Sea: Studies in Scandinavian Sea-borne Expansion and Settlement Before 1300: A Festschrift in Honour of Dr Barbara Crawford. Leiden: Brill (The Northern World), 2007.
Williams, Gareth. "Raiding and Warfare." In Brink, Stefan, and Neil Price (eds), The Viking World. London: Routledge, 2008.
Winroth, Anders. The Age of the Vikings. Princeton: Princeton University Press, 2014.
External links
Warfare | 0.778679 | 0.989946 | 0.77085 |
Political modernization | Political modernization (also spelled as political modernisation; ), refers to the process of development and evolution from a lower to a higher level, in which a country's constitutional system and political life moves from superstition of authority, autocracy and the rule of man to rationality, autonomy, democracy and the rule of law. It manifests itself in certain types of political change, like political integration, political differentiation, political secularisation, and so forth. The process of political modernisation has enhanced the capacity of a society's political system, i.e. the effectiveness and efficiency of its performance.
Sustainability studies researcher George Francis argues that 'political modernisation' refers to the changes in the nation-state brought about by the neoliberal globalisation process since the 1970s. It primarily consists of processes of differentiation of political structure and secularisation of political culture.
According to Samuel Huntington, an American political scientist, political modernization consists of three basic elements: the rationalization of authority, the differentiation of structure, and the expansion of political participation.
References
modernization
Political theories
Political terminology | 0.787182 | 0.979235 | 0.770836 |
A Distant Mirror | A Distant Mirror: The Calamitous 14th Century is a narrative history book by the American historian Barbara Tuchman, first published by Alfred A. Knopf in 1978.
It won a 1980 U.S. National Book Award in History.
The main title, A Distant Mirror, conveys Tuchman's thesis that the death and suffering of the 14th century reflect those of the 20th century, particularly the horrors of World War I.
Summary
The book's focus is the Crisis of the Late Middle Ages which caused widespread suffering in Europe in the 14th century. Drawing heavily on Froissart's Chronicles, Tuchman recounts the histories of the Hundred Years' War, the Black Plague, the Papal Schism, pillaging mercenaries, anti-Semitism, popular revolts including the Jacquerie in France, the liberation of Switzerland, the Battle of the Golden Spurs, and various peasant uprisings. She also discusses the advance of the Islamic Ottoman Empire into Europe until the disastrous Battle of Nicopolis. However, Tuchman's scope is not limited to political and religious events. She begins with a discussion of the Little Ice Age, a change in climate that reduced average temperatures in Europe well into the mid-19th century, and describes the lives of all social classes, including nobility, clergy, and peasantry.
Much of the narrative is woven around the life of the French nobleman Enguerrand de Coucy. Tuchman chose him as a central figure partly because his life spanned much of the 14th century, from 1340 to 1397. A powerful French noble who married Isabella, eldest daughter of Edward III of England, Coucy's ties put him in the middle of events.
Critical reception
A Distant Mirror received much popular acclaim. A reviewer in History Today described it as an enthralling work full of "vivid pen-portraits". In The Spectator, David Benson called it "an exciting and even bracing" book which did away with many sentimental myths about the Middle Ages. It also received a favorable review in the Los Angeles Times.
However, scholarly reaction was muted. In the journal Speculum, Charles T. Wood praised Tuchman's narrative abilities but described the book as a "curiously dated and old-fashioned work" and criticized it for being shaped by the political concerns of the United States in the late 1960s and early 1970s. Bernard S. Bachrach criticized Tuchman's reliance on secondary sources and dated translations of medieval narratives at the expense of archival research, and characterized the book as a whole as "a readable fourteenth-century version of the Fuzz n' Wuz (cops and corpses) that dominates the evening news on television." Thomas Ohlgren agreed with many of Bachrach's criticisms, and further took issue with many perceived anachronisms in Tuchman's characterization of the medieval world and a lack of scholarly rigor. William McNeill, writing in the Chicago Tribune, thought that A Distant Mirror, while well-written on a technical level, did not present an intelligible picture of the period.
The book inspired Katherine Hoover to write her composition Medieval Suite.
Editions
All editions are reprintings with identical pagination and contents (xx, 677 pages).
Notes
References
1978 non-fiction books
20th-century history books
Alfred A. Knopf books
History books about Europe
History books about the Middle Ages
National Book Award-winning works
14th century
Hundred Years' War literature
Books by Barbara W. Tuchman | 0.775426 | 0.993989 | 0.770765 |
Culture change | Culture change is a term used in public policy making and in workplaces that emphasizes the influence of cultural capital on individual and community behavior. It has been sometimes called repositioning of culture, which means the reconstruction of the cultural concept of a society. It places stress on the social and cultural capital determinants of decision making and the manner in which these interact with other factors like the availability of information or the financial incentives facing individuals to drive behavior.
These cultural capital influences include the role of parenting, families and close associates; organizations such as schools and workplaces; communities and neighborhoods; and wider social influences such as the media. It is argued that this cultural capital manifests into specific values, attitudes or social norms which in turn guide the behavioral intentions that individuals adopt in regard to particular decisions or courses of action. These behavioral intentions interact with other factors driving behavior such as financial incentives, regulation and legislation, or levels of information, to drive actual behavior and ultimately feed back into underlying cultural capital.
In general, cultural stereotypes present great resistance to change and to their own redefinition. Culture often appears fixed to the observer at any one point in time because cultural mutations occur incrementally. Cultural change is a long-term process, and policymakers need to make a great effort to improve some basic aspects of a society's cultural traits.
Culture
Raimon Panikkar identified 29 ways in which cultural change can be brought about, including growth, development, evolution, involution, renovation, reconception, reform, innovation, revivalism, revolution, mutation, progress, diffusion, osmosis, borrowing, eclecticism, syncretism, modernization, nudging, indigenization, and transformation. In this context, modernization could be viewed as the adoption of Enlightenment-era beliefs and practices, such as science, rationalism, industry, commerce, democracy, and the notion of progress. Rein Raud, building on the work of Umberto Eco, Pierre Bourdieu and Jeffrey C. Alexander, has proposed a model of cultural change based on claims and bids, which are judged by their cognitive adequacy and endorsed or not endorsed by the symbolic authority of the cultural community in question.
Cultural invention
Cultural invention has come to mean any innovation that is new and found to be useful to a group of people and expressed in their behavior but which does not exist as a physical object. Humanity is in a global "accelerating culture change period," driven by the expansion of international commerce, the mass media, and above all, the human population explosion, among other factors. Culture repositioning means the reconstruction of the cultural concept of a society.
Cultures are internally affected by both forces encouraging change and forces resisting change. These forces are related to both social structures and natural events, and are involved in the perpetuation of cultural ideas and practices within current structures, which themselves are subject to change. (See structuration.)
Social conflict
Social conflict and the development of technologies can produce changes within a society by altering social dynamics and promoting new cultural models, and spurring or enabling generative action. These social shifts may accompany ideological shifts and other types of cultural change. For example, the U.S. feminist movement involved new practices that produced a shift in gender relations, altering both gender and economic structures. Environmental conditions may also enter as factors. For example, after tropical forests returned at the end of the last ice age, plants suitable for domestication were available, leading to the invention of agriculture, which in turn brought about many cultural innovations and shifts in social dynamics.
Diffusion
Cultures are externally affected via contact between societies, which may also produce—or inhibit—social shifts and changes in cultural practices. War or competition over resources may impact technological development or social dynamics. Additionally, cultural ideas may transfer from one society to another, through diffusion or acculturation. In diffusion, the form of something (though not necessarily its meaning) moves from one culture to another. For example, Western restaurant chains and culinary brands sparked curiosity and fascination among the Chinese as China opened its economy to international trade in the late 20th century. "Stimulus diffusion" (the sharing of ideas) refers to an element of one culture leading to an invention or propagation in another. "Direct borrowing," on the other hand, tends to refer to technological or tangible diffusion from one culture to another. Diffusion of innovations theory presents a research-based model of why and when individuals and cultures adopt new ideas, practices, and products.
Acculturation
Acculturation has different meanings. Still, in this context, it refers to the replacement of traits of one culture with another, such as what happened to certain Native American tribes and many indigenous peoples across the globe during the process of colonization. Related processes on an individual level include assimilation (adoption of a different culture by an individual) and transculturation. The transnational flow of culture has played a major role in merging different cultures and sharing thoughts, ideas, and beliefs.
Achieving culture change
The term is used by Knott et al. of the Prime Minister's Strategy Unit in their 2008 publication Achieving Culture Change: A Policy Framework. The paper sets out how public policy can achieve social and cultural change through 'downstream' interventions, including fiscal incentives, legislation, regulation and information provision, and also through 'upstream' interventions such as parenting, peer and mentoring programs, or the development of social and community networks.
The key concepts the paper is based on include:
Cultural capital - such as the attitudes, values, aspirations and sense of self-efficacy which influence behavior. Cultural capital is itself influenced by behavior over time
The shifting social zeitgeist - whereby the social norms and values that predominate within the cultural capital of a society evolve over time
The process by which political narrative and new ideas and innovations shift the social zeitgeist over time within the constraint of the 'elastic band' of public opinion
The process of behavioral normalization - whereby behavior and actions pass through into social and cultural norms (for example, Knott et al. argue that the UK experience of seat belt enforcement established and reinforced this as a social norm)
The use of customer insight
The importance of tailoring policy programmes around an ecological model of human behavior to account for how policy will interact with cultural capital and affect it over time.
Knott et al. use examples from a range of policy areas to demonstrate how the culture change framework can be applied to policymaking. For example:
To encourage educational aspiration they recommend more use of early years and parenting interventions, an improved childhood offer, and development of positive narratives on education as well as integrated advisory systems, financial assistance and targeted social marketing approaches.
To promote healthy living and personal responsibility they recommend building healthy living into community infrastructure, building partnerships with schools and employers, more one-to-one support for wellbeing alongside use of regulation and legislation on unhealthy products, provision of robust health information and health marketing to promote adaptive forms of behaviour.
To develop environmentally sustainable norms they recommend reinforcing sustainability throughout policy narratives, using schools and the voluntary sector to promote environmental messages, development of infrastructure that make sustainable choices easy, together with a wider package of measures on fiscal incentives, regulation, advisory services and coalition movements.
See also
Behavioural economics
Cultural economics
Cultural geography
Cultural psychology
Social psychology
Cultural capital
Market failure
Mediatization (media)
Social change
Sociocultural evolution
Theory of planned behavior
Cultural engineering
Notes
References
Groh, Arnold (2019). Theories of Culture. London: Routledge.
Knott, David; Muers, Stephen; Aldridge, Stephen (2008). Achieving Culture Change: A Policy Framework. Prime Minister's Strategy Unit.
GSR Behaviour Change Knowledge Review (2008). Reference Report: An overview of behaviour change models and their uses
External links
Baconbutty on Culture Change
Cut crime with drink tax, Gordon Brown told, Daily Telegraph
The naughty nation, New Statesman
Winning Hearts and Minds
Transformer une culture : un cadre d'action pour les politiques publiques (in French)
Leading Teams on Culture Change
Policy
Change
Social psychology
Cultural geography
Cultural economics | 0.777694 | 0.991073 | 0.770752 |
Ethnology | Ethnology (from the , meaning 'nation') is an academic field and discipline that compares and analyzes the characteristics of different scenarios peoples and the relationships between them (compare cultural, social, or sociocultural anthropology).
Scientific discipline
Compared to ethnography, the study of single groups through direct contact with the culture, ethnology takes the research that ethnographers have compiled and then compares and contrasts different cultures.
The term ethnologia (ethnology) is credited to Adam Franz Kollár (1718–1783), who used and defined it in his Historiae ivrisqve pvblici Regni Vngariae amoenitates, published in Vienna in 1783, as: "the science of nations and peoples, or, that study of learned men in which they inquire into the origins, languages, customs, and institutions of various nations, and finally into the fatherland and ancient seats, in order to be able better to judge the nations and peoples in their own times."
Kollár's interest in linguistic and cultural diversity was aroused by the situation in his native multi-ethnic and multilingual Kingdom of Hungary and his roots among its Slovaks, and by the shifts that began to emerge after the gradual retreat of the Ottoman Empire in the more distant Balkans.
Among the goals of ethnology have been the reconstruction of human history, and the formulation of cultural invariants, such as the incest taboo and culture change, and the formulation of generalizations about "human nature", a concept which has been criticized since the 19th century by various philosophers (Hegel, Marx, structuralism, etc.). In some parts of the world, ethnology has developed along independent paths of investigation and pedagogical doctrine, with cultural anthropology becoming dominant especially in the United States, and social anthropology in Great Britain. The distinction between the three terms is increasingly blurry. Ethnology has been considered an academic field since the late 18th century, especially in Europe and is sometimes conceived of as any comparative study of human groups.
The 15th-century exploration of the Americas by European explorers had an important role in formulating new notions of the Occident (the Western world), such as the notion of the "Other". The term was used in conjunction with "savages", who were seen either as brutal barbarians or, alternatively, as "noble savages". Thus, civilization was opposed in a dualist manner to barbarism, a classic opposition constitutive of the even more commonly shared ethnocentrism. The progress of ethnology, for example with Claude Lévi-Strauss's structural anthropology, led to the criticism of conceptions of linear progress, and of the pseudo-opposition between "societies with histories" and "societies without histories", judged too dependent on a limited view of history as constituted by cumulative growth.
Lévi-Strauss often referred to Montaigne's essay on cannibalism as an early example of ethnology. Lévi-Strauss aimed, through a structural method, at discovering universal invariants in human society, chief among which he believed to be the incest taboo. However, the claims of such cultural universalism have been criticized by various 19th- and 20th-century social thinkers, including Marx, Nietzsche, Foucault, Derrida, Althusser, and Deleuze.
The French school of ethnology was particularly significant for the development of the discipline, since the early 1950s. Important figures in this movement have included Lévi-Strauss, Paul Rivet, Marcel Griaule, Germaine Dieterlen, and Jean Rouch.
Scholars
See: List of scholars of ethnology
See also
Anthropology
Cultural anthropology
Comparative cultural studies
Cross-cultural studies
Ethnography
Folklore studies
Cultural survival
Culture
Ethnocentrism
Evolutionism
Indigenous peoples
Intangible cultural heritage
Marxism
Meta-analysis
Critical theory
Modernism
Postmodernism
Postcolonial
Decoloniality
Primitive culture
Primitivism
Scientific Racism
Secondary research
Society
Structural anthropology
Structural functionalism
Ethnobiology
Ethnopoetics
Ethnic studies
Critical race studies
Cultural studies
References
Bibliography
Forster, Johann Georg Adam. Voyage round the World in His Britannic Majesty's Sloop, Resolution, Commanded by Capt. James Cook, during the Years 1772, 3, 4, and 5 (2 vols), London (1777).
Lévi-Strauss, Claude. The Elementary Structures of Kinship (1949); Structural Anthropology (1958)
Mauss, Marcel. originally published as Essai sur le don. Forme et raison de l'échange dans les sociétés archaïques in 1925, this classic text on gift economy appears in the English edition as The Gift: The Form and Reason for Exchange in Archaic Societies.
Maybury-Lewis, David. Akwe-Shavante society (1967), The Politics of Ethnicity: Indigenous Peoples in Latin American States (2003).
Clastres, Pierre. Society Against the State (1974).
Pop, Mihai and Glauco Sanga. "Problemi generali dell'etnologia europea", La Ricerca Folklorica, No. 1, La cultura popolare. Questioni teoriche (April 1980), pp. 89–96.
External links
What is European Ethnology?
Webpage "History of German Anthropology/Ethnology 1945/49-1990
Ethnologue: Languages of the World – describes the languages and ethnic groups found worldwide, grouped by host nation-state.
Division of Anthropology, American Museum of Natural History – over 160,000 objects from Pacific, North American, African, Asian ethnographic collections with images and detailed description, linked to the original catalogue pages, field notebooks, and photographs are available online.
National Museum of Ethnology – Osaka, Japan
Ethnicity
Cultural anthropology
Sociological theories
Sociology of culture | 0.775812 | 0.993471 | 0.770747 |
Economic globalization | Economic globalization is one of the three main dimensions of globalization commonly found in academic literature, with the two others being political globalization and cultural globalization, as well as the general term of globalization.
Economic globalization refers to the widespread international movement of goods, capital, services, technology and information. It is the increasing economic integration and interdependence of national, regional, and local economies across the world through an intensification of cross-border movement of goods, services, technologies and capital. Economic globalization primarily comprises the globalization of production, finance, markets, technology, organizational regimes, institutions, corporations, and people.
While economic globalization has been expanding since the emergence of trans-national trade, it has grown at an increased rate due to improvements in the efficiency of long-distance transportation, advances in telecommunication, the importance of information rather than physical capital in the modern economy, and by developments in science and technology. The rate of globalization has also increased under the framework of the General Agreement on Tariffs and Trade and the World Trade Organization, in which countries gradually cut down trade barriers and opened up their current accounts and capital accounts. This recent boom has been largely supported by developed economies integrating with developing countries through foreign direct investment, lowering costs of doing business, the reduction of trade barriers, and in many cases cross-border migration.
Evolution of globalization
History
International commodity markets, labor markets, and capital markets make up the economy and define economic globalization.
Beginning as early as 6500 BCE, people in Syria were trading livestock, tools, and other items. In Sumer, an early civilization in Mesopotamia, a token system was one of the first forms of commodity money. Labor markets consist of workers, employers, wages, income, supply and demand. Labor markets have been around as long as commodity markets. The first labor markets provided workers to grow crops and tend livestock for later sale in local markets. Capital markets emerged in industries that require resources beyond those of an individual farmer.
Technology
World War I disrupted economic globalization, with countries adopting protectionist policies and trade barriers, slowing global trade. The 1956 invention of containerized shipping and larger ship sizes reduced costs, facilitating global trade.
Globalization resumed in the 1970s as governments highlighted trade benefits. Subsequent technology advancements have accelerated global trade expansion.
Policy and government
The GATT/WTO framework, initiated in 1947, led participating countries to reduce their tariff and non-tariff barriers to trade. Indeed, the idea of Most Favoured Nation status was essential to the GATT. In order to accede, governments had to shift their economies from central planning to market-driven, especially after the fall of the Soviet Union.
On 27 October 1986, the London Stock Exchange enacted newly deregulated rules that enabled global interconnection of markets, with an expectation of huge increases in market activity. This event came to be known as the Big Bang.
By the time the World Trade Organization was established in 1995, as the baton was passed from the GATT, the system had grown to 128 countries, including the Czech Republic, Slovakia and Slovenia. The year 1995 also saw the WTO pass the General Agreement on Trade in Services, while the 1998 defeat of the OECD's Multilateral Agreement on Investment was a hiccup on the route to economic globalization.
Multinational corporations reorganized production to take advantage of these opportunities. Labor-intensive production migrated to areas with lower labor costs, especially China, later followed by other functions as skill levels increased. Networks raised the level of wealth consumption and geographical mobility. This highly dynamic worldwide system had powerful ramifications. The World Trade Organization Ministerial Conference of 1999 and associated 1999 Seattle WTO protests were a significant step on the road to economic globalization.
The People's Republic of China (2001) and the last remnants of ex-Soviet bloc countries like Ukraine (2008) and Russia (2012) were admitted much later to the WTO process after painful structural reforms.
The Multilateral Convention to Implement Tax Treaty Related Measures to Prevent Base Erosion and Profit Shifting, which entered into force on 1 July 2018, is an effort to harmonize tax regimes in order to prevent multi-national firms from taking advantage of loopholes like Ireland's Green Jersey BEPS tool.
Global agents
International governmental organizations
An intergovernmental organization or international governmental organization (IGO) is an entity created by treaty, involving two or more nations, to work in good faith on issues of common interest. IGOs strive for peace and security and deal with economic and social questions. Examples include the United Nations, the World Bank and, on a regional level, the North Atlantic Treaty Organization, among others.
International non-governmental organizations (NGOs)
International non-governmental organizations include charities, non-profit advocacy groups, business associations, and cultural associations. International charitable activities increased after World War II and on the whole NGOs provide more economic aid to developing countries than developed country governments.
Businesses
Since the 1970s, multinational businesses have increasingly relied on outsourcing and subcontracting across vast geographical spaces, due to the global nature of supply chains and the production of intermediate products. Firms also engage in inter-firm alliances and rely on foreign research and development. This contrasts with past periods, when firms kept production internalized or within a localized geography. Innovations in communications and transportation technology, as well as greater economic openness and less government intervention, have made a shift away from internalization more feasible. Additionally, businesses going global must learn to interact with cultural agility, working with people of many diverse cultural backgrounds as they expand their markets.
Immigrants
International immigrants transfer significant amounts of money through remittances to lower-income relatives. Communities of immigrants in the destination country often provide new arrivals with information and ideas about how to earn money. In some cases, this has resulted in disproportionately high representation of some ethnic groups in certain industries, especially if economic success encourages more people to move from the source country. The movement of people also spreads technology and aspects of business culture, and moves accumulated financial assets.
Impact
Economic growth and poverty reduction
Economic growth accelerated and poverty declined globally following the acceleration of globalization.
According to the International Monetary Fund, growth benefits of economic globalization are widely shared. While several globalizers have seen an increase in inequality, most notably China, this increase in inequality is a result of domestic liberalization, restrictions on internal migration, and agricultural policies, rather than a result of international trade.
Poverty has been reduced as evidenced by a 5.4 percent annual growth in income for the poorest fifth of the population of Malaysia. Even in China, where inequality continues to be a problem, the poorest fifth of the population saw a 3.8 percent annual growth in income. In several countries, those living below the dollar-per-day poverty threshold declined. In China, the rate declined from 20 to 15 percent and in Bangladesh the rate dropped from 43 to 36 percent.
Globalizers are narrowing the per capita income gap between the rich and the globalizing nations. China, India, and Bangladesh, some of the newly industrialised nations in the world, have greatly narrowed inequality due to their economic expansion.
Global supply chain
The global supply chain consists of complex interconnected networks that allow companies to produce, handle, and distribute various goods and services to the public worldwide.
Corporations manage their supply chain to take advantage of cheaper costs of production. A supply chain is a system of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer. Supply chain activities involve the transformation of natural resources, raw materials, and components into a finished product that is delivered to the end customer. Supply chains link value chains. Supply and demand can be very fickle, depending on factors such as the weather, consumer demand, and large orders placed by multinational corporations.
Labor conditions and environment
"Race to the bottom"
Globalization is sometimes perceived as a cause of a phenomenon called the "race to the bottom", which implies that to minimize cost and increase delivery speed, businesses tend to locate operations in countries with the least stringent environmental and labor regulations. Pressure to do this increases if competitors lower costs by the same means. This directly results in poor working conditions, low wages, job insecurity, and pollution, and it also encourages governments to under-regulate in order to attract jobs and economic investment. However, if business demand is sufficiently high, the labor pool in low-wage countries becomes exhausted (as has happened in China), resulting in higher wages due to competition and in more demand from the public for government protection against exploitation and pollution. From 2003 to 2013, wages in China and India rose by around 10–20% a year.
Health risks
In developing countries with loose labor regulations, there are adverse health consequences from working long hours, and individuals burden themselves from working within vast global supply chains. Women in agriculture, for example, are often asked to work long hours handling chemicals such as pesticides and fertilizers without any protection.
Although both men and women experience shortcomings in health, the final reports stated that women, with the double burden of domestic and paid work, experience an increased risk of psychological distress and suboptimal health. Strazdins concluded that "negative work-family spillover especially is associated with health problems among both women and men, and negative family-work spillover is related to a poorer health status among women."
It is common for such working conditions to bring about adverse health effects or even death, owing to weak safety policies. After the tragic collapse of the Rana Plaza factory in Bangladesh, in which over 800 workers died, the country has since made efforts to strengthen its safety policies to better protect workers.
Mistreatment
In developing countries with loose labor regulations and a large supply of low-skill, low-cost workers, there are risks of mistreatment of some workers, especially women and children. Poor working conditions and sexual harassment are just some of the forms of mistreatment faced by women in the textile supply chain. Marina Prieto-Carrón shows in her research in Central America that women in sweatshops are not even supplied with toilet paper in the bathroom every day. Such conditions also cost corporations, because people cannot work to their full potential in poor conditions, which in turn affects the global marketplace. Furthermore, when corporations decide to change manufacturing rates or locations in industries that employ mostly women, the workers are often left with no job and no assistance. This kind of sudden reduction or elimination of hours is seen in industries such as textiles and agriculture, both of which employ a higher number of women than men. One solution to the mistreatment of women in the supply chain is greater involvement from the corporation and efforts to regulate the outsourcing of its product.
Global labor and fair trade movements
Several movements, such as the fair trade movement and the anti-sweatshop movement, claim to promote a more socially just global economy. The fair trade movement works towards improving trade, development and production for disadvantaged producers, and has reached 1.6 billion US dollars in annual sales. The movement works to raise consumer awareness of the exploitation of developing countries. Fair trade works under the motto of "trade, not aid", improving the quality of life for farmers and merchants by participating in direct sales, providing better prices and supporting the community. The anti-sweatshop movement, meanwhile, protests the unfair treatment of workers by some companies.
Various transnational organizations advocate for improved labor standards in developing countries. These include labor unions, which are put at a negotiating disadvantage when an employer can relocate or outsource operations to a different country.
Capital flight
Capital flight occurs when assets or money rapidly flow out of a country because of that country's recent increase in unfavorable financial conditions such as taxes, tariffs, labor costs, government debt or capital controls. This is usually accompanied by a sharp drop in the exchange rate of the affected country or a forced devaluation for countries living under fixed exchange rates. Currency declines improve the terms of trade, but reduce the monetary value of financial and other assets in the country. This leads to decreases in the purchasing power of the country's assets.
A 2008 paper published by Global Financial Integrity estimated capital flight to be leaving developing countries at the rate of "$850 billion to $1 trillion a year." But capital flight also affects developed countries. A 2009 article in The Times reported that hundreds of wealthy financiers and entrepreneurs had recently fled the United Kingdom in response to recent tax increases, relocating to low tax destinations such as Jersey, Guernsey, the Isle of Man and the British Virgin Islands. In May 2012 the scale of Greek capital flight in the wake of the first "undecided" legislative election was estimated at €4 billion a week.
Capital flight can cause liquidity crises in directly affected countries and can cause related difficulties in other countries involved in international commerce such as shipping and finance. Asset holders may be forced into distress sales. Borrowers typically face higher loan costs and collateral requirements, compared to periods of ample liquidity, and unsecured debt is nearly impossible to obtain. Typically, during a liquidity crisis, the interbank lending market stalls.
Inequality
While within-country income inequality has increased throughout the globalization period, globally inequality has lessened as developing countries have experienced much more rapid growth. Economic inequality varies between societies, historical periods, economic structures or economic systems, ongoing or past wars, between genders, and between differences in individuals' abilities to create wealth. Among the various numerical indices for measuring economic inequality, the Gini coefficient is the most often cited.
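To make the index concrete, the following is a minimal sketch of computing a Gini coefficient from a list of individual incomes; the figures are purely illustrative and the helper function is a sketch of the standard sorted-sample formula, not drawn from any source cited here.

```python
# Minimal sketch: Gini coefficient from individual incomes (illustrative data).
# Uses the sorted-sample identity G = 2*sum(i * x_i) / (n * sum(x)) - (n + 1)/n,
# where x is sorted ascending and i runs from 1 to n.
def gini(incomes):
    """Return the Gini coefficient: 0 = perfect equality, near 1 = maximal inequality."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([100, 100, 100, 100]))  # 0.0  -- everyone earns the same
print(gini([0, 0, 0, 400]))        # 0.75 -- one person holds all the income
```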
Economic inequality includes equity, equality of outcome and subsequent equality of opportunity. Although earlier studies considered economic inequality as necessary and beneficial, some economists see it as an important social problem. Early studies suggesting that greater equality inhibits economic growth did not account for lags between inequality changes and growth changes. Later studies claimed that one of the most robust determinants of sustained economic growth is the level of income inequality.
International inequality is inequality between countries. Income differences between rich and poor countries are very large, although they are changing rapidly. Per capita incomes in China and India doubled in the prior twenty years, a feat that required 150 years in the US. According to the United Nations Human Development Report for 2013, for countries at varying levels of the UN Human Development Index the GNP per capita grew between 2004 and 2013 from 24,806 to 33,391 or 35% (very high human development), 4,269 to 5,428 or 27% (medium) and 1,184 to 1,633 or 38% (low) PPP$, respectively (PPP$ = purchasing power parity measured in United States dollars).
Certain demographic changes in the developing world after active economic liberalization and international integration resulted in rising welfare and hence reduced inequality. According to Martin Wolf, in the developing world as a whole, life expectancy rose by four months each year after 1970, and the infant mortality rate declined from 107 per thousand in 1970 to 58 in 2000 due to improvements in standards of living and health conditions. Adult literacy in developing countries rose from 53% in 1970 to 74% in 1998, and the much lower illiteracy rate among the young guarantees that rates will continue to fall as time passes. Furthermore, the reduction in fertility rates in the developing world as a whole, from 4.1 births per woman in 1980 to 2.8 in 2000, indicates improved education of women regarding fertility and a choice for fewer children, each receiving more parental attention and investment. Consequently, more prosperous and educated parents with fewer children have chosen to withdraw their children from the labor force to give them opportunities to be educated at school, improving the issue of child labor. Thus, despite a seemingly unequal distribution of income within these developing countries, their economic growth and development have brought about improved standards of living and welfare for the population as a whole.
Economic development spurred by international investment or trade can increase local income inequality as workers with more education and skills can find higher-paying work. This can be mitigated with government funding of education. Another way globalization increases income inequality is by increasing the size of the market available for any particular good or service. This allows the owners of companies that service global markets to reap disproportionately larger profits. This may happen at the expense of local companies that would have otherwise been able to dominate the domestic market, which would have spread profits around to a larger number of owners. On the other hand, globalized stock markets allow more people to invest internationally, and get a share of profits from companies they otherwise could not.
Resource insecurity
A systematic, and possibly first large-scale, cross-sectoral analysis of water, energy and land insecurity in 189 countries, linking national and sector consumption to sources, showed that countries and sectors are highly exposed to over-exploited, insecure, and degraded supplies of such resources. The 2020 study finds that economic globalization has decreased the security of global supply chains, with most countries exhibiting greater exposure to resource risks via international trade – mainly from remote production sources – and that diversifying trading partners is unlikely to help nations and sectors reduce these risks or improve their resource self-sufficiency.
Competitive advantages
Businesses in developed countries tend to be more highly automated, have more sophisticated technology and techniques, and have better national infrastructure. For these reasons and sometimes due to economies of scale, they can sometimes out-compete similar businesses in developing countries. This is a substantial issue in international agriculture, where Western farms tend to be large and highly productive due to agricultural machinery, fertilizer, and pesticides; but developing-country farms tend to be smaller and rely heavily on manual labor. Conversely, cheaper manual labor in developing countries allowed workers there to out-compete workers in higher-wage countries for jobs in labor-intensive industries. As the theory of competitive advantage predicts, instead of each country producing all the goods and services it needs domestically, a country's economy tends to specialize in certain areas where it is more productive (though in the long term the differences may be equalized, resulting in a more balanced economy).
Tax havens
A tax haven is a state, country or territory where certain taxes are levied at a low rate or not at all, which are used by businesses for tax avoidance and tax evasion. Individuals and/or corporate entities can find it attractive to move themselves to areas with reduced taxation. This creates a situation of tax competition among governments. Taxes vary substantially across jurisdictions. Sovereign states have theoretically unlimited powers to enact tax laws affecting their territories, unless limited by previous international treaties. The central feature of a tax haven is that its laws and other measures can be used to evade or avoid the tax laws or regulations of other jurisdictions. In its December 2008 report on the use of tax havens by American corporations, the U.S. Government Accountability Office regarded the following characteristics as indicative of a tax haven: nil or nominal taxes; lack of effective exchange of tax information with foreign tax authorities; lack of transparency in the operation of legislative, legal or administrative provisions; no requirement for a substantive local presence; and self-promotion as an offshore financial center.
A 2012 report from the Tax Justice Network estimated that between US$21 trillion and $32 trillion is sheltered from taxes in tax havens worldwide. If such hidden offshore assets are considered, many countries with governments nominally in debt would be net creditor nations. However, the tax policy director of the Chartered Institute of Taxation expressed skepticism over the accuracy of the figures. Daniel J. Mitchell of the US-based Cato Institute says that the report also assumes, when considering notional lost tax revenue, that 100% of the money deposited offshore is evading payment of tax.
The tax shelter benefits result in a tax incidence disadvantaging the poor. Many tax havens are thought to have connections to "fraud, money laundering and terrorism." Accountants' opinions on the propriety of tax havens have been evolving, as have the opinions of their corporate users, governments, and politicians, although their use by Fortune 500 companies and others remains widespread. Reform proposals centering on the Big Four accountancy firms have been advanced. Some governments appear to be using computer spyware to scrutinize corporations' finances.
Cultural effects
Economic globalization may affect culture. Populations may mimic the international flow of capital and labor markets in the form of immigration and the merger of cultures. As foreign resources and economic measures reach different native cultures, they may cause the assimilation of a native people. As these populations are exposed to the English language, computers, Western music, and North American culture, changes are noted in shrinking family sizes, migration to larger cities, more casual dating practices, and transformed gender roles.
Yu Xintian noted two contrary trends in culture due to economic globalization. Yu argued that culture and industry not only flow from the developed world to the rest, but trigger an effort to protect local cultures. He notes that economic globalization began after World War II, whereas internationalization began over a century ago.
George Ritzer wrote about the McDonaldization of society and how fast food businesses spread throughout the United States and the rest of the world, leading other places to adopt fast food culture. Ritzer describes other businesses, such as the British cosmetics company The Body Shop, that have copied McDonald's business model for expansion and influence. In 2006, 233 of 280 (over 80%) of new McDonald's restaurants opened outside the US. In 2007, Japan had 2,828 McDonald's locations.
Global media companies export information around the world. This creates a mostly one-way flow of information, and exposure to mostly western products and values. Companies like CNN, Reuters and the BBC dominate the global airwaves with western points of view. Other media news companies such as Qatar's Al Jazeera network offer a different point of view, but reach and influence fewer people.
Migration
"With an estimated 210 million people living outside their country of origin (International Labour Organization [ILO] 2010), international migration has touched the lives of almost everyone in both the sending and receiving countries of the Global South and the Global North". Because of advances made in technology, human beings as well as goods are able to move through different countries and regions with relative ease.
See also
Capitalist peace
Economic union
Fiscal localism
Foreign direct investment
Free trade
Globalization
Internationalization
Jihad vs. McWorld
Lists of free trade agreements
Military globalization
Mundialization
Neoliberalism
Trade globalization
World economy
Notes
References
External links | 0.773567 | 0.996336 | 0.770733 |
Trend analysis | Trend analysis is the widespread practice of collecting information and attempting to spot a pattern. In some fields of study, the term has more formally defined meanings.
Although trend analysis is often used to predict future events, it can also be used to estimate uncertain events in the past, such as how many ancient kings probably ruled between two dates, based on data such as the average length of reign of other known kings.
Project management
In project management, trend analysis is a mathematical technique that uses historical results to predict future outcomes. This is achieved by tracking variances in cost and schedule performance. In this context, it is a project management quality control tool.
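As a rough illustration of what such variance tracking can look like, the sketch below records cost and schedule variances over several reporting periods and fits a line to project the cost-variance trend forward. It uses the standard earned-value quantities (planned value PV, earned value EV, actual cost AC); all period figures are hypothetical, and this is one simple way such tracking might be done rather than a prescribed method.

```python
# Minimal sketch of earned-value trend analysis; all period figures are hypothetical.
# Cost variance CV = EV - AC; schedule variance SV = EV - PV.
import numpy as np

# One tuple per reporting period: (planned value, earned value, actual cost).
periods = [(100, 95, 105), (200, 185, 215), (300, 270, 330), (400, 350, 445)]

cv = [ev - ac for pv, ev, ac in periods]  # cost variance per period
sv = [ev - pv for pv, ev, ac in periods]  # schedule variance per period

# A least-squares line over past cost variances projects the trend forward.
t = np.arange(len(periods))
slope, intercept = np.polyfit(t, cv, 1)

print("cost variances:", cv)        # [-10, -30, -60, -95]
print("schedule variances:", sv)    # [-5, -15, -30, -50]
print("projected CV next period: %.1f" % (slope * len(periods) + intercept))
```

A steadily widening negative cost variance, as in this toy data, is exactly the kind of trend that would prompt corrective action on a project.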
Statistics
In statistics, trend analysis often refers to techniques for extracting an underlying pattern of behavior in a time series which would otherwise be partly or nearly completely hidden by noise. If the trend can be assumed to be linear, trend analysis can be undertaken within a formal regression analysis, as described in trend estimation. If the trend has a shape other than linear, trend testing can be done by non-parametric methods, e.g. the Mann-Kendall test, which is a version of the Kendall rank correlation coefficient. Smoothing can also be used for testing and visualization of nonlinear trends.
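For illustration, here is a minimal sketch of the Mann-Kendall test mentioned above. It assumes SciPy is available, ignores the tie correction to the variance, and uses a synthetic series; it follows the standard textbook formulation and is not a reference implementation.

```python
# Minimal Mann-Kendall trend test (no tie correction); synthetic example data.
import numpy as np
from scipy.stats import norm

def mann_kendall(data):
    """Return the Mann-Kendall S statistic and a two-sided p-value."""
    x = np.asarray(data, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs (i < j).
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # Variance of S under the null hypothesis of no trend, assuming no ties.
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # Continuity-corrected standard normal score.
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, p

# A noisy upward trend should yield a positive S and a small p-value.
rng = np.random.default_rng(0)
series = 0.5 * np.arange(30) + rng.normal(0, 2, 30)
print(mann_kendall(series))
```

A small p-value rejects the null hypothesis of no monotonic trend, and the sign of S indicates the trend's direction.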
Text
Trend analysis can also be applied to word usage, tracing how the frequency of words changes over time (diachronic analysis) in order to find neologisms or archaisms. It relates to diachronic linguistics, a field of linguistics which examines how languages change over time. Google provides the tool Google Trends to explore how particular terms are trending in internet searches. There are also tools which provide diachronic analysis for particular texts, comparing word usage in each period of the text (based on timestamped marks); see e.g. the Sketch Engine diachronic analysis (trends) feature.
See also
Cool-hunting
Extrapolation
Horizon scanning
Technology forecasting
Weather forecasting
Notes
External links
Trend Analysis in Polls, Topics, Opinions and Answers
Megatrends and connected trends download files
Regression with time series structure
Research methods
Project management techniques
Futures techniques | 0.779875 | 0.988257 | 0.770717 |
Maritime history | Maritime history is the study of human interaction with and activity at sea. It covers a broad thematic element of history that often uses a global approach, although national and regional histories remain predominant. As an academic subject, it often crosses the boundaries of standard disciplines, focusing on understanding humankind's various relationships to the oceans, seas, and major waterways of the globe. Nautical history records and interprets past events involving ships, shipping, navigation, and seafarers.
Maritime history is the broad overarching subject that includes fishing, whaling, international maritime law, naval history, the history of ships, ship design, shipbuilding, the history of navigation, the history of the various maritime-related sciences (oceanography, cartography, hydrography, etc.), sea exploration, maritime economics and trade, shipping, yachting, seaside resorts, the history of lighthouses and aids to navigation, maritime themes in literature, maritime themes in art, the social history of sailors and passengers and sea-related communities. There are a number of approaches to the field, sometimes divided into two broad categories: Traditionalists, who seek to engage a small audience of other academics, and Utilitarians, who seek to influence policy makers and a wider audience.
Historiography
Historians from many lands have published monographs, popular and scholarly articles, and collections of archival resources. A leading journal is International Journal of Maritime History, a fully refereed scholarly journal published twice a year by the International Maritime Economic History Association. Based in Canada with an international editorial board, it explores the maritime dimensions of economic, social, cultural, and environmental history. For a broad overview, see the four-volume encyclopedia edited by John B. Hattendorf, Oxford Encyclopedia of Maritime History (Oxford, 2007). It contains over 900 articles by 400 scholars and runs to 2,900 pages. Other major reference resources are Spencer Tucker, ed., Naval Warfare: An International Encyclopedia (3 vol., ABC-CLIO, 2002), with 1,500 articles in 1,231 pages, and I. C. B. Dear and Peter Kemp, eds., Oxford Companion to Ships and the Sea (2nd ed. 2005), with 2,600 articles in 688 pages.
Typically, studies of merchant shipping and of navies are seen as separate fields. Inland waterways are included within 'maritime history,' especially inland seas such as the Great Lakes of North America, and major navigable rivers and canals worldwide.
One approach to maritime history writing has been nicknamed 'rivet counting' because of a focus on the minutiae of the vessel. However, revisionist scholars are creating new turns in the study of maritime history. These include a post-1980s turn towards the study of human users of ships (which involves sociology, cultural geography, gender studies and narrative studies), and a post-2000 turn towards seeing sea travel as part of the wider history of transport and mobilities. This move is sometimes associated with Marcus Rediker and Black Atlantic studies, but most recently has emerged from the International Association for the History of Transport, Traffic and Mobilities (T2M).
See also: Historiography-related articles below
Prehistoric times
Watercraft such as rafts and boats have been used since far back in prehistoric times, possibly even by Homo erectus more than a million years ago to cross straits between landmasses.
Little evidence remains that would pinpoint when the first seafarer made their journey. We know, for instance, that a sea voyage had to have been made to reach Greater Australia (Sahul) 50,000 or more years ago. Functional maritime technology was required to progress between the many islands of Wallacea before making this crossing. What seafaring predated the milestone of the first settling of Australia is unknown. One of the oldest known boats to be found is the Pesse canoe, whose construction has been carbon-dated to between 8040 and 7510 BCE. The Pesse canoe is the oldest physical object that can date the use of watercraft, but the oldest depiction of a watercraft is from Norway: the rock art at Valle depicts a boat more than 4 meters long and is dated to 10,000 to 11,000 years old.
Ancient times
Throughout history sailing has been instrumental in the development of civilization, affording humanity greater mobility than travel over land, whether for trade, transport or warfare, and the capacity for fishing. The earliest depiction of a maritime sailing vessel is from the Ubaid period of Mesopotamia in the Persian Gulf, from around 3500 to 3000 BCE. These vessels were depicted in clay models and painted disks. They were made from bundled reeds encased in a lattice of ropes. Remains of barnacle-encrusted bituminous amalgams have also been recovered, which are interpreted to have been part of the waterproof coating applied to these vessels. The depictions lack details, but an image of a vessel on a shard of pottery shows evidence of what could be bipod masts and a sail, which would make it the earliest known evidence of the use of such technology. The location of the sites indicates that the Ubaid culture was engaging in maritime trade with Neolithic Arabian cultures along the coasts of the Persian Gulf for high-value goods. Pictorial representations of sails are also known from Ancient Egypt, dated to circa 3100 BCE. The earliest seaborne trading route, however, is known from the 7th millennium BCE in the Aegean Sea. It involved the seaborne movement of obsidian by an unknown Neolithic European seafaring people. The obsidian was mined on the volcanic island of Milos and transported to various parts of the Balkans, Anatolia, and Cyprus, where it was worked into obsidian blades. However, the nature of the seafaring technologies involved has not been preserved.
Austronesians began dispersing from Taiwan across Maritime Southeast Asia around 3000 BCE. This dispersal spread into the islands of the Pacific, steadily advanced eastward across the ocean, and culminated in the settlement of Hawaii and New Zealand. Distinctive maritime technology was used for this, including the lashed-lug boatbuilding technique, the catamaran, and the crab claw sail, together with extensive navigation techniques. This allowed them to colonize a large part of the Indo-Pacific region during the Austronesian expansion. Prior to the 16th-century Colonial Era, Austronesians were the most widespread ethnolinguistic group, spanning half the planet from Easter Island in the eastern Pacific Ocean to Madagascar in the western Indian Ocean.
The Ancient Egyptians had knowledge of sail construction. The Greek historian Herodotus states that Necho II sent out an expedition of Phoenicians, which in two and a half years sailed from the Red Sea around Africa to the mouth of the Nile. As they sailed south and then west, they observed that the mid-day sun was to the north. Their contemporaries did not believe them, but modern historians take this as evidence that the expedition did sail south of the equator: crossing the equator reverses the apparent direction of the noon sun, along with the seasons, exactly as the Phoenicians reported.
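The geometry behind that detail is easy to state. As a minimal sketch (an illustration added here, not part of the historical account; the function name is hypothetical), the noon sun lies toward the north exactly when the observer's latitude is south of the sun's declination:

    def noon_sun_direction(latitude_deg, declination_deg):
        """Which horizon the noon sun lies toward for a given observer."""
        if latitude_deg > declination_deg:
            return "south"  # observer is north of the subsolar point
        if latitude_deg < declination_deg:
            return "north"  # observer is south of the subsolar point
        return "overhead"

    # At 30 N (Egypt) on an equinox the noon sun is to the south;
    # at 30 S (rounding southern Africa) it is to the north.
    print(noon_sun_direction(30.0, 0.0))   # -> south
    print(noon_sun_direction(-30.0, 0.0))  # -> north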
Age of navigation
By 1000 BCE, Austronesians in Island Southeast Asia were already engaging in regular maritime trade with China, South Asia, and the Middle East, introducing sailing technologies to these regions. They also facilitated an exchange of cultivated crop plants, introducing Pacific coconuts, bananas, and sugarcane to the Indian subcontinent, some of which eventually reached Europe via overland Persian and Arab traders. A Chinese record from 200 AD describes one of the Austronesian ships, called kunlun bo or k'unlun po (崑崙舶, lit. "ship of the Kunlun people"). It may also have been the "kolandiaphonta" known to the Greeks. These ships had 4–7 masts and were able to sail against the wind thanks to their tanja sails. They reached as far as Madagascar by ca. 50–500 AD and Ghana in the eighth century AD.
Northern European Vikings also developed oceangoing vessels and depended heavily upon them for travel and population movements prior to 1000 AD, with the oldest known examples being longships dated to around 190 AD from the Nydam Boat site. In early modern India and Arabia the lateen-sail ship known as the dhow was used on the waters of the Red Sea, Indian Ocean, and Persian Gulf.
China started building sea-going ships in the 10th century during the Song dynasty. Chinese seagoing ships drew on Austronesian designs, whose traders had been visiting the Eastern Han dynasty since the second century AD. They purportedly reached massive sizes by the Yuan dynasty in the 14th century, and by the Ming dynasty, they were used by Zheng He to send expeditions to the Indian Ocean.
Water was the cheapest and usually the only way to transport goods in bulk over long distances. In addition, it was the safest way to transport commodities. The long trade routes created popular trading ports called entrepôts. There were three popular entrepôts in Southeast Asia: Malacca in southwestern Malaya, Hoi An in Vietnam, and Ayutthaya in Thailand. These super centers for trade were ethnically diverse, because ports served as a midpoint of voyages and trade rather than a destination. The entrepôts helped link the coastal cities to the "hemispheric trade nexus". The increase in sea trade initiated a cultural exchange among traders. From 1400 to 1600, a period known as the "age of commerce", the Chinese population doubled from 75 million to 150 million, fueled in part by imported goods.
The mariner's astrolabe was the chief tool of celestial navigation in early modern maritime history. This scaled-down version of the instrument used by astronomers served as a navigational aid to measure latitude at sea, and was employed by Portuguese sailors no later than 1481.
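The reduction of such a noon sight to a latitude is a one-line formula. The sketch below is a minimal illustration (assumed for this article, not a documented period procedure; it supposes the sun bears toward the equator from a northern-hemisphere observer):

    def latitude_from_noon_sight(altitude_deg, declination_deg):
        """Latitude (degrees north) from the sun's noon altitude, assuming
        the sun bears due south of the observer at meridian passage."""
        zenith_distance = 90.0 - altitude_deg
        return zenith_distance + declination_deg

    # Sun observed 40 degrees high at noon near an equinox (declination ~0)
    # puts the observer at about 50 degrees north.
    print(latitude_from_noon_sight(40.0, 0.0))  # -> 50.0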
The precise date of the discovery of the magnetic needle compass is undetermined, but the earliest attestation of the device for navigation was in the Dream Pool Essays by Shen Kuo (1088). Shen was also the first to document the concept of true north, discerning a compass's magnetic declination from the direction of the physical North Magnetic Pole. The earliest iterations of the compass consisted of a floating, magnetized lodestone needle that spun around in a water-filled bowl until it reached alignment with Earth's magnetic poles. Chinese sailors were using the "wet" compass to determine the southern cardinal direction no later than 1117. The first use of a magnetized needle for seafaring navigation in Europe was written of by Alexander Neckam, circa 1190 AD. Around 1300 AD, the pivot-needle dry-box compass was invented in Europe; it pointed north, similar to the modern-day mariner's compass. In Europe the device also included a compass card, which was later adopted by the Chinese through contact with Japanese pirates in the 16th century.
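The correction that Shen Kuo's observation makes possible is a single addition. As a minimal sketch (illustrative only, with an assumed sign convention of easterly declination positive):

    def true_bearing(magnetic_deg, declination_deg):
        """Convert a magnetic compass bearing to a true bearing (degrees)."""
        return (magnetic_deg + declination_deg) % 360.0

    # With 10 degrees of easterly declination, a compass reading of 355
    # corresponds to a true bearing of 5 degrees.
    print(true_bearing(355.0, 10.0))  # -> 5.0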
The oldest known map dates back to 12,000 BC; it was discovered in a Spanish cave by Pilar Utrilla. Early maps were oriented with east at the top, a practice believed to have begun in the Middle East. Religion played a role in the drawing of maps. Countries that were predominantly Christian during the Middle Ages placed east at the top of their maps, in part due to Genesis: "the Lord God planted a garden toward the east in Eden". This led to maps bearing the image of Jesus Christ and the Garden of Eden at the top. Early latitude and longitude coordinate tables were made chiefly for the purpose of determining the direction of Mecca for prayer. The next progression of maps came with the portolan chart. This was the first type of map that placed north at the top and was drawn to scale, with landmarks drawn in great detail.
Ships and vessels
Various ships were in use during the Middle Ages. The jong, a type of large sailing ship from Nusantara, was built using wooden dowels without iron nails and multiple planks to endure heavy seas. The chuan (Chinese junk) design was both innovative and adaptable. Junk vessels employed mat-and-batten sails that could be raised and lowered in segments, as well as set at varying angles. The longship was a type of ship developed over a period of centuries and perfected by its most famous users, the Vikings, around the 9th century. The ships were clinker-built, using overlapping wooden strakes. The knarr, a relative of the longship, was a type of cargo vessel. It differed from the longship in that it was larger and relied solely on its square-rigged sail for propulsion. The cog was a design which is believed to have evolved from (or at least been influenced by) the longship, and was in wide use by the 12th century. It too used the clinker method of construction. The caravel was a ship invented in Islamic Iberia and used in the Mediterranean from the 13th century. Unlike the longship and cog, it used a carvel method of construction. It could be either square rigged (Caravela Redonda) or lateen rigged (Caravela Latina). The carrack was another type of ship invented in the Mediterranean in the 15th century. It was a larger vessel than the caravel. Columbus's ship, the Santa María, was a famous example of a carrack.
Arab age of discovery
The Arab Empire maintained and expanded a wide trade network across parts of Asia, Africa and Europe. This helped establish the Arab Empire (including the Rashidun, Umayyad, Abbasid and Fatimid caliphates) as the world's leading economic power throughout the 8th–13th centuries, according to the political scientist John M. Hobson. The Belitung wreck is the oldest discovered Arab ship to have reached Asian seas, dating back over 1,000 years.
Apart from the Nile, Tigris and Euphrates, navigable rivers in the Islamic regions were uncommon, so transport by sea was very important. Islamic geography and navigational sciences were highly developed, making use of a magnetic compass and a rudimentary instrument known as a kamal, used for celestial navigation and for measuring the altitudes and latitudes of the stars. Combined with detailed maps of the period, these tools enabled sailors to cross oceans rather than skirt along the coast. According to the political scientist John M. Hobson, the origins of the caravel, used for long-distance travel by the Spanish and Portuguese since the 15th century, date back to the qarib used by Andalusian explorers by the 13th century.
Control of sea routes dictated the political and military power of the Islamic nation. Islamic rule extended from Spain to China, and maritime trade linked these vast territories, which spanned the Mediterranean Sea to the Indian Ocean. The Arabs were among the first to sail the Indian Ocean. Long-distance trade allowed the movement of "armies, craftsmen, scholars, and pilgrims". Sea trade was an important factor not just for coastal ports and cities like Istanbul, but also for inland centers such as Baghdad. It enabled the distribution of food and supplies to feed entire populations in the Middle East, and long-distance trade imported raw materials for building, luxury goods for the wealthy, and new inventions.
Hanseatic League
The Hanseatic League was an alliance of trading guilds that established and maintained a trade monopoly over the Baltic Sea, to a certain extent the North Sea, and most of Northern Europe for a time in the Late Middle Ages and the early modern period, between the 13th and 17th centuries. Historians generally trace the origins of the League to the foundation of the Northern German town of Lübeck, established in 1158/1159 after the capture of the area from the Count of Schauenburg and Holstein by Henry the Lion, the Duke of Saxony. Exploratory trading adventures, raids and piracy had occurred earlier throughout the Baltic (see Vikings)—the sailors of Gotland sailed up rivers as far away as Novgorod, for example—but the scale of the international economy in the Baltic area remained insignificant before the growth of the Hanseatic League. German cities achieved domination of trade in the Baltic with striking speed over the next century, and Lübeck became a central node in all the seaborne trade that linked the areas around the North Sea and the Baltic Sea.
The 15th century saw the climax of Lübeck's hegemony. (Visby, one of the midwives of the Hanseatic League in 1358, declined to become a member. Visby dominated trade in the Baltic before the Hanseatic League, and with its monopolistic ideology, suppressed the Gotlandic free-trade competition.) By the late 16th century, the League imploded and could no longer deal with its own internal struggles, the social and political changes that accompanied the Reformation, the rise of Dutch and English merchants, and the incursion of the Ottoman Turks upon its trade routes and upon the Holy Roman Empire itself. Only nine members attended the last formal meeting in 1669 and only three (Lübeck, Hamburg and Bremen) remained as members until its final demise in 1862.
Italian maritime republics
The maritime republics, also called merchant republics, were Italian thalassocratic port cities which, starting from the Middle Ages, enjoyed political autonomy and economic prosperity brought about by their maritime activities. The term, coined during the 19th century, generally refers to four Italian cities, whose coats of arms have been shown since 1947 on the flags of the Italian Navy and the Italian Merchant Navy: Amalfi, Genoa, Pisa, and Venice. In addition to the four best-known cities, Ancona, Gaeta, Noli, and, in Dalmatia, Ragusa are also considered maritime republics; in certain historical periods, they were of no lesser importance than some of the better-known cities.
Scattered across the Italian peninsula, the maritime republics were important not only for the history of navigation and commerce: through them spread precious goods otherwise unobtainable in Europe, new artistic ideas, and news concerning distant countries. From the 10th century, they built fleets of ships both for their own protection and to support extensive trade networks across the Mediterranean, giving them an essential role in reestablishing contacts between Europe, Asia, and Africa, which had been interrupted during the early Middle Ages. They also had an essential role in the Crusades and produced renowned explorers and navigators such as Marco Polo and Christopher Columbus.
Over the centuries, the maritime republics—both the best known and the lesser known but not always less important—experienced fluctuating fortunes. In the 9th and 10th centuries, this phenomenon began with Amalfi and Gaeta, which soon reached their heyday. Meanwhile, Venice began its gradual ascent, while the other cities were still experiencing the long gestation that would lead them to their autonomy and to follow up on their seafaring vocation. After the 11th century, Amalfi and Gaeta declined rapidly, while Genoa and Venice became the most powerful republics. Pisa followed and experienced its most flourishing period in the 13th century, and Ancona and Ragusa allied to resist Venetian power. Following the 14th century, while Pisa declined to the point of losing its autonomy, Venice and Genoa continued to dominate navigation, followed by Ragusa and Ancona, which experienced their golden age in the 15th century. In the 16th century, with Ancona's loss of autonomy, only the republics of Venice, Genoa, and Ragusa remained, which still experienced great moments of splendor until the mid-17th century, followed by over a century of slow decline that ended with the Napoleonic invasion.
Somali maritime enterprise
During the Age of the Ajuran, the Somali sultanates and republics of Merca, Mogadishu, Barawa, Hobyo and their respective ports flourished. They had a lucrative foreign commerce, with ships sailing to and from Arabia, India, Venice, Persia, Egypt, Portugal and as far away as China. In the 16th century, Duarte Barbosa noted that many ships from the Kingdom of Cambay in what is modern-day India sailed to Mogadishu with cloth and spices, for which they received in return gold, wax and ivory. Barbosa also highlighted the abundance of meat, wheat, barley, horses, and fruit on the coastal markets, which generated enormous wealth for the merchants.
In the early modern period, successor states of the Adal and Ajuran empires began to flourish in Somalia, continuing the seaborne trade established by previous Somali empires. The rise of the 19th-century Gobroon dynasty in particular saw a rebirth in Somali maritime enterprise. During this period, the Somali agricultural output to Arabian markets was so great that the coast of Somalia came to be known as the Grain Coast of Yemen and Oman.
Age of Discovery
The Age of Discovery was a period from the early 15th century into the early 17th century, during which European ships traveled around the world to search for new trading routes after the Fall of Constantinople. Historians often use the term for the pioneering Portuguese and later Spanish long-distance maritime voyages in search of alternative trade routes to "the East Indies", driven by the trade in gold, silver and spices. In the process, Europeans encountered peoples and mapped lands previously unknown to them. The Portuguese discovery of the sea route to India changed Europe's view of the world.
Christopher Columbus was a navigator and maritime explorer who is one of several historical figures credited as the discoverer of the Americas. It is generally believed that he was born in Genoa, although other theories and possibilities exist. Columbus' voyages across the Atlantic Ocean began a European effort at exploration and colonization of the Western Hemisphere. While history places great significance on his first voyage of 1492, he did not actually reach the mainland until his third voyage in 1498. Likewise, he was not the earliest European explorer to reach the Americas, as there are accounts of European transatlantic contact prior to 1492. Nevertheless, Columbus's voyage came at a critical time of growing national imperialism and economic competition between developing nation states seeking wealth from the establishment of trade routes and colonies. Therefore, the period before 1492 is known as Pre-Columbian.
John Cabot was a Genoese navigator and explorer commonly credited as one of the first early modern Europeans to land on the North American mainland, aboard the Matthew in 1497. Sebastian Cabot was an Italian explorer and may have sailed with his father John Cabot in May 1497. John Cabot and perhaps Sebastian, sailing from Bristol, took their small fleet along the coasts of a "New Found Land". There is much controversy over where exactly Cabot landed, but two likely locations that are often suggested are Nova Scotia and Newfoundland. Cabot and his crew (including perhaps Sebastian) mistook this place for China, without finding the passage to the east they were looking for. Some scholars maintain that the name America comes from Richard Amerik, a Bristol merchant and customs officer, who is claimed on very slender evidence to have helped finance the Cabot voyages.
Jacques Cartier was a French navigator who first explored and described the Gulf of St. Lawrence and the shores of the Saint Lawrence River. The name he gave the region, Canada, likely comes from the Huron-Iroquois word "kanata", meaning "village" or "settlement". Juan Fernández was a Spanish explorer and navigator. Probably between 1563 and 1574 he discovered the Juan Fernández Islands west of Valparaíso, Chile. He also discovered the Pacific islands of San Félix and San Ambrosio (1574). Among the other famous explorers of the period were Vasco da Gama, Pedro Álvares Cabral, Yermak, Juan Ponce de León, Francisco Coronado, Juan Sebastián Elcano, Bartolomeu Dias, Ferdinand Magellan, Willem Barentsz, Abel Tasman, Jean Alfonse, Samuel de Champlain, Willem Jansz, Captain James Cook, Henry Hudson, and Giovanni da Verrazzano.
Peter Martyr d'Anghiera was an Italian-born historian of Spain and of the discoveries of her representatives during the Age of Exploration. He wrote the first accounts of explorations in Central and South America in a series of letters and reports, grouped in the original Latin publications of 1511–1530 into sets of ten chapters called "decades." His Decades are thus of great value in the history of geography and discovery. His De Orbe Novo (published 1530; "On the New World") describes the first contacts of Europeans and Native Americans and contains, for example, the first European reference to India rubber.
Richard Hakluyt was an English writer, and is principally remembered for his efforts in promoting and supporting the settlement of North America by the English through his works, notably Divers Voyages Touching the Discoverie of America (1582) and The Principal Navigations, Voiages, Traffiques and Discoueries of the English Nation (1598–1600).
European expansion
Although Europe is the world's second-smallest continent in terms of area, it has a very long coastline, and has arguably been influenced more by its maritime history than any other continent. Europe is uniquely situated between several navigable seas and intersected by navigable rivers running into them, in a way which greatly facilitated maritime traffic and commerce.
When the carrack and then the caravel were developed by the Portuguese, European thoughts returned to the fabled East. These explorations had a number of causes. Monetarists believe the main reason the Age of Exploration began was a severe shortage of bullion in Europe. The European economy was dependent on gold and silver currency, but low domestic supplies had plunged much of Europe into a recession. Another factor was the centuries-long conflict between the Iberians and the Muslims to the south. The eastern trade routes were controlled by the Ottoman Empire after the Turks took control of Constantinople in 1453, and they barred Europeans from those trade routes. The ability to outflank the Muslim states of North Africa was seen as crucial to Iberian survival. At the same time, the Iberians learnt much from their Arab neighbours. The carrack and caravel both incorporated the Mediterranean lateen sail that made ships far more manoeuvrable. It was also through the Arabs that Ancient Greek geography was rediscovered, for the first time giving European sailors some idea of the shape of Africa and Asia.
European colonization
In 1492, Christopher Columbus reached the Americas, after which European exploration and colonization rapidly expanded. The post-1492 era is known as the Columbian Exchange period. The first conquests were made by the Spanish, who quickly conquered most of South and Central America and large parts of North America. The Portuguese took Brazil. The British, French and Dutch conquered islands in the Caribbean Sea, many of which had already been conquered by the Spanish or depopulated by disease. Early European colonies in North America included Spanish Florida, the British settlements in Virginia and New England, French settlements in Quebec and Louisiana, and Dutch settlements in New Netherland. Denmark-Norway revived its former colonies in Greenland from the 18th until the 20th century, and also colonised a few of the Virgin Islands.
From its very outset, Western colonialism was operated as a joint public-private venture. Columbus' voyages to the Americas were partially funded by Italian investors, but whereas the Spanish state maintained a tight rein on trade with its colonies (by law, the colonies could only trade with one designated port in the mother country, and treasure was brought back in special convoys), the English, French and Dutch granted what were effectively trade monopolies to joint-stock companies such as the British East India Company, the Dutch East India Company and the Hudson's Bay Company.
In the exploration of Africa, there was a proliferation of conflicting European claims to African territory. By the 15th century, Europeans explored the African coast in search of a water route to India. These expeditions were mostly conducted by the Portuguese, who had been given papal authority to exploit all non-Christian lands of the Eastern Hemisphere. The Europeans set up coastal colonies to purchase or abduct slaves for the Atlantic slave trade, but the interior of the continent remained unexplored until the 19th century. This was a cumulative period that resulted in European colonial rule in Africa and altered the future of the African continent.
Imperialism in Asia traces its roots back to the late 15th century with a series of voyages that sought a sea passage to India in the hope of establishing direct trade between Europe and Asia in spices. Before 1500 European economies were largely self-sufficient, only supplemented by minor trade with Asia and Africa. Within the next century, however, European and Asian economies were slowly becoming integrated through the rise of new global trade routes; and the early thrust of European political power, commerce, and culture in Asia gave rise to a growing trade in lucrative commodities—a key development in the rise of today's modern world capitalist economy. European colonies in India were set up by several European nations beginning in the early 16th century. Rivalry between reigning European powers saw the entry of the Dutch, British and French among others.
Ming maritime world
Zheng He voyages
In the 15th century, before the European Age of Discovery began, the Chinese Ming dynasty carried out maritime operations that, like the Europeans' later expeditions, were primarily intended to expand power, increase trade, and in some instances forcibly subdue local populations.
In 1405 Zheng He, a Muslim eunuch, was ordered by the Ming dynasty to lead a fleet of over 27,000 sailors and anywhere between 62 and 300 ships, beginning a period of expeditions which would last 33 years. During his seven voyages, Zheng He visited over 30 countries spread out across the Indian Ocean. Under the Yongle Emperor, this naval undertaking served primarily as a deliverer of letters demanding tribute and allegiance to the Middle Kingdom; gifts were the first approach to gaining a country's favor, but if circumstances required it Zheng He's fleet would resort to violence. The result was a successful connection to 48 new tribute states and an influx of over 180 new trade goods, many of them gifts. These expeditions expanded China's diplomatic supremacy in the region and strengthened its economic ties in the area. When they ended, China's maritime strength diminished, and the country lacked a powerful navy for centuries after.
Other Ming maritime activity
The end of the imperially sponsored voyages, however, in no way meant that Ming people no longer put to sea. Merchants, pirates, fishermen, and others depended on boats and ships for their livelihood, and immigration to Southeast Asia, both permanent and temporary, continued throughout Ming times. Because Chinese merchants and Chinese immigrants to Southeast Asia were the main players in commerce in the South China Sea, they were critical to the Spanish trade in Manila. Not only did Chinese merchants supply the goods the Spanish bought with their American silver, but Chinese shipbuilders built the famous galleons that carried those goods and that silver back and forth across the Pacific twice a year.
Clipper route
In the 19th century, the clipper route was established by clipper ships between Europe and the Far East, Australia and New Zealand. The route ran from west to east through the Southern Ocean, in order to make use of the strong westerly winds of the Roaring Forties. Many ships and sailors were lost in the heavy conditions along the route, particularly at Cape Horn, which the clippers had to round on their return to Europe. In September 1578, Sir Francis Drake, in the course of his circumnavigation of the world, discovered Cape Horn. This discovery went unused for some time, as ships continued to use the known passage through the Strait of Magellan. By the early 17th century, the Dutch merchant Jacob le Maire, together with navigator Willem Schouten, set off to investigate Drake's suggestion of a route to the south of Tierra del Fuego. At the time it was discovered, the Horn was believed to be the southernmost point of Tierra del Fuego; the unpredictable violence of weather and sea conditions in the Drake Passage made exploration difficult, and it was only in 1624 that the Horn was discovered to be an island. It is an interesting testament to the difficulty of conditions there that Antarctica, only 650 kilometres (400 mi) away across the Drake Passage, was discovered as recently as 1820, despite the passage having been used as a major shipping route for 200 years. The clipper route fell into commercial disuse with the introduction of steam ships and the opening of the Suez and Panama Canals.
End of exploration
The Age of Exploration is generally said to have ended in the early 17th century. By this time European vessels were well enough built and their navigators competent enough to travel to virtually anywhere on the planet. Exploration, of course, continued. The Arctic and Antarctic seas were not explored until the 19th century.
Age of Sail
The Age of Sail has its origins in ancient seafaring, during the rise of ancient civilizations. Linking the Far East with Mesopotamia, the Cradle of Civilization, the Arabian Sea has been an important marine trade route since the era of coastal sailing vessels, from possibly as early as the third millennium BC, certainly the late second millennium BC, up to and including the later days of the Age of Sail. By the time of Julius Caesar, several well-established combined land-sea trade routes depended upon water transport through the sea around the rough inland terrain features to its north. These routes usually began in the Far East with transshipment via historic Bharuch (Bharakuccha), traversed past the inhospitable coast of today's Iran, then split around Hadhramaut into two streams, north into the Gulf of Aden and thence into the Levant, or south into Alexandria via Red Sea ports such as Axum. Each major route involved transshipping to pack-animal caravans, travel through desert country, and the risk of bandits and extortionate tolls by local potentates. The southern coastal route past the rough country in the southern Arabian peninsula (Yemen and Oman today) was significant, and the Egyptian pharaohs built several shallow canals to service the trade, one more or less along the route of today's Suez Canal, and another from the Red Sea to the Nile River, both shallow works that were swallowed up by huge sand storms in antiquity.
In the modern western countries, the European "Age of Sail" is the period in which international trade and naval warfare were both dominated by sailing ships. The age of sail mostly coincided with the Age of Discovery, from the 15th to the 18th century. (After the 17th century, English naval maps stopped using the term "British Sea" for the English Channel.) From the 15th to the 18th centuries, square-rigged sailing ships carried European settlers to many parts of the world in one of the most important human migrations in recorded history. This period was marked by extensive exploration and colonization efforts on the part of European kingdoms. The sextant, developed in the 18th century, made more accurate charting of nautical position possible.
Notable individuals
Juan of Austria was a military leader whose most famous victory was in the naval Battle of Lepanto in 1571. Philip II of Spain had appointed Juan to command the naval forces of the Holy League, which was pitted against the Ottoman Empire. Juan, by dint of leadership ability and charisma, was able to unite this disparate coalition and inflict a historic defeat upon the Ottomans and their corsair allies in the Battle of Lepanto. His role in the battle is commemorated in the poem "Lepanto" by G. K. Chesterton.
Maarten Tromp was an officer and later admiral in the Dutch navy. In 1639, during the Dutch struggle for independence from Spain, Tromp defeated a large Spanish fleet bound for Flanders at the Battle of the Downs, marking the end of Spanish naval power. In a preliminary battle, the action of 18 September 1639, Tromp was the first fleet commander known to deliberately use line-of-battle tactics. His flagship in this period was Aemilia. In the First Anglo-Dutch War of 1652–1653 Tromp commanded the Dutch fleet in the battles of Dungeness, Portland, the Gabbard and Scheveningen. In the last of these, he was killed by a sharpshooter in the rigging of William Penn's ship. His acting flag captain, Egbert Bartholomeusz Kortenaer, kept up fleet morale by not lowering Tromp's standard, pretending Tromp was still alive.
Cornelis Tromp was a Commander in Chief of the Dutch and Danish navy. In 1656 he participated in the relief of Gdańsk (Danzig). In 1658 it was discovered he had used his ships to trade in luxury goods; as a result he was fined and not allowed to have an active command until 1662. Just before the Second Anglo-Dutch War he was promoted to vice-admiral on 29 January 1665; at the Battle of Lowestoft he prevented total catastrophe by taking over fleet command to allow the escape of the larger part of the fleet. In 1676 he became Admiral-General of the Danish navy and Knight in the Order of the Elephant. He defeated the Swedish navy in the Battle of Öland, his only victory as a fleet commander.
Charles Hardy was a British naval officer and colonial governor. He was appointed governor and commander-in-chief of the British colony of Newfoundland in 1744. In 1758, he and James Wolfe attacked French posts around the mouth of the St. Lawrence River and destroyed all of the French fishing stations along the northern shores of what is now New Brunswick and along the Gaspé peninsula.
Augustus Keppel, 1st Viscount Keppel was a British admiral who held sea commands during the Seven Years' War and the War of American Independence. During the final years of the latter conflict he served as First Lord of the Admiralty. During the Seven Years' War he saw constant service. He was in North America in 1755, on the coast of France in 1756, was detached on a cruise to reduce the French settlements on the west coast of Africa in 1758, and his ship Torbay (74) was the first to get into action in the Battle of Quiberon Bay in 1759. In 1757 he had formed part of the court martial which had condemned Admiral Byng, and was active among those who endeavoured to secure a pardon for him, though neither he nor those who had acted with him could produce any serious reason why the sentence should not be carried out. When Spain joined France in 1762 he was sent as second in command with Sir George Pocock in the expedition which took Havana. His health suffered from the fever which carried off an immense proportion of the soldiers and sailors, but the £25,000 of prize money which he received freed him from the unpleasant position of younger son of a family ruined by the extravagance of his father.
Edward Hawke, 1st Baron Hawke was a naval officer of the Royal Navy. During the War of the Austrian Succession he was promoted to rear admiral. In the Seven Years' War, Hawke replaced Admiral John Byng as commander in the Mediterranean in 1756.
Richard Howe, 1st Earl Howe was a British admiral. During the rebellion in North America, Howe was known to be sympathetic to the colonists – he had in prior years sought the acquaintance of Benjamin Franklin, who was a friend of Howe's sister, a popular lady in London society. During his career, Howe displayed uncommon tactical originality. His performance was unexcelled even by Nelson, who, like Howe's other successors, was served by more highly trained squadrons and benefitted from Howe's example.
Horatio Nelson, 1st Viscount Nelson was a British admiral famous for his participation in the Napoleonic Wars, most notably in the Battle of Trafalgar, a decisive British victory in the war, where he was killed. Nelson was noted for his considerable ability to inspire and bring out the best in his men, to the point that it gained a name: "The Nelson Touch". His actions during these wars meant that before and after his death he was revered like few military figures have been throughout British history. Alexander Davison was a contemporary and close friend of Horatio Nelson. Davison is responsible for several acts that glorified Nelson's public image. These included the creation of a medal commemorating the victory at the Battle of the Nile and the creation of the Nelson Memorial at his estate at Swarland, Northumberland. As a close friend of the Admiral he acted as an intermediary when Nelson's marriage to his wife, Frances Nelson fell apart due in large part to his affair with Emma Hamilton.
Hyde Parker in 1778 was engaged in the Savannah expedition, and in the following year his ship was wrecked on the hostile Cuban coast. His men, however, entrenched themselves, and were in the end brought off safely. Parker was with his father at the Dogger Bank, and with Richard Howe in the two actions in the Straits of Gibraltar. In 1793, having just become rear admiral, he served under Samuel Hood at Toulon and in Corsica, and two years later, now a vice admiral, he took part, under Lord Hotham, in the indecisive fleet actions of 13 March and 13 July 1795. From 1796 to 1800 he was in command at Jamaica and ably conducted the operations in the West Indies.
Edward Pellew, 1st Viscount Exmouth was a British naval officer who fought during the American War of Independence, the French Revolutionary Wars, and the Napoleonic Wars. Pellew is remembered as an officer and a gentleman of great courage and leadership, earning his lands and titles through skill and determination – a paradigm of the versatility of naval officers during the Napoleonic Wars.
Antoine de Sartine, a French statesman, was the Secretary of State for the Navy under King Louis XVI. Sartine inherited a strong French Navy, resurrected by Choiseul after the disasters of the Seven Years' War when France lost Canada, Louisiana, and India, and which would later defeat the British Navy in the War of American Independence.
James Saumarez, 1st Baron de Saumarez was an admiral of the British Royal Navy, notable for his victory at the Battle of Algeciras. In 1801 he was raised to the rank of Rear-Admiral of the Blue, was created a baronet, and received the command of a small squadron which was destined to watch the movements of the Spanish fleet at Cadiz. Between 6 and 12 July he performed a brilliant piece of service in which, after a first repulse at Algeciras, he routed a much superior combined force of French and Spanish ships at the Battle of Algeciras. For his services Saumarez received the Order of the Bath and the freedom of the City of London.
David Porter, during the First Barbary War (1801–07), served as first lieutenant and was taken prisoner when the frigate Philadelphia ran aground in Tripoli harbor on 31 October 1803. After his release on 3 June 1805 he remained in the Mediterranean as acting captain of Constitution and later captain of Enterprise. He was in charge of the naval forces at New Orleans 1808–1810. As commander of the frigate Essex in the War of 1812, Captain Porter achieved fame by capturing the first British warship of the conflict, HMS Alert, on 13 August 1812, as well as several merchantmen. In 1813 he sailed Essex around Cape Horn and cruised in the Pacific warring on British whalers. On 28 March 1814 Porter was forced to surrender off Valparaiso after an unequal contest with the British warships Phoebe and Cherub, and only when his ship was too disabled to offer any resistance.
Spanish and English Armadas
The Spanish Armada was the Spanish fleet that sailed against England under the command of the Duke of Medina Sidonia in 1588. The Spanish Armada was sent by King Philip II of Spain, who had been king consort of England until the death of his wife Mary I of England thirty years earlier. The purpose of the expedition was to escort the Duke of Parma's army of tercios from the Spanish Netherlands across the North Sea for a landing in south-east England. Once the army had suppressed English support for the United Provinces — part of the Spanish Netherlands — it was intended to cut off attacks against Spanish possessions in the New World and the Atlantic treasure fleets. It was also hoped to reverse the Protestant revolution in England, and to this end the expedition was supported by Pope Sixtus V, with the promise of a subsidy should it make land. The command of the fleet was originally entrusted to Álvaro de Bazán, a highly experienced naval commander who died a few months before the fleet sailed from Lisbon in May 1588.
The Spanish Armada consisted of about 130 warships and converted merchant ships. After forcing its way up the English Channel, it was attacked by a fleet of 200 English ships, assisted by the Dutch navy, in the North Sea at Gravelines off the coastal border between France and the Spanish Netherlands. A fire-ship attack drove the Armada ships from their safe anchorage, and in the ensuing battle the Spanish abandoned their rendezvous with Parma's army.
The Spanish Armada was blown north up the east coast of England, and in a hasty strategic move attempted a return to Spain by sailing around Scotland and out into the Atlantic, past Ireland. But very severe weather destroyed a portion of the fleet, and more than 24 vessels were wrecked on the north and western coasts of Ireland, with the survivors having to seek refuge in Scotland. Of the Spanish Armada's initial complement of vessels, about 50 did not return to Spain. However, the loss among the front-line warships of Philip's royal fleet was comparatively small: only seven failed to return, and of these only three were lost to enemy action.
The English Armada was a fleet of warships sent to the Iberian coast by Queen Elizabeth I of England in 1589, during the Anglo-Spanish War (1585–1604). It was led by Sir Francis Drake as admiral and Sir John Norreys as general, and failed in its attempt to drive home the advantage England had won upon the defeat and dispersal of the Spanish Armada in the previous year. With the opportunity to strike a decisive blow against the weakened Spanish lost, the failure of the expedition further depleted the crown treasury that had been so carefully restored during the long reign of Elizabeth I. The Anglo-Spanish war was very costly to both sides, and Spain itself, also fighting France and the United Provinces, had to default on its debt repayments in 1596, following another raid on Cadiz. But the failure of the English Armada was a turning point, and the fortunes of the various parties to this complicated conflict fluctuated until the Treaty of London in 1604, when a peace was agreed.
Spain's rebuilt navy had quickly recovered and exceeded its pre-Armada dominance of the sea, until defeats by the Dutch fifty years later marked the beginning of its decline. With the peace, the English were able to consolidate their hold on Ireland and make a concerted effort to establish colonies in North America.
North American maritime
The maritime history of the United States starts in the modern sense with the first successful English colony established in 1607, on the James River at Jamestown. It languished for decades until a new wave of settlers arrived in the late 17th century and set up commercial agriculture based on tobacco. The connection between the American colonies and Europe, with shipping as its cornerstone, would continue to grow unhindered for almost two hundred years.
The Continental Navy was formed during the American Revolution in 1774–1775. Through the efforts of the Continental Navy's patron, John Adams, and vigorous congressional support in the face of stiff opposition, the fleet cumulatively became relatively substantial considering the limitations imposed upon the Patriots' supplies. The "Six original United States frigates" were the first frigates of the United States Navy, authorized by Congress with the Naval Act of 1794 on March 27, 1794, at a cost of $688,888.82.
John Paul Jones was America's first well-known naval hero in the American Revolutionary War. John Paul adopted the alias John Jones when he fled to his brother's home in Fredericksburg, Virginia in 1773 in order to avoid the hangman's noose in Tobago after an incident in which he was accused of murdering a sailor under his command. He began using the name John Paul Jones, as his brother suggested, at the start of the American Revolution. Though his naval career never rose above the rank of captain in the Continental Navy, even after his victory over HMS Serapis with the frigate Bonhomme Richard, John Paul Jones remains the first genuine American naval hero, and a highly regarded battle commander.
Jonathan Haraden was a privateer during the American Revolution, being the first lieutenant of the sloop-of-war Tyrannicide, fourteen guns. On board for two years, he captured many prizes, becoming her commander in 1777.
George H. Preble was an American naval officer and writer, notable for his history of the flag of the United States and for taking the first photograph of the Fort McHenry flag that inspired "The Star-Spangled Banner". He entered the Navy as a midshipman on 10 December 1835 and served at sea until 1838.
Edward Preble was a U.S. naval officer. Following his Revolutionary War service, he was appointed 1st lieutenant in the U.S. Navy. In January 1799, he assumed command of the 14-gun brig Pickering and took her to the West Indies to protect American commerce during the Quasi-War with France. Commissioned captain on 7 June 1799, he took command of Essex in December and sailed in January 1800 for the Pacific to provide similar protective services for Americans engaged in the East Indies trade. Given command of the 3rd Squadron, with Constitution as his flagship, in 1803, he sailed for the Barbary coast and by October had promoted a treaty with Morocco and established a blockade off Tripoli in the First Barbary War.
Triangular trade
In the 17th, 18th, and 19th centuries a network of maritime trade formed in the Atlantic, connecting Europe, Africa, and the Americas through a triangular trade of African slaves, sugar/molasses, and rum. This maritime trade route would enrich Europe and the Americas while also pulling both deeper into the slave trade.
European merchants would buy slaves from African slavers and transport them to sugar plantations in the Caribbean; the sugar and molasses produced there would be shipped to British North America and distilled into rum, which was consumed in the colonies and sent to Europe. In some models of triangular trade, the colonies take Europe's place: slaves go from Africa to the Caribbean, sugar and molasses go to New England, and rum and other finished goods are sold in Africa to buy more slaves. Neither model was restricted to the sugar trade; tobacco, cotton, and other plantation-based raw materials could take the place of sugar and its derivatives.
Piracy in the Atlantic Ocean
During the Age of Discovery, key trade routes to the new world formed in the Caribbean and the Atlantic Ocean. With this concentrated area of trade, piracy was a significant maritime hazard in the 16th, 17th and 18th centuries. Some nations would use pirates to sabotage their rivals, going as far as supplying and recognizing them as legitimate. Eventually, powers like the English and Dutch implemented strong anti-piracy tactics to strengthen their trade empires in the 18th century.
In the 16th and 17th century Caribbean, the trading of slaves, precious metals, and raw materials all fell prey to piracy. Pirates would raid forts and attack ships at sea to get possession of merchants' material wealth. In some cases, pirates would tie themselves to a maritime power like the British and aid them by raiding rival nations like the Spanish while leaving British trade unmolested. In areas like Jamaica, some pirates were friendly with the British and would remain on the fringes of the colony. Some of these pirates were accepted by British colonial governors.
The English and Dutch had created extensive trade empires during the 17th and 18th centuries and saw pirates as a barrier to their continued growth. The English began codifying laws against piracy, starting a war against pirates that lasted from the 1670s to the 1720s. During this time the English developed a type of vessel called the Jamaica sloop, which was better at fending off pirates. In the late 1600s, the British began building up their navy and were able to forcibly put an end to most piracy by the 1720s; only isolated individual instances persisted.
Life at sea
Shipping, whether of cargo or passengers, is a business, and the duties of a ship's captain reflect that. A captain's first duty was to the ship's owner, and often the captain was encouraged to buy into the business with at least a one-eighth share of the ship. A captain's second duty was to the cargo itself, followed thirdly by the crew.
Crew were broken into two shifts that served four-hour alternating watches, often with all hands jointly serving the noon to 4:00 watch. American ships would commonly alternate watches with the addition of a two-hour dog watch. Work for sailors during their shift consisted primarily of general ship maintenance: washing, sanding, painting and repairs from general wear and tear or damage from storms. General ship operations like raising and lowering the anchor or furling and unfurling sails were done as needed. During the off-shift hours, sailors could take care of their personal chores: washing and repairing clothes, sleeping and eating. Leisure time was often spent reading, writing in journals, playing an instrument, wood carving or fancy rope work. The American Seaman's Friend Society in New York City would loan boxes of books to ships for sailors' use.
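The rotation arithmetic of such a two-watch system can be made explicit. The sketch below is illustrative only (the crew names and function are assumptions; it models the traditional practice in which splitting the 16:00–20:00 watch into two dog watches yields seven watches a day, an odd number, so the two crews swap hours daily):

    WATCHES = ["00-04", "04-08", "08-12", "12-16",
               "16-18 (first dog)", "18-20 (second dog)", "20-24"]

    def crew_on_watch(day, watch_index):
        """Which of two alternating crews stands a given watch on a given day."""
        absolute_watch = day * len(WATCHES) + watch_index  # watches since sailing
        return "port" if absolute_watch % 2 == 0 else "starboard"

    # Because 7 is odd, the crew standing the midnight watch alternates daily.
    print(crew_on_watch(0, 0), crew_on_watch(1, 0))  # -> port starboard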
Life aboard ship for immigrant travelers was much harsher and sometimes deadly. Ship owners would pack as many people as they could on board to maximize profits, and little government oversight existed to ensure passengers received proper care during the voyage. British immigrant ships would often show less care for their passengers than for convicts on prison ships bound for Australia. In 1803 the Passenger Vessel Act in Britain limited occupancy to one person per two tons of the ship's register. America issued stricter laws in 1819, limiting ships to a 1-to-5 ratio, with fines levied should an overcrowded ship arrive at port. The Act of February 1847 further increased the amount of space granted to passengers, with confiscation of the ship as the penalty for overcrowding.
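Taking the text's figures at face value, these occupancy limits reduce to simple division. A minimal sketch (illustrative only; the function, the 400-ton example, and the reading of the 1819 law's 1-to-5 ratio as one passenger per five tons are assumptions):

    def max_passengers(registered_tons, tons_per_passenger):
        """Greatest passenger count allowed for a ship of a given tonnage."""
        return int(registered_tons // tons_per_passenger)

    # A 400-ton ship: 200 passengers under the 1803 British one-per-two-tons
    # rule, but only 80 under the stricter 1819 American ratio.
    print(max_passengers(400, 2))  # -> 200
    print(max_passengers(400, 5))  # -> 80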
War of 1812
Stephen Decatur was an American naval officer notable for his heroism in the First Barbary War and the Second Barbary War and in the War of 1812. He was the youngest man to reach the rank of captain in the history of the U.S. Navy, and the first American celebrated as a national military hero who had not played a role in the American Revolution.
James Lawrence was an American naval hero. During the War of 1812, he commanded USS Chesapeake in a single-ship action against HMS Shannon (commanded by Philip Broke). He is probably best known today for his dying command "Don't give up the ship!", which is still a popular naval battle cry.
John H. Aulick was an officer in the United States Navy whose service extended from the War of 1812 to the end of the antebellum era. During the War of 1812, he served in the brig Enterprise and took part in her battle with HMS Boxer on 4 September 1813. After that engagement ended in an American victory, Aulick served as prize master of the captured Boxer.
Thomas Macdonough was an early 19th-century American naval officer, most notably as commander of American naval forces on Lake Champlain during the War of 1812. One of the leading members of "Preble's Boys", a small group of naval officers who served during the First Barbary War, Macdonough's actions during the decisive Battle of Lake Champlain are often cited as a model of tactical preparation and execution.
Challenger Expedition
HMS Challenger, built in 1858, undertook the first global marine research expedition, the Challenger expedition, in 1872. To enable her to probe the depths, all but two of Challenger's guns had been removed and her spars reduced to make more space available. Laboratories, extra cabins and a special dredging platform were installed. She was loaded with specimen jars, alcohol for preservation of samples, microscopes and chemical apparatus, trawls and dredges, thermometers and water sampling bottles, sounding leads and devices to collect sediment from the sea bed, and great lengths of Italian hemp rope for sounding, trawling and dredging, with which to suspend the equipment into the ocean depths. As the first true oceanographic cruise, the Challenger expedition laid the groundwork for an entire academic and research discipline.
End of the sail
Like most periodizations, the definition is inexact but close enough to serve as a general description. The age of sail runs roughly from the Battle of Lepanto in 1571, the last significant engagement in which oar-propelled galleys played a major role, to the Battle of Hampton Roads in 1862, in which the steam-powered CSS Virginia destroyed the sailing ships USS Cumberland and USS Congress, the advance of steam power finally rendering sail power obsolete.
Submarines
The history of submarines covers the chronology and facts related to submarines, the ships and boats which operate underwater. The first modern proposal for an underwater boat was made by the Englishman William Bourne, who designed a prototype submarine in 1578; his ideas never got beyond the planning stage. The first submersible proper to be built in modern times was constructed in 1620 by Cornelius Jacobszoon Drebbel, a Dutchman in the service of James I; it was based on Bourne's design and propelled by means of oars. The precise nature of this craft is a matter of some controversy; some claim that it was merely a bell towed by a boat. Two improved types were tested in the Thames between 1620 and 1624. In 1900, the U.S. Navy bought its first submarine from the Irish-born inventor John Holland. The United States depended heavily on submarines as a weapon of war in its campaign against Japan in World War II, and the decade from 1945 to 1955 brought tremendous changes, culminating in the first nuclear-powered submarine, USS Nautilus, putting to sea in 1955.
Age of steam
Steam was first applied to boats in the 1770s. With the advent of economical steam engines – external combustion heat engines that convert the heat energy in steam into mechanical work – steam became the prime mover for ships. The technology only became relevant to sea travel after 1815, the year Pierre Andriel crossed the English Channel aboard the steamship Élise.
Rise of steam vessels
Steamships gradually replaced sailing ships for commercial shipping in the 19th century – mostly through the latter part of the century. Paradoxically, steam supported sail, by providing tugs that could speed the arrival of ships that would otherwise often be windbound in anchorages close to their point of departure or destination. Larger sailing vessels could be built for bulk cargoes, as the availability of tugs meant that they could be docked efficiently. Steam "donkey engines" enabled these larger ships to work with smaller crews, being used for hoisting large sails and generally doing the heavy work on the ship.
Steam technology required a number of developmental steps to be able to compete with sail propulsion. Better materials and designs were needed for the boilers that ran at the higher pressures that allowed the increases in fuel efficiency from, first, compound engines (successfully used in SS Agamemnon (1865)) and then the triple expansion engine (starting with SS Aberdeen (1881)). The early practice of using sea water in boilers caused a build-up of salt, requiring regular cleaning en route. An interim solution was to regularly replace the water to keep the salt content low, which in turn needed the development of heat exchangers to recover the heat from the old water. Ultimately, condensers were designed to recover the fresh water used in later boilers. The inherent problems of paddlewheel propulsion were solved by the screw propeller, but that needed a functional stern gland and thrust bearing. Iron hulls overcame the structural issues of wooden-hulled steamers, but needed anti-fouling materials, or, failing that, dry docks in which hulls could be regularly cleaned. For steamships to operate around the world, coaling stations had to be provided along shipping routes and coal of the correct quality had to be transported there.
Whilst the technology steadily improved, sail remained the most economical choice for ship-owners who wished to make a good return on the capital they had invested. Steam was an option only for a limited number of trades until the 1860s, focusing on routes requiring scheduled services and/or reliable average speeds on a voyage – and only where the customer was prepared to pay the higher costs involved. Most of this was passenger transport and mail contracts. Only when the much more fuel efficient triple expansion engine had become common (by the 1890s) were all shipping routes fully commercially viable for steamers.
Ironclads were steam-propelled warships of the later 19th century, protected by iron or steel armor plates. The ironclad was developed as a result of the vulnerability of wooden warships to explosive or incendiary shells. The first ironclad battleship, the French Gloire, was launched in 1859; she prompted the British Royal Navy to start building ironclads. After the first clashes of ironclads took place during the American Civil War, it became clear that the ironclad had replaced the unarmored line-of-battle ship as the most powerful warship afloat.
In 1880, the American passenger steamer Columbia became the first ship to utilize the dynamo and incandescent light bulb. Furthermore, Columbia was the first structure besides Thomas Edison's laboratory in Menlo Park, New Jersey to use the incandescent light bulb.
Greek War of Independence
The Greek War of Independence was a successful war waged by the Greeks to win independence for Greece from the Ottoman Empire. Success at sea was vital for the Greeks: if they failed to counter the Ottoman Navy, it would be able to resupply the isolated Ottoman garrisons and land reinforcements from the Ottoman Empire's Asian provinces at will, crushing the rebellion. The Greeks turned to fireships, which proved an effective weapon against the Ottoman vessels. Conventional naval actions were also fought, in which naval commanders like Andreas Miaoulis, Nikolis Apostolis, Iakovos Tombazis and Antonios Kriezis distinguished themselves. The early successes of the Greek fleet in direct confrontations with the Ottomans at Patras and Spetsai gave the crews confidence, and contributed greatly to the survival and success of the uprising in the Peloponnese. Despite victories at Samos and Gerontas, the Revolution was threatened with collapse until the intervention of the Great Powers at the Battle of Navarino in 1827, where the Ottoman fleet was decisively defeated by the combined fleets of Britain, France and the Russian Empire, effectively securing the independence of Greece.
1850 to the end of the century
Most warships used steam propulsion until the advent of the gas turbine. Steamships were superseded by diesel-driven ships in the second half of the 20th century.
The Confederate States Navy (CSN) was the naval branch of the Confederate States armed forces established by an act of the Confederate Congress on February 21, 1861. It was responsible for Confederate naval operations during the American Civil War. The two major tasks of the Confederate Navy during the whole of its existence were the protection of Southern harbors and coastlines from outside invasion, and making the war costly for the North by attacking merchant ships and breaking the Union Blockade.
David Farragut was a senior officer of the United States Navy during the American Civil War, and the first rear admiral, vice admiral, and full admiral of the Navy. He is remembered in popular culture for his possibly apocryphal order at the Battle of Mobile Bay, usually paraphrased: "Damn the torpedoes, full speed ahead!".
Franklin Buchanan was an officer in the United States Navy who became an admiral in the Confederate Navy during the American Civil War. He was the captain of the ironclad CSS Virginia (formerly USS Merrimack) during the Battle of Hampton Roads in Virginia. He climbed to the top deck of Virginia and began furiously firing toward shore with a carbine as USS Congress was shelled, until he was brought down by a sharpshooter's minie ball to the thigh. He eventually recovered from the leg wound, but he never got to command Virginia against USS Monitor; that honor went to Catesby ap Roger Jones. Buchanan had nonetheless handed the US Navy the worst defeat it would suffer until Pearl Harbor.
Raphael Semmes was an officer in the United States Navy from 1826 to 1860 and the Confederate States Navy from 1860 to 1865. During the American Civil War he was captain of the famous commerce raider CSS Alabama, taking a record fifty-five prizes. Late in the war he was promoted to admiral and also served briefly as a brigadier general in the Confederate States Army.
In Italy, Carlo Pellion di Persano was an admiral who commanded the Regia Marina fleet from 1860 to 1861 and saw action in the struggle for Italian unification. After unification he was elected to the legislature; he became Minister of Marine in 1862 and in 1865 was nominated a Senator. His career was marred, however, during the war with Austria, when he commanded the Italian fleet at the Battle of Lissa. After the defeat he was condemned for incapacity and discharged.
Again in America, Charles Edgar Clark was an officer in the United States Navy during the American Civil War and the Spanish–American War. He commanded the battleship USS Oregon at the Mare Island Naval Shipyard, San Francisco, and when war with Spain was deemed inevitable, he received orders to proceed to Key West, Florida, with all haste. After a most remarkable voyage around Cape Horn, he joined the American fleet in Cuban waters on May 26, and on July 3 commanded his ship at the destruction of Cervera's squadron.
George Dewey was an admiral of the United States Navy, best known for his victory (without the loss of a single life of his own forces due to combat; one man died of a heart attack) at the Battle of Manila Bay during the Spanish–American War. He was also the only person in the history of the United States to have attained the rank of Admiral of the Navy, the most senior rank in the United States Navy.
Garrett J. Pendergrast was an officer in the United States Navy during the American Civil War. He held a command during the Mexican–American War in 1846. In 1856, he commissioned USS Merrimack, the ship that would later become CSS Virginia.
Lewis Nixon was a shipbuilding executive, naval architect, and political activist. Nixon graduated first in his class from the Naval Academy in 1882 and was sent to study naval architecture at the Royal Naval College, where he again graduated first in his class in 1885. In 1890, with help from assistant naval constructor David W. Taylor, he designed the Indiana-class battleships, which included USS Indiana, USS Massachusetts, and USS Oregon.
Patricio Montojo was the Spanish naval commander at the Battle of Manila Bay (May 1, 1898), a decisive battle of the Spanish–American War. At its outbreak, Montojo commanded the Spanish squadron, which U.S. naval forces under Commodore George Dewey decisively defeated at anchor in Manila Bay in the Philippines; most of the seven Spanish vessels sank or surrendered. Montojo was wounded in the battle, as was one of his two sons who took part.
20th century
In the 20th century, the internal combustion engine and gas turbine came to replace the steam engine in most ship applications. Trans-oceanic travel, transatlantic and transpacific, was a particularly important application, with steam-powered ocean liners replacing sailing ships and culminating in the massive superliners, including RMS Titanic. The sinking of the Titanic led to the development of maritime distress and safety systems.
Maritime events of World War I
At the start of the war, the German Empire had cruisers scattered across the globe. Some of them were subsequently used to attack Allied merchant shipping. The British Royal Navy systematically hunted them down, though not without some embarrassment from its inability to protect Allied shipping. For example, the detached light cruiser SMS Emden, part of the East Asia Squadron stationed at Tsingtao, seized or destroyed 15 merchantmen, as well as sinking a Russian cruiser and a French destroyer. However, the bulk of the squadron, consisting of the armoured cruisers Scharnhorst and Gneisenau, two light cruisers and two transport ships, did not have orders to raid shipping and was instead underway to Germany when it was lost at the Battle of the Falkland Islands in December 1914.
Soon after the outbreak of hostilities, Britain initiated a naval blockade of Germany, preventing supplies from reaching its ports. The strategy proved effective, cutting off vital military and civilian supplies, although the blockade violated generally accepted international law codified by international agreements. A close blockade, with ships stationed within a three-mile (5 km) limit, was considered legitimate, but Britain mined international waters to prevent any ships from entering entire sections of ocean, endangering even neutral shipping. Since there was only a limited response to this tactic, Germany expected a similarly limited response to its own unrestricted submarine warfare.
German U-boats attempted to cut the supply lines between North America and Britain. The nature of submarine warfare meant that attacks often came without warning, giving the crews of the merchant ships little hope of survival. After the infamous sinking of the passenger ship RMS Lusitania in 1915, Germany promised not to target passenger liners. After the United States protested the sinking of a cross-Channel passenger ferry in 1916, Germany modified its rules of engagement. Finally, in early 1917, Germany adopted a policy of unrestricted submarine warfare, realizing the Americans would eventually enter the war; Germany sought to strangle Allied sea lanes before the U.S. could transport a large army overseas.
The U-boat threat lessened in 1917, when merchant ships began sailing in convoys escorted by destroyers. This tactic made it difficult for U-boats to find targets, and the accompanying destroyers could sink a submerged submarine with depth charges. Losses to submarine attacks fell significantly, but the convoy system slowed the flow of supplies; the solution to the delays was a massive program to build new freighters. Troop ships were too fast for the submarines and did not have to travel the North Atlantic in convoys.
The First World War also saw the first use of aircraft carriers in combat, with HMS Furious launching Sopwith Camels in a successful raid against the Zeppelin hangars at Tondern in July 1918.
Maritime events of World War II
Battle of the Atlantic
In the North Atlantic, German U-boats attempted to cut supply lines to the United Kingdom by sinking merchant ships. In the first four months of the war they sank more than 110 vessels. In addition to supply ships, the U-boats occasionally attacked British and Canadian warships. One U-boat sank the British carrier HMS Courageous, while another managed to sink the battleship HMS Royal Oak in her home anchorage of Scapa Flow.
In the summer of 1941, the Soviet Union entered the war on the side of the Allies. Although the Soviets had tremendous reserves in manpower, they had lost much of their equipment and manufacturing base in the first few weeks following the German invasion. The Western Allies attempted to remedy this by sending Arctic convoys, which travelled from the United Kingdom and the United States to the northern ports of the Soviet Union: Archangel and Murmansk. The treacherous route around the North Cape of Norway was the site of many battles as the Germans continually tried to disrupt the convoys using U-boats, bombers, and surface ships.
Following the entry of the United States into the war in December 1941, U-boats sank shipping along the East Coast of the United States and Canada, the waters around Newfoundland, the Caribbean Sea, and the Gulf of Mexico. They were initially so successful that this became known among U-boat crews as the second happy time. Eventually, the institution of shore blackouts and an interlocking convoy system resulted in a drop in attacks and U-boats shifted their operations back to the mid-Atlantic.
The turning point of the Battle of the Atlantic came in early 1943 as the Allies refined their naval tactics, effectively making use of new technology to counter the U-boats. The Allies produced ships faster than they were sunk, and lost fewer ships by adopting the convoy system. Improved anti-submarine warfare meant that the life expectancy of a typical U-boat crew would be measured in months. The vastly improved Type XXI U-boat appeared as the war was ending, but too late to affect the outcome. In December 1943, the last major sea battle between the Royal Navy and Nazi Germany's Kriegsmarine took place: at the Battle of North Cape, the German battleship Scharnhorst was sunk by HMS Duke of York, supporting cruisers, and several destroyers.
Pacific War
The Pacific War was the part of World War II fought in the Pacific, running from the Japanese attack on United States forces at Pearl Harbor in December 1941 to 1945. The main American naval theaters were the Pacific Ocean Areas and the Southwest Pacific Area, while the British fought chiefly in the Indian Ocean. It was a war of logistics, with American home bases in California and Hawaii sending supplies to Australia. The U.S. used its submarines to sink Japanese transports and oil tankers, thereby cutting off Japan's supplies to its outposts and causing a severe shortage of gasoline.
Island hopping was the key strategy to bypass heavily fortified Japanese positions and instead concentrate the limited Allied resources on strategically important islands that were not well defended but capable of supporting the drive to the main islands of Japan. This strategy was possible in part because the Allies used submarine and air attacks to blockade and isolate Japanese bases, weakening their garrisons and reducing the Japanese ability to resupply and reinforce. Most Japanese soldiers who died in the Pacific died of starvation, and Japan used its submarine fleet to try to resupply the stranded garrisons.
Hard-fought battles at Iwo Jima, Okinawa, and other islands on the approaches to Japan resulted in horrific casualties on both sides but finally produced a Japanese retreat. Faced with the loss of most of their experienced pilots, the Japanese increased their use of kamikaze tactics in an attempt to create unacceptably high casualties for the Allies. After the turning point of the Pacific war at the Battle of Midway, where the Imperial Japanese Navy lost four fleet carriers, the United States Department of the Navy weighed positions for and against an invasion of Japan in 1945; some staff proposed to force a Japanese surrender through a total naval blockade or air raids.
Latter half of the 20th century
In the latter half of the 20th century, various vessels, notably aircraft carriers, nuclear submarines, and nuclear-powered icebreakers, made use of nuclear marine propulsion. Sonar and radio augmented existing navigational technology.
Various blockades were mounted as part of international actions. Egypt blockaded the Straits of Tiran from 1948 to 1956 and again in 1967. The United States blockaded Cuba during the Cuban Missile Crisis in 1962. Israel has maintained a sea blockade of the Gaza Strip since the outbreak of the Second Intifada in 2000, and blockaded some or all of the Lebanese coast at various times during the Lebanese Civil War (1975–1990), the 1982 Lebanon War, and the South Lebanon conflict (1985–2000), resuming the blockade during the 2006 Lebanon War.
Cuban Missile Crisis
The Cuban Missile Crisis, a 13-day confrontation beginning on October 22, 1962, during the presidency of John F. Kennedy, is regarded as the event that brought the U.S. closest to nuclear war. The United States' nuclear arsenal played a part in bringing the crisis about, yet the episode suggested that nuclear weapons conferred little practical political leverage. The Soviet leader, Nikita Khrushchev, was the first to order his missiles withdrawn; the United States did not back down first, in part because an American plane had been shot down over Cuba during the crisis. The blockade ended when the two powers resolved the issue peacefully.
Gulf of Tonkin Incident
The Gulf of Tonkin Incident was a pair of alleged attacks by the Democratic Republic of Vietnam against two American warships in 1964. A U.S. ship sailing off North Vietnam at night believed itself under attack, and President Lyndon B. Johnson decided he needed to respond, asking Congress for permission to act. Congress granted it by approving the Gulf of Tonkin Resolution on August 7, 1964. Armed with the resolution, Johnson ordered strikes on North Vietnamese torpedo boats and oil storage facilities. The resolution was repealed in January 1971.
Falklands War
The Falklands War of 1982 was fought between Britain and Argentina over the Falkland Islands. Argentina invaded and occupied the islands. Britain, initially taken by surprise by the Argentine attack on the South Atlantic islands, launched a naval task force to engage the Argentine Navy and Air Force and retake the islands by amphibious assault. Argentina ultimately lost the war.
Panama Canal handover
Though controversial within the United States, the handover of the Panama Canal led to Panamanian control of the Panama Canal Zone through the Panama Canal Authority (ACP), effective at noon on December 31, 1999. Before the handover, the government of Panama held an international bid to negotiate a 25-year contract for operation of the Canal's container shipping ports (chiefly two facilities at the Atlantic and Pacific outlets), which was won by the Chinese firm Hutchison Whampoa, a Hong Kong-based shipping concern whose owner, Li Ka-shing, was then the wealthiest man in Asia. Among the conditions of the handover were the permanent neutrality of the Canal and explicit statements allowing the United States to return at any time.
21st century
Since the turn of the millennium, stealth ships have been constructed. These are ships that employ stealth technology construction techniques to make them harder to detect by one or more of radar, visual, sonar, and infrared methods. The techniques borrow from stealth aircraft technology, although some aspects, such as wake reduction, are unique to the design of stealth ships.
Some of the major social changes of this period include women becoming admirals in national navies, being allowed to serve on submarines, and being appointed captains of cruise ships.
Arctic Resources Race
As of March 2020, global superpowers are competing to lay claim to regions of the Arctic Circle and to shipping routes that lead directly into the Pacific and Atlantic oceans from the North Pole. Extensive access to sea routes over the North Pole would, for example, save thousands of kilometers in distance between Europe and China. Most prominently, claims to territory in the Arctic Circle would secure a wealth of resources, including oil, gas, minerals, and fish.
Piracy
Seaborne piracy against transport vessels remains a significant issue (with estimated worldwide losses of US$13 to $16 billion per year), particularly in the waters between the Red Sea and Indian Ocean, off the Somali coast, and also in the Strait of Malacca and Singapore, which are used by over 50,000 commercial ships a year.
Modern pirates favor small boats and take advantage of the small number of crew members on modern cargo vessels. They also use large vessels to supply the smaller attack and boarding craft. Modern pirates can be successful because a large amount of international commerce occurs via shipping. Major shipping routes take cargo ships through narrow bodies of water (such as the Gulf of Aden and the Strait of Malacca), making them vulnerable to being overtaken and boarded by small motorboats. Other active areas include the South China Sea and the Niger Delta. As usage increases, many of these ships have to lower cruising speeds to allow for navigation and traffic control, making them prime targets for piracy.
The International Maritime Bureau (IMB) maintains statistics on pirate attacks dating back to 1995. Its records indicate that hostage-taking overwhelmingly dominates the types of violence against seafarers. For example, in 2006 there were 239 attacks, in which 77 crew members were kidnapped and 188 taken hostage, but only 15 of the attacks resulted in murder. In 2007 the number of attacks rose by 10% to 263, with a 35% increase in reported attacks involving guns. Sixty-four crew members were injured, compared with just 17 in 2006; that figure does not include hostages or kidnap victims who were not injured.
Modern definitions of piracy include the following acts:
Boarding
Extortion
Hostage taking
Kidnapping of people for ransom
Murder
Robbery
Sabotage resulting in the sinking of the ship
Seizure of items or the ship
Shipwrecking carried out intentionally
See also
General
Atlantic history
Atlantic World
Bibliography of early U.S. naval history
Bibliography of 18th–19th century Royal Naval history
Congo River
History of the Royal Navy
History of whaling
Indian maritime history
List of museum ships
List of former museum ships
List of naval battles
Maritime history of Africa
Maritime history of Colonial America
Maritime history of Europe
Maritime museum
Maritime timeline
Maritime transport
Military history
Ming treasure voyages
Naval history
Niger River
Ocean liner
Sailortowns
Ships of ancient Rome
Timeline of maritime migration and exploration
Historiography articles
American Neptune, a scholarly journal
Atlantic history, historiography of the Atlantic region
Frank C. Munson Institute of American Maritime History
International Commission for Maritime History
North American Society for Oceanic History
External links
Coriolis: The Interdisciplinary Journal of Maritime Studies
International Commission for Maritime History
The Institute of Maritime History – a non-profit institute focused on research, preservation and education in maritime history
Society for Nautical Research
The Maritime History Podcast
Federation of Maritime History and Archeology Research (Sorbonne University)
International Association for the History of Transport, Traffic and Mobility
The Australian Association for Maritime History
The North American Society for Oceanic History
Greek Maritime History
Gender | Gender includes the social, psychological, cultural and behavioral aspects of being a man, woman, or other gender identity. Depending on the context, this may include sex-based social constructs (i.e. gender roles) as well as gender expression. Most cultures use a gender binary, in which gender is divided into two categories, and people are considered part of one or the other (girls/women and boys/men); those who are outside these groups may fall under the umbrella term non-binary. A number of societies have specific genders besides "man" and "woman," such as the hijras of South Asia; these are often referred to as third genders (and fourth genders, etc.). Most scholars agree that gender is a central characteristic for social organization.
The word is also used as a synonym for sex, and the balance between these usages has shifted over time. In the mid-20th century, a terminological distinction in modern English (known as the sex and gender distinction) between biological sex and gender began to develop in the academic areas of psychology, sociology, sexology, and feminism. Before the mid-20th century, it was uncommon to use the word gender to refer to anything but grammatical categories. In the West, in the 1970s, feminist theory embraced the concept of a distinction between biological sex and the social construct of gender. The distinction between gender and sex is made by most contemporary social scientists in Western countries, behavioral scientists and biologists, many legal systems and government bodies, and intergovernmental agencies such as the WHO.
The social sciences have a branch devoted to gender studies. Other sciences, such as psychology, sociology, sexology, and neuroscience, are interested in the subject. The social sciences sometimes approach gender as a social construct, and gender studies particularly does, while research in the natural sciences investigates whether biological differences in females and males influence the development of gender in humans; both inform the debate about how far biological differences influence the formation of gender identity and gendered behavior. Biopsychosocial approaches to gender include biological, psychological, and social/cultural aspects.
Etymology and usage
Derivation
The modern English word gender comes from the Middle English gender, gendre, a loanword from Anglo-Norman and Middle French gendre. This, in turn, came from Latin genus. Both words mean "kind", "type", or "sort". They derive ultimately from a Proto-Indo-European (PIE) root *ǵénh₁- 'to beget', which is also the source of kin, kind, king, and many other English words, with cognates widely attested in many Indo-European languages. It appears in Modern French in the word genre (type, kind, also genre sexuel) and is related to the Greek root gen- (to produce), appearing in gene, genesis, and oxygen. The Oxford Etymological Dictionary of the English Language of 1882 defined gender as kind, breed, sex, derived from the Latin ablative case of genus, like genere natus, which refers to birth. The first edition of the Oxford English Dictionary (OED1, Volume 4, 1900) notes the original meaning of gender as "kind" had already become obsolete.
History of the concept
The concept of gender, in the modern social science sense, is a recent invention in human history. The ancient world had no basis for understanding gender as it has been understood in the humanities and social sciences for the past few decades. For most of history the term gender was associated with grammar, and only in the 1950s and 1960s did it begin to be understood as a malleable cultural construct.
Before the terminological distinction between biological sex and gender as a role developed, it was uncommon to use the word gender to refer to anything but grammatical categories. For example, in a bibliography of 12,000 references on marriage and family from 1900 to 1964, the term gender does not emerge even once. Analysis of more than 30 million academic article titles from 1945 to 2001 showed that uses of the term "gender" were much rarer than uses of "sex" early in this period, when gender was often used as a grammatical category. By the end of this period, uses of "gender" outnumbered uses of "sex" in the social sciences, arts, and humanities. It was in the 1970s that feminist scholars adopted the term gender as a way of distinguishing "socially constructed" aspects of male–female differences (gender) from "biologically determined" aspects (sex).
As of 2024, many dictionaries list "synonym for 'sex'" as one of gender's meanings, alongside its sociocultural meaning. According to the Oxford English Dictionary, gender came into use as a synonym for sex during the twentieth century, initially as a euphemism, as sex was undergoing its own usage shift toward referring to sexual intercourse rather than male/female categories. During the last two decades of the 20th century, gender was often used as a synonym for sex in its non-copulatory senses, especially outside the social sciences. David Haig, writing in 2003, said "the sex/gender distinction is now only fitfully observed." Within the social sciences, however, use of gender in academia increased greatly, outnumbering uses of sex during that same period. In the natural sciences, gender was more often used as a synonym for sex. This can be attributed to the influence of feminism. Haig stated, "Among the reasons that working [natural] scientists have given me for choosing gender rather than sex in biological contexts are desires to signal sympathy with feminist goals, to use a more academic term, or to avoid the connotation of copulation." Haig also notes that "gender" became the preferred term when discussing phenomena for which the social versus biological cause was unknown, disputed, or actually an interaction between the two. In 1993, the US Food and Drug Administration (FDA) started to use gender instead of sex to avoid confusion with sexual intercourse. Later, in 2011, the FDA reversed its position and began using sex as the biological classification and gender as "a person's self-representation as male or female, or how that person is responded to by social institutions based on the individual's gender presentation."
In legal cases alleging discrimination, a 2006 law review article by Meredith Render notes "as notions of gender and sexuality have evolved over the last few decades, legal theories concerning what it means to discriminate "because of sex" under Title VII have experienced a similar evolution". In a 1999 law review article proposing a legal definition of sex that "emphasizes gender self-identification," Julie Greenberg writes, "Most legislation utilizes the word 'sex,' yet courts, legislators, and administrative agencies often substitute the word 'gender' for 'sex' when they interpret these statutes." In J.E.B. v. Alabama ex rel. T.B., a 1994 United States Supreme Court case addressing "whether the Equal Protection Clause forbids intentional discrimination on the basis of gender", the majority opinion noted that with regard to gender, "It is necessary only to acknowledge that 'our Nation has had a long and unfortunate history of sex discrimination,' id., at 684, 93 S.Ct., at 1769, a history which warrants the heightened scrutiny we afford all gender-based classifications today", and stated "When state actors exercise peremptory challenges in reliance on gender stereotypes, they ratify and reinforce prejudicial views of the relative abilities of men and women."
As a grammatical category
The word was still widely used, however, in the specific sense of grammatical gender (the assignment of nouns to categories such as masculine, feminine and neuter). According to Aristotle, this concept was introduced by the Greek philosopher Protagoras.
In 1926, Henry Watson Fowler stated that the definition of the word pertained to this grammar-related meaning.
As distinct from sex
In 1945, Madison Bentley defined gender as the "socialized obverse of sex". Simone de Beauvoir's 1949 book The Second Sex has been interpreted as the beginning of the distinction between sex and gender in feminist theory (Butler, Judith, "Sex and Gender in Simone de Beauvoir's Second Sex", Yale French Studies, No. 72 (1986), pp. 35–49), although this interpretation is contested by many feminist theorists, including Sara Heinämaa.
Controversial sexologist John Money coined the term gender role, and was the first to use it in print in a scientific trade journal in 1955. In the seminal 1955 paper, he defined it as "all those things that a person says or does to disclose himself or herself as having the status of boy or man, girl or woman."
The modern academic sense of the word, in the context of social roles of men and women, dates at least back to 1945, and was popularized and developed by the feminist movement from the 1970s onwards (see Feminist theory and gender studies below), which theorizes that human nature is essentially epicene and social distinctions based on sex are arbitrarily constructed. In this context, matters pertaining to this theoretical process of social construction were labelled matters of gender.
The popular use of gender simply as an alternative to sex (as a biological category) is also widespread, although attempts are still made to preserve the distinction. The American Heritage Dictionary (2000) uses the following two sentences to illustrate the difference, noting that the distinction "is useful in principle, but it is by no means widely observed, and considerable variation in usage occurs at all levels."
Gender identity and gender roles
Gender identity refers to a personal identification with a particular gender and gender role in society. The term woman has historically been used interchangeably with reference to the female body, though more recently this usage has been viewed as controversial by some feminists.
There are qualitative analyses that explore and present the representations of gender; however, feminists challenge these dominant ideologies concerning gender roles and biological sex. One's biological sex is often tied to specific social roles and expectations. Judith Butler considers the concept of being a woman to have more challenges, owing not only to society's viewing women as a social category but also as a felt sense of self, a culturally conditioned or constructed subjective identity. Social identity refers to the common identification with a collectivity or social category that creates a common culture among participants concerned. According to social identity theory, an important component of the self-concept is derived from memberships in social groups and categories; this is demonstrated by group processes and how inter-group relationships impact significantly on individuals' self-perception and behaviors. The groups people belong to therefore provide members with the definition of who they are and how they should behave within their social sphere.
Categorizing males and females into social roles creates a problem for some individuals who feel they have to be at one end of a linear spectrum and must identify themselves as man or woman, rather than being allowed to choose a section in between. Globally, communities interpret biological differences between men and women to create a set of social expectations that define the behaviors "appropriate" for men and women and determine their different access to rights, resources, and power in society, as well as their health behaviors. Although the specific nature and degree of these differences vary from one society to the next, they typically tend to favor men, creating an imbalance in power and gender inequalities within most societies. Many cultures have different systems of norms and beliefs based on gender, but there is no universal standard for a masculine or feminine role across all cultures. The social roles of men and women in relation to each other are based on the cultural norms of a society, which lead to the creation of gender systems. The gender system is the basis of social patterns in many societies, which include the separation of sexes and the primacy of masculine norms.
Philosopher Michel Foucault said that, as sexual subjects, humans are the object of power, which is not an institution or structure; rather, it is a signifier or name attributed to a "complex strategical situation". Because of this, "power" is what determines individual attributes and behaviors, and people are part of an ontologically and epistemologically constructed set of names and labels. For example, being female characterizes one as a woman, and being a woman signifies one as weak, emotional, and irrational, and incapable of actions attributed to a "man". Butler said that gender and sex are more like verbs than nouns. She reasoned that her actions are limited because she is female: "I am not permitted to construct my gender and sex willy-nilly," she said. "[This] is so because gender is politically and therefore socially controlled. Rather than 'woman' being something one is, it is something one does." More recent criticisms of Judith Butler's theories hold that her writing reinforces the very conventional dichotomies of gender.
Social assignment and gender fluidity
According to gender theorist Kate Bornstein, gender can have ambiguity and fluidity. There are two contrasting ideas regarding the definition of gender, and their intersection can be outlined as follows:
The World Health Organization defines gender as "the characteristics of women, men, girls and boys that are socially constructed". The beliefs, values, and attitudes people take up and exhibit follow the agreed-upon norms of their society, and a person's own opinion is not the primary consideration in the assignment of gender or in the imposition of the gender roles that follow from the assigned gender.
The assignment of gender involves taking into account the physiological and biological attributes assigned by nature followed by the imposition of the socially constructed conduct. Gender is a term used to exemplify the attributes that a society or culture constitutes as "masculine" or "feminine". Although a person's sex as male or female stands as a biological fact that is identical in any culture, what that specific sex means in reference to a person's gender role as a man or a woman in society varies cross-culturally according to what things are considered to be masculine or feminine. These roles are learned from various, intersecting sources such as parental influences, the socialization a child receives in school, and what is portrayed in the local media. Learning gender roles starts from birth and includes seemingly simple things like what color outfits a baby is clothed in or what toys they are given to play with. However, a person's gender does not always align with what has been assigned at birth. Factors other than learned behaviors play a role in the development of gender.
The article "Adolescent Gender-Role Identity and Mental Health: Gender Intensification Revisited" focuses on the work of Heather A. Priess, Sara M. Lindberg, and Janet Shibley Hyde on whether girls and boys diverge in their gender identities during adolescence. The researchers based their work on the gender intensification hypothesis of Hill and Lynch, which holds that signals and messages from parents determine and affect their children's gender-role identities, and that different interactions with each parent affect gender intensification. The study by Priess and her colleagues did not support Hill and Lynch's hypothesis, which stated "that as adolescents experience these and other socializing influences, they will become more stereotypical in their gender-role identities and gendered attitudes and behaviors." However, the researchers noted that the hypothesis may have held in the past but no longer does, owing to changes in the population of teens with respect to their gender-role identities.
Authors of "Unpacking the Gender System: A Theoretical Perspective on Gender Beliefs and Social Relations", Cecilia Ridgeway and Shelley Correll, argue that gender is more than an identity or role but is something that is institutionalized through "social relational contexts." Ridgeway and Correll define "social relational contexts" as "any situation in which individuals define themselves in relation to others in order to act." They also point out that in addition to social relational contexts, cultural beliefs plays a role in the gender system. The coauthors argue that daily people are forced to acknowledge and interact with others in ways that are related to gender. Every day, individuals are interacting with each other and comply with society's set standard of hegemonic beliefs, which includes gender roles. They state that society's hegemonic cultural beliefs sets the rules which in turn create the setting for which social relational contexts are to take place. Ridgeway and Correll then shift their topic towards sex categorization. The authors define sex categorization as "the sociocognitive process by which we label another as male or female."
The failure of an attempt to raise David Reimer from infancy through adolescence as a girl after his genitals were accidentally mutilated is cited as disproving the theory that gender identity is determined solely by parenting. Reimer's case is used by organizations such as the Intersex Society of North America to caution against needlessly modifying the genitals of unconsenting minors. Between the 1960s and 2000, many other male newborns and infants were surgically and socially reassigned as females if they were born with malformed penises, or if they lost their penises in accidents. At the time, surgical reconstruction of the vagina was more advanced than reconstruction of the penis, leading many doctors and psychologists, including John Money, who oversaw Reimer's case, to recommend sex reassignment based on the idea that these patients would be happiest living as women with functioning genitalia. Available evidence indicates that in such instances, parents were deeply committed to raising these children as girls and in as gender-typical a manner as possible. A 2005 review of these cases found that about half of natal males reassigned female lived as women in adulthood, including those who knew their medical history, suggesting that gender assignment and related social factors have a major, though not determinative, influence on eventual gender identity.
In 2015, the American Academy of Pediatrics released a webinar series on gender, gender identity, gender expression, transgender, etc. In the first lecture Sherer explains that parents' influence (through punishment and reward of behavior) can influence gender expression but not gender identity. Sherer argued that kids will modify their gender expression to seek reward from their parents and society, but this will not affect their gender identity (their internal sense of self).
Societal categories
Sexologist John Money coined the term gender role in 1955. The term gender role is defined as the actions or responses that may reveal their status as boy, man, girl or woman, respectively. Elements surrounding gender roles include clothing, speech patterns, movement, occupations, and other factors not limited to biological sex. In contrast to taxonomic approaches, some feminist philosophers have argued that gender "is a vast orchestration of subtle mediations between oneself and others", rather than a "private cause behind manifest behaviours".
Non-binary and third genders
Historically, most societies have recognized only two distinct, broad classes of gender roles, a binary of masculine and feminine, largely corresponding to the biological sexes of male and female (Maria Llorente, Culture, Heritage, and Diversity in Older Adult Mental Health Care (2018), p. 184: "Historically, in many, if not most, cultures, gender traditionally has been conceived as binary, but the modern and preferred understanding is that gender actually occurs on a spectrum."). When a baby is born, society allocates the child to one gender or the other, on the basis of what their genitals resemble.
However, some societies have historically acknowledged and even honored people who fulfill a gender role that exists more in the middle of the continuum between the feminine and masculine polarity. Examples include the Hawaiian māhū, who occupy "a place in the middle" between male and female, the Ojibwe ikwekaazo, "men who choose to function as women", and the ininiikaazo, "women who function as men". In the language of the sociology of gender, some of these people may be considered third gender, especially by those in gender studies or anthropology. Contemporary Native American and FNIM people who fulfill these traditional roles in their communities may also participate in the modern two-spirit community; however, these umbrella terms, neologisms, and ways of viewing gender are not necessarily the kind of cultural constructs that more traditional members of these communities agree with.
The hijras of India and Pakistan are often cited as third gender (Reddy, Gayatri (2005). With Respect to Sex: Negotiating Hijra Identity in South India. Worlds of Desire: The Chicago Series on Sexuality, Gender, and Culture. University of Chicago Press). Another example may be the muxe, found in the state of Oaxaca in southern Mexico. The Bugis people of Sulawesi, Indonesia, have a tradition that incorporates all the features above.
In addition to these traditionally recognized third genders, many cultures now recognize, to differing degrees, various non-binary gender identities. People who are non-binary (or genderqueer) have gender identities that are not exclusively masculine or feminine. They may identify as having an overlap of gender identities, having two or more genders, having no gender, having a fluctuating gender identity, or being third gender or other-gendered. Recognition of non-binary genders is still somewhat new to mainstream Western culture, and non-binary people may face increased risk of assault, harassment, and discrimination.
Measurement of gender identity
Two instruments incorporating the multidimensional nature of masculinity and femininity have dominated gender identity research: the Bem Sex Role Inventory (BSRI) and the Personal Attributes Questionnaire (PAQ). Both instruments categorize individuals as sex-typed (males who report identifying primarily with masculine traits, females who report identifying primarily with feminine traits), cross-sex-typed (males who report identifying primarily with feminine traits, females who report identifying primarily with masculine traits), androgynous (males or females who report themselves as high on both masculine and feminine traits) or undifferentiated (males or females who report themselves as low on both masculine and feminine traits). Twenge (1997) noted that men are generally more masculine than women and women generally more feminine than men, but the association between biological sex and masculinity/femininity is waning.
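The four-way categorization just described is essentially a two-scale threshold rule, so it can be illustrated with a short sketch. The following Python snippet is a minimal illustration only: the function name, the example scores, and the median-split cutoffs are hypothetical placeholders, and actual BSRI and PAQ administrations use validated item sets and study-specific scoring criteria rather than this simplified rule.

def classify_gender_identity(sex, masculinity, femininity, m_median, f_median):
    # Assign one of the four categories described above from two scale scores,
    # assuming a simple median-split rule (an assumption for illustration,
    # not the instruments' exact procedure).
    high_m = masculinity >= m_median
    high_f = femininity >= f_median
    if high_m and high_f:
        return "androgynous"        # high on both trait scales
    if not high_m and not high_f:
        return "undifferentiated"   # low on both trait scales
    # Exactly one scale is high: "sex-typed" if the high scale matches the
    # respondent's reported sex, "cross-sex-typed" otherwise.
    matches_sex = (sex == "male" and high_m) or (sex == "female" and high_f)
    return "sex-typed" if matches_sex else "cross-sex-typed"

# Hypothetical example: a male respondent scoring 5.1 (masculinity) and 3.2
# (femininity) against sample medians of 4.8 and 4.9.
print(classify_gender_identity("male", 5.1, 3.2, 4.8, 4.9))  # -> "sex-typed"

On this rule, the same pair of scores from a female respondent would yield "cross-sex-typed", mirroring the category definitions above.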
Biological factors and views
Some gendered behavior is influenced by prenatal and early life androgen exposure. This includes, for example, gender normative play, self-identification with a gender, and tendency to engage in aggressive behavior. Males of most mammals, including humans, exhibit more rough and tumble play behavior, which is influenced by maternal testosterone levels. These levels may also influence sexuality, with non-heterosexual persons exhibiting sex atypical behavior in childhood.
The biology of gender became the subject of an expanding number of studies over the course of the late 20th century. One of the earliest areas of interest was what became known as "gender identity disorder" (GID), which is now also described as gender dysphoria. Studies in this and related areas inform John Money's summary of the subject.
Although causation from the biological—genetic and hormonal—to the behavioral has been broadly demonstrated and accepted, Money is careful to also note that understanding of the causal chains from biology to behavior in sex and gender issues is very far from complete. Money had previously stated that in the 1950s, American teenage girls who had been exposed to androgenic steroids by their mothers in utero exhibited more traditionally masculine behavior, such as being more concerned about their future career than marriage, wearing pants, and not being interested in jewelry.
There are studies concerning women who have a condition called congenital adrenal hyperplasia, which leads to the overproduction of the masculine sex hormone, androgen. These women usually have ordinary female appearances (though nearly all girls with congenital adrenal hyperplasia (CAH) have corrective surgery performed on their genitals). However, despite taking hormone-balancing medication given to them at birth, these females are statistically more likely to be interested in activities traditionally linked to males than female activities. Psychology professor and CAH researcher Dr. Sheri Berenbaum attributes these differences to an exposure of higher levels of male sex hormones in utero.
Non-human animals
In non-human animal research, gender is commonly used to refer to the biological sex of the animals. According to biologist Michael J. Ryan, gender identity is a concept exclusively applied to humans. Also, in a letter, Ellen Ketterson writes, "[w]hen asked, my colleagues in the Department of Gender Studies agreed that the term gender could be properly applied only to humans, because it involves one's self-concept as man or woman. Sex is a biological concept; gender is a human social and cultural concept." However, it has been noted that the question of whether behavioural similarities across species can be associated with gender identity is "an issue of no easy resolution", and that mental states such as gender identity are more accessible in humans than in other species because of the human capacity for language. Poiani suggests that the potential number of species with members possessing a gender identity must be limited due to the requirement for self-consciousness.
Jacques Balthazart suggests that "there is no animal model for studying sexual identity. It is impossible to ask an animal, whatever its species, to what sex it belongs." He notes that "this would imply that the animal is aware of its own body and sex, which is far from proved", despite recent research demonstrating sophisticated cognitive skills among non-human primates and other species. Another researcher has stated that whether or not non-human animals consider themselves to be feminine or masculine is a "difficult, if not impossible, question to answer", as this would require "judgements about what constitutes femininity or masculinity in any given species". Nonetheless, she asserts that "non-human animals do experience femininity and masculinity to the extent that any given species' behaviour is gender segregated."
Despite this, Poiani and Dixson emphasise the applicability of the concept of gender role to non-human animals such as rodents throughout their book. The concept of gender role has also been applied to non-human primates such as rhesus monkeys.
Feminist theory and gender studies
Biologist and feminist academic Anne Fausto-Sterling rejects the discourse of biological versus social determinism and advocates a deeper analysis of how interactions between the biological being and the social environment influence individuals' capacities.
The philosopher and feminist Simone de Beauvoir applied existentialism to women's experience of life: "One is not born a woman, one becomes one." In context, this is a philosophical statement. However, it may also be analyzed biologically—a girl must pass through puberty to become a woman—and sociologically, as much mature relating in social contexts is learned rather than instinctive.
Within feminist theory, terminology for gender issues developed over the 1970s. In the 1974 edition of Masculine/Feminine or Human, the author uses "innate gender" and "learned sex roles", but in the 1978 edition, the use of sex and gender is reversed.
By 1980, most feminist writings had agreed on using gender only for socioculturally adapted traits.
Gender studies is an interdisciplinary academic field devoted to gender, gender identity, and gendered representation as central categories of analysis. The field includes women's studies (concerning women, femininity, their gender roles and politics, and feminism), men's studies (concerning men, masculinity, their gender roles, and politics), and LGBT studies.
Gender studies is sometimes offered together with the study of sexuality.
These disciplines study gender and sexuality in the fields of literature and language, history, political science, sociology, anthropology, cinema and media studies, human development, law, and medicine.
It also analyses race, ethnicity, location, nationality, and disability (Healey, J.F. (2003). Race, Ethnicity, Gender and Class: The Sociology of Group Conflict and Change. Pine Forge Press).
In gender studies, the term gender refers to proposed social and cultural constructions of masculinities and femininities. In this context, gender explicitly excludes reference to biological differences in order to focus on cultural differences. This usage emerged from a number of different areas: in sociology during the 1950s; from the theories of the psychoanalyst Jacques Lacan; and in the work of French psychoanalysts like Julia Kristeva and Luce Irigaray, and of American feminists such as Judith Butler. Those who followed Butler came to regard gender roles as a practice, sometimes referred to as "performative".
Charles E. Hurst states that some people think sex will "...automatically determine one's gender demeanor and role (social) as well as one's sexual orientation" (sexual attractions and behavior). Gender sociologists believe that people have cultural origins and habits for dealing with gender. For example, Michael Schwalbe believes that humans must be taught how to act appropriately in their designated gender to fill the role properly, and that the way people behave as masculine or feminine interacts with social expectations. Schwalbe comments that humans "are the results of many people embracing and acting on similar ideas". People do this through everything from clothing and hairstyle to relationship and employment choices. Schwalbe believes that these distinctions are important, because society wants to identify and categorize people as soon as it sees them, and needs to place people into distinct categories in order to know how to feel about them.
Hurst comments that in a society where we present our genders so distinctly, there can often be severe consequences for breaking these cultural norms. Many of these consequences are rooted in discrimination based on sexual orientation. Gays and lesbians are often discriminated against in the legal system because of societal prejudices (Center for American Progress (2016). Unjust: How the Broken Criminal Justice System Fails LGBT People. Washington). Hurst describes how this discrimination works against people for breaking gender norms, no matter what their sexual orientation is. He says that "courts often confuse sex, gender, and sexual orientation, and confuse them in a way that results in denying the rights not only of gays and lesbians, but also of those who do not present themselves or act in a manner traditionally expected of their sex". This prejudice plays out in the legal system when a person is judged differently because they do not present themselves as the "correct" gender.
Andrea Dworkin stated her "commitment to destroying male dominance and gender itself" while stating her belief in radical feminism.
Political scientist Mary Hawkesworth addresses gender and feminist theory, stating that since the 1970s the concept of gender has transformed and been used in significantly different ways within feminist scholarship. She notes that a transition occurred when several feminist scholars, such as Sandra Harding and Joan Scott, began to conceive of gender "as an analytic category within which humans think about and organize their social activity". Feminist scholars in Political Science began employing gender as an analytical category, which highlighted "social and political relations neglected by mainstream accounts". However, Hawkesworth states "feminist political science has not become a dominant paradigm within the discipline".
American political scientist Karen Beckwith addresses the concept of gender within political science arguing that a "common language of gender" exists and that it must be explicitly articulated in order to build upon it within the political science discipline. Beckwith describes two ways in which the political scientist may employ 'gender' when conducting empirical research: "gender as a category and as a process." Employing gender as a category allows for political scientists "to delineate specific contexts where behaviours, actions, attitudes and preferences considered masculine or feminine result in particular political outcomes". It may also demonstrate how gender differences, not necessarily corresponding precisely with sex, may "constrain or facilitate political" actors. Gender as a process has two central manifestations in political science research, firstly in determining "the differential effects of structures and policies upon men and women," and secondly, the ways in which masculine and feminine political actors "actively work to produce favorable gendered outcomes".
With regard to gender studies, Jacquetta Newman states that although sex is determined biologically, the ways in which people express gender are not. Gendering is a socially constructed process based on culture, though cultural expectations of women and men often have a direct relationship to their biology. Because of this, Newman argues, many privilege sex as being a cause of oppression and ignore other issues like race, ability, and poverty. Current gender studies classes seek to move away from that and examine the intersectionality of these factors in determining people's lives. She also points out that other, non-Western cultures do not necessarily have the same views of gender and gender roles. Newman also debates the meaning of equality, which is often considered the goal of feminism; she believes that equality is a problematic term because it can mean many different things, such as people being treated identically, differently, or fairly based on their gender. Newman believes this is problematic because there is no unified definition of what equality means or looks like, and that this can be significantly important in areas like public policy.
Social construction of gender hypotheses
The World Health Organization states, "As a social construct, gender varies from society to society and can change over time." Sociologists generally regard gender as a social construct. For instance, sexologist John Money suggested the distinction between biological sex and gender as a role. Moreover, Ann Oakley, a professor of sociology and social policy, says "the constancy of sex must be admitted, but so also must the variability of gender." Lynda Birke, a feminist biologist, maintains that "'biology' is not seen as something which might change."
However, there are scholars who argue that sex is also socially constructed. For example, gender studies writer Judith Butler states that "perhaps this construct called 'sex' is as culturally constructed as gender; indeed, perhaps it was always already gender, with the consequence that the distinction between sex and gender turns out to be no distinction at all."
She continues:
It would make no sense, then, to define gender as the cultural interpretation of sex, if sex is itself a gender-centered category. Gender should not be conceived merely as the cultural inscription of meaning based on a given sex (a juridical conception); gender must also designate the very apparatus of production whereby the sexes themselves are established. [...] This production of sex as the pre-discursive should be understood as the effect of the apparatus of cultural construction designated by gender.
Butler argues that "bodies only appear, only endure, only live within the productive constraints of certain highly gendered regulatory schemas," and sex is "no longer as a bodily given on which the construct of gender is artificially imposed, but as a cultural norm which governs the materialization of bodies."
With regard to history, Linda Nicholson, a professor of history and women's studies, argues that the understanding of human bodies as sexually dimorphic was historically not recognised. She states that male and female genitals were considered inherently the same in Western society until the 18th century. At that time, female genitals were regarded as incomplete male genitals, and the difference between the two was conceived as a matter of degree. In other words, there was a belief in a gradation of physical forms, or a spectrum. Scholars such as Helen King, Joan Cadden, and Michael Stolberg have criticized this interpretation of history. Cadden notes that the "one-sex" model was disputed even in ancient and medieval medicine, and Stolberg points out that already in the sixteenth century, medicine had begun to move towards a two-sex model.
In addition, drawing from empirical research on intersex children, Anne Fausto-Sterling, a professor of biology and gender studies, describes how doctors address the issues of intersexuality. She begins with an example of the birth of an intersex individual and maintains that "our conceptions of the nature of gender difference shape, even as they reflect, the ways we structure our social system and polity; they also shape and reflect our understanding of our physical bodies." She then shows how gender assumptions affect the scientific study of sex by presenting the research on intersex people by John Money et al., concluding that "they never questioned the fundamental assumption that there are only two sexes, because their goal in studying intersexuals was to find out more about 'normal' development." She also examines the language doctors use when talking with the parents of intersex children: because the doctors believe that intersex children are actually male or female, they tell the parents that it will take a little more time to determine whether the infant is a boy or a girl. That is to say, the doctors' behavior is shaped by the cultural assumption that there are only two sexes. Lastly, she maintains that differences in how medical professionals in different regions treat intersex people provide a further example of how sex is socially constructed. In her Sexing the Body: Gender Politics and the Construction of Sexuality, she introduces the following example:
A group of physicians from Saudi Arabia recently reported on several cases of XX intersex children with congenital adrenal hyperplasia (CAH), a genetically inherited malfunction of the enzymes that aid in making steroid hormones. [...] In the United States and Europe, such children, because they have the potential to bear children later in life, are usually raised as girls. Saudi doctors trained in this European tradition recommended such a course of action to the Saudi parents of CAH XX children. A number of parents, however, refused to accept the recommendation that their child, initially identified as a son, be raised instead as a daughter. Nor would they accept feminizing surgery for their child. [...] This was essentially an expression of local community attitudes with [...] the preference for male offspring.
Thus it is evident that culture can play a part in assigning gender, particularly in relation to intersex children.
Psychology and sociology
Many of the more complicated human behaviors are influenced by both innate factors and by environmental ones, which include everything from genes, gene expression, and body chemistry, through diet and social pressures. A large area of research in behavioral psychology collates evidence in an effort to discover correlations between behavior and various possible antecedents such as genetics, gene regulation, access to food and vitamins, culture, gender, hormones, physical and social development, and physical and social environments.
A core research area within sociology is the way human behavior operates on itself, in other words, how the behavior of one group or individual influences the behavior of other groups or individuals. Starting in the late 20th century, the feminist movement has contributed extensive study of gender and theories about it, notably within sociology but not restricted to it.
Social theorists have sought to determine the specific nature of gender in relation to biological sex and sexuality, with the result that culturally established gender and sex have become interchangeable identifications that signify the allocation of a specific 'biological' sex within a categorical gender. The second-wave feminist view that gender is socially constructed and hegemonic in all societies remains current in some literary theoretical circles, with Kira Hall and Mary Bucholtz publishing new perspectives as recently as 2008.
As the child grows, "...society provides a string of prescriptions, templates, or models of behaviors appropriate to the one sex or the other," which socialises the child into belonging to a culturally specific gender. There is huge incentive for a child to concede to their socialisation with gender shaping the individual's opportunities for education, work, family, sexuality, reproduction, authority, and to make an impact on the production of culture and knowledge. Adults who do not perform these ascribed roles are perceived from this perspective as deviant and improperly socialized.
Some believe society is constructed in a way that splits gender into a dichotomy via social organisations that constantly invent and reproduce cultural images of gender. Joan Acker believed gendering occurs in at least five different interacting social processes:
The construction of divisions along the lines of gender, such as those produced by labor, power, family, the state, even allowed behaviors and locations in physical space
The construction of symbols and images such as language, ideology, dress and the media, that explain, express and reinforce, or sometimes oppose, those divisions
Interactions between men and women, women and women, and men and men that involve any form of dominance and submission. Conversational theorists, for example, have studied the way that interruptions, turn-taking and the setting of topics re-create gender inequality in the flow of ordinary talk
The way that the preceding three processes help to produce gendered components of individual identity, i.e., the way they create and maintain an image of a gendered self
Gender is implicated in the fundamental, ongoing processes of creating and conceptualising social structures.
Looking at gender through a Foucauldian lens, gender is transfigured into a vehicle for the social division of power. Gender difference is merely a construct of society used to enforce the distinctions made between what is assumed to be female and male, and allow for the domination of masculinity over femininity through the attribution of specific gender-related characteristics. "The idea that men and women are more different from one another than either is from anything else, must come from something other than nature... far from being an expression of natural differences, exclusive gender identity is the suppression of natural similarities."
Gender conventions play a large role in attributing masculine and feminine characteristics to a fundamental biological sex. Socio-cultural codes and conventions, the rules by which society functions, and which are both a creation of society as well as a constituting element of it, determine the allocation of these specific traits to the sexes. These traits provide the foundations for the creation of hegemonic gender difference. It follows then, that gender can be assumed as the acquisition and internalisation of social norms. Individuals are therefore socialized through their receipt of society's expectations of 'acceptable' gender attributes that are flaunted within institutions such as the family, the state and the media. Such a notion of 'gender' then becomes naturalized into a person's sense of self or identity, effectively imposing a gendered social category upon a sexed body.
The conception that people are gendered rather than sexed also coincides with Judith Butler's theories of gender performativity. Butler argues that gender is not an expression of what one is, but rather something that one does. It follows then, that if gender is acted out in a repetitive manner it is in fact re-creating and effectively embedding itself within the social consciousness. Contemporary sociological reference to male and female gender roles typically uses masculinities and femininities in the plural rather than singular, suggesting diversity both within cultures as well as across them.
The difference between the sociological and popular definitions of gender involves a different dichotomy and focus. For example, the sociological approach to "gender" (social roles: female versus male) focuses on the difference in (economic/power) position between a male CEO (disregarding whether he is heterosexual or homosexual) and the female workers in his employ (disregarding whether they are straight or gay). However, the popular sexual self-conception approach (self-conception: gay versus straight) focuses on the different self-conceptions and social conceptions of those who are gay in comparison with those who are straight (disregarding what might be vastly differing economic and power positions between female and male groups in each category). There is then, in relation to definitions of and approaches to "gender", a tension between historic feminist sociology and contemporary homosexual sociology.
Gender as biopsychosocial
According to Alex Iantaffi, Meg-John Barker, and others, gender is biopsychosocial. This is because it is derived from biological, psychological, and social factors, with all three factors feeding back into each other to form a person's gender.
Biological factors such as sex chromosomes, hormones, and anatomy play a significant role in the development of gender. Hormones such as testosterone and estrogen play a crucial role in shaping gender identity and expression, and anatomy, including genitalia and reproductive organs, can also influence them.
Psychological factors such as cognition, personality, and self-concept also contribute to gender development. Gender identity emerges around the age of two to three years. Gender expression, which refers to the outward manifestation of gender, is influenced by cultural norms, personal preferences, and individual differences in personality.
Social factors such as culture, socialization, and institutional practices shape gender identity and expression.
In some English literature, there is also a trichotomy between biological sex, psychological gender, and social gender role. This framework first appeared in a feminist paper on transsexualism in 1978.
Gender and society
Languages
Grammatical gender is a property of some languages in which every noun is assigned a gender, often with no direct relation to its meaning. For example, the word for "girl" is muchacha (grammatically feminine) in Spanish, Mädchen (grammatically neuter) or the older Maid (grammatically feminine) in German, and cailín (grammatically masculine) in Irish.
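Because grammatical gender attaches to the word rather than to its referent, it behaves like a lexical lookup that then drives agreement. The toy sketch below illustrates this with the German example above; the miniature lexicon and article table are simplified assumptions for demonstration, not a model of full German grammar.

```python
# Sketch: grammatical gender as a lexical property that drives agreement.
# The gender assignments shown are standard German, but the lexicon and
# article table are deliberately tiny and illustrative.

LEXICON = {"Mädchen": "neuter",     # "girl": grammatically neuter
           "Löffel": "masculine",   # "spoon"
           "Gabel": "feminine"}     # "fork"

DEFINITE_ARTICLE = {"masculine": "der", "feminine": "die", "neuter": "das"}

def with_article(noun: str) -> str:
    """Select the nominative definite article required by the noun's gender."""
    return f"{DEFINITE_ARTICLE[LEXICON[noun]]} {noun}"

print(with_article("Mädchen"))  # "das Mädchen": gender tracks the word, not the meaning
```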
The term "grammatical gender" is often applied to more complex noun class systems. This is especially true when a noun class system includes masculine and feminine as well as some other non-gender features like animate, edible, manufactured, and so forth. An example of the latter is found in the Dyirbal language. Other gender systems exist with no distinction between masculine and feminine; examples include a distinction between animate and inanimate things, which is common to, amongst others, Ojibwe, Basque and Hittite; and systems distinguishing between people (whether human or divine) and everything else, which are found in the Dravidian languages and Sumerian.
A sample of 258 languages in the World Atlas of Language Structures, compiled by Greville G. Corbett, found that fewer than half have any system of grammatical gender. Of the languages that do feature grammatical gender, over half have more than the minimum of two genders. Grammatical gender may be based on biological sex (the most common basis), on animacy, or on other features, and may combine these classes. One of the four genders of the Dyirbal language consists mainly of fruit and vegetables, and languages of the Niger-Congo language family can have as many as twenty genders, including plants, places, and shapes.
Many languages include terms that are used asymmetrically in reference to men and women. Concern that current language may be biased in favor of men has led some authors in recent times to argue for the use of a more gender-neutral vocabulary in English and other languages.
Several languages attest the use of different vocabulary by men and women, to differing degrees. See, for instance, Gender differences in Japanese. The oldest documented language, Sumerian, records a distinctive sub-language, Emesal, used only by female speakers. Conversely, many Indigenous Australian languages have distinctive registers with a limited lexicon used by men in the presence of their mothers-in-law (see Avoidance speech). In addition, quite a few sign languages, such as Irish Sign Language, show gendered distinctions that arose from boarding schools segregated by gender.
Several languages such as Persian or Hungarian are gender-neutral. In Persian the same word is used in reference to men and women. Verbs, adjectives and nouns are not gendered. (See Gender-neutrality in genderless languages).
Several languages, such as Navajo, employ different ways to refer to people where there are three or more genders.
Legal status
A person's gender can have legal significance. In some countries and jurisdictions there are same-sex marriage laws.
Transgender people
The legal status of transgender people varies greatly around the world. Some countries have enacted laws protecting the rights of transgender individuals, but others have criminalized their gender identity or expression. Many countries now legally recognize sex reassignments by permitting a change of legal gender on an individual's birth certificate.
Intersex people
For intersex people, who according to the UN Office of the High Commissioner for Human Rights, "do not fit typical binary notions of male or female bodies", access to any form of identification document with a gender marker may be an issue. For other intersex people, there may be issues in securing the same rights as other individuals assigned male or female; other intersex people may seek non-binary gender recognition.
Non-binary and third genders
Some countries now legally recognize non-binary or third genders, including Canada, Germany, Australia, New Zealand, India and Pakistan. In the United States, Oregon was the first state to legally recognize non-binary gender in 2017, and was followed by California and the District of Columbia.
Science
Historically, science has been portrayed as a masculine pursuit in which women have faced significant barriers to participate. Even after universities began admitting women in the 19th century, women were still largely relegated to certain scientific fields, such as home science, nursing, and child psychology. Women were also typically given tedious, low-paying jobs and denied opportunities for career advancement. This was often justified by the stereotype that women were naturally more suited to jobs that required concentration, patience, and dexterity, rather than creativity, leadership, or intellect. Although these stereotypes have been dispelled in modern times, women are still underrepresented in prestigious "hard science" fields such as physics, and are less likely to hold high-ranking positions, a situation global initiatives such as the United Nations Sustainable Development Goal 5 are trying to rectify.
Religion
This topic includes internal and external religious issues such as the gender of God and of deities, creation myths about human gender, and gendered roles and rights (for instance, leadership roles, especially the ordination of women; sex segregation; gender equality; marriage; abortion; and homosexuality).
In Taoism, yin and yang are considered feminine and masculine, respectively. The Taijitu and the yin-yang concept of the Zhou period extend into family and gender relations: yin is female and yang is male, and they fit together as two parts of a whole. The male principle was equated with the sun: active, bright, and shining; the female principle corresponds to the moon: passive, shaded, and reflective. Thus "male toughness was balanced by female gentleness, male action and initiative by female endurance and need for completion, and male leadership by female supportiveness."
In Judaism, God is traditionally described in the masculine, but in the mystical tradition of the Kabbalah, the Shekhinah represents the feminine aspect of God's essence. However, Judaism traditionally holds that God is completely non-corporeal, and thus neither male nor female. Conceptions of the gender of God notwithstanding, traditional Judaism places a strong emphasis on individuals following Judaism's traditional gender roles, though many modern denominations of Judaism strive for greater egalitarianism. Moreover, traditional Jewish culture recognizes at least six genders.
In Christianity, God is traditionally described in masculine terms and the Church has historically been described in feminine terms. On the other hand, Christian theology in many churches distinguishes between the masculine images used of God (Father, King, God the Son) and the reality they signify, which transcends gender and embodies all the virtues of both men and women perfectly, as may be seen through the doctrine of Imago Dei. In the New Testament, Jesus refers to the Holy Spirit with the masculine pronoun at several points, e.g. John 15:26. Hence, the Father, the Son and the Holy Spirit (i.e. the Trinity) are all referred to with the masculine pronoun, though the exact meaning of the masculinity of the Christian triune God is contested.
In Hinduism, one of the several forms of the Hindu god Shiva is Ardhanarishvara (literally half-female god). In this composite form, the left half of the body represents shakti (energy, power) in the form of the goddess Parvati (otherwise his consort) while the right half represents Shiva. Whereas Parvati is regarded to be the cause of arousal of kama (desire), Shiva is the destroyer of the concept. Symbolically, Shiva is pervaded by the power of Parvati and Parvati is pervaded by the power of Shiva.
This myth projects a view inherent in ancient Hinduism that each human carries within both female and male components, which are forces rather than sexes, and that it is the harmony between the creative and the annihilative, the strong and the soft, the proactive and the passive, that makes a true person. Evidence of homosexuality, bisexuality, androgyny, multiple sex partners, and the open representation of sexual pleasures is found in artworks such as the Khajuraho temples, and these are believed to have been accepted within prevalent social frameworks.
Poverty
Gender inequality is most common among women dealing with poverty. Many women must shoulder all the responsibility of the household because they must take care of the family, often including tasks such as tilling land, grinding grain, carrying water, and cooking. Women are also more likely to earn low incomes because of gender discrimination, as men are more likely to receive higher pay, have more opportunities, and have more political and social capital than women. Approximately 75% of the world's women are unable to obtain bank loans because they have unstable jobs: women make up a large share of the world's population but hold only a small share of the world's wealth. In many countries, the financial sector largely neglects women even though they play an important role in the economy, as Nena Stoiljkovic pointed out in D+C Development and Cooperation. In 1978 Diana M. Pearce coined the term feminization of poverty to describe the problem of women having higher rates of poverty. Women are more vulnerable to chronic poverty because of gender inequalities in the distribution of income, property ownership, credit, and control over earned income. Resource allocation is typically gender-biased within households, and these biases continue at a higher level in state institutions.
Gender and Development (GAD) is a holistic approach to aid for countries where gender inequality significantly hinders social and economic development. It is a program focused on the development of women, aiming to empower them and decrease the level of inequality between men and women.
The largest discrimination study of the transgender community, conducted in 2013, found that the transgender community is four times more likely to live in extreme poverty (income of less than $10,000 a year) than people who are cisgender.
General strain theory
According to general strain theory, studies suggest that gender differences between individuals can lead to externalized anger that may result in violent outbursts. These violent actions related to gender inequality can be measured by comparing violent neighborhoods to non-violent neighborhoods. By examining the independent variable (neighborhood violence) and the dependent variable (individual violence), it is possible to analyze gender roles. The strain in general strain theory is the removal of a positive stimulus and/or the introduction of a negative stimulus, which creates a negative affect (strain) within the individual that is either inner-directed (depression/guilt) or outer-directed (anger/frustration), depending on whether the individual blames themselves or their environment. Studies reveal that even though males and females are equally likely to react to a strain with anger, the origin of the anger and their means of coping with it can vary drastically.
Males are likely to put the blame on others for adversity and therefore externalize feelings of anger. Females typically internalize their angers and tend to blame themselves instead. Female internalized anger is accompanied by feelings of guilt, fear, anxiety and depression. Women view anger as a sign that they've somehow lost control, and thus worry that this anger may lead them to harm others and/or damage relationships. On the other end of the spectrum, men are less concerned with damaging relationships and more focused on using anger as a means of affirming their masculinity. According to the general strain theory, men would more likely engage in aggressive behavior directed towards others due to externalized anger whereas women would direct their anger towards themselves rather than others.
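Read schematically, the account above describes a simple causal chain: strain produces negative affect, and the attribution of blame determines its direction. The sketch below encodes only the relationships stated in the text, as a hypothetical illustration rather than an empirical or predictive model.

```python
# Schematic sketch of the general strain theory chain described above:
# strain (loss of a positive stimulus and/or introduction of a negative
# one) produces negative affect, whose direction depends on where the
# individual places blame. Purely illustrative; not an empirical model.

def strain_response(blames_environment: bool) -> str:
    """Map blame attribution to the direction of negative affect."""
    if blames_environment:
        # Per the text, this pattern is more common among males.
        return "outer-directed affect (anger/frustration)"
    # Per the text, this pattern is more common among females.
    return "inner-directed affect (depression/guilt)"

print(strain_response(blames_environment=True))   # outer-directed
print(strain_response(blames_environment=False))  # inner-directed
```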
Economic development
Gender, and particularly the role of women, is widely recognized as vitally important to international development issues. This often means a focus on gender equality and ensuring participation, but it also includes an understanding of the different roles and expectations of the genders within the community.
Climate change
Gender is a topic of increasing concern within climate change policy and science. Generally, gender approaches to climate change address gender-differentiated consequences of climate change, as well as unequal adaptation capacities and gendered contributions to climate change. Furthermore, the intersection of climate change and gender raises questions about the complex and intersecting power relations arising from it. These differences, however, are mostly not due to biological or physical differences, but are formed by the social, institutional and legal context. Consequently, vulnerability is less an intrinsic feature of women and girls than a product of their marginalization.
Roehr notes that, while the United Nations officially committed to gender mainstreaming, in practice gender equality is not reached in the context of climate change policies. This is reflected in the fact that discourses of and negotiations over climate change are mostly dominated by men.
Some feminist scholars hold that the debate on climate change is not only dominated by men but also primarily shaped in 'masculine' principles, which limits discussions about climate change to a perspective that focuses on technical solutions. This perception of climate change hides subjectivity and power relations that actually condition climate-change policy and science, leading to a phenomenon that Tuana terms 'epistemic injustice'.
Similarly, MacGregor attests that by framing climate change as an issue of 'hard' natural scientific conduct and natural security, it is kept within the traditional domains of hegemonic masculinity.
Social media
Forbes published an article in 2010 that reported 57% of Facebook users are women, which was attributed to the fact that women are more active on social media. On average, women have 8% more friends and account for 62% of posts that are shared via Facebook. Another study in 2010 found that in most Western cultures, women spend more time sending text messages compared to men as well as spending more time on social networking sites as a way to communicate with friends and family.
Research conducted in 2013 found that over 57% of pictures posted on social networking sites were sexual and were created to gain attention. Moreover, 58% of women and 45% of men do not look into the camera, which creates an illusion of withdrawal. Other factors to be considered are the poses in pictures such as women lying down in subordinate positions or even touching themselves in childlike ways.
Adolescent girls generally use social networking sites as a tool to communicate with peers and reinforce existing relationships; boys on the other hand tend to use social networking sites as a tool to meet new friends and acquaintances. Furthermore, social networking sites have allowed individuals to truly express themselves, as they are able to create an identity and socialize with other individuals that can relate. Social networking sites have also given individuals access to create a space where they feel more comfortable about their sexuality. Recent research has indicated that social media is becoming a stronger part of younger individuals' media culture, as more intimate stories are being told via social media and are being intertwined with gender, sexuality, and relationships.
Research has found that almost all U.S. teens (95%) aged 12 through 17 are online, compared to only 78% of adults. Of these teens, 80% have profiles on social media sites, as compared to only 64% of the online population aged 30 and older. According to a study conducted by the Kaiser Family Foundation, 11-to-18-year-olds spend on average over one and a half hours a day using a computer and 27 minutes per day visiting social network sites, i.e. the latter accounts for about one fourth of their daily computer use.
Studies have shown that female users tend to post more "cute" pictures, while male participants are more likely to post pictures of themselves in activities. Women in the U.S. also tend to post more pictures of friends, while men tend to post more about sports and humorous links. The study also found that males post more alcohol and sexual references. The roles were reversed, however, when looking at a teenage dating site: women made sexual references significantly more often than males. Boys share more personal information, while girls are more conservative about the personal information they post. Boys, meanwhile, are more likely to orient towards technology, sports, and humor in the information they post to their profile.
Research in the 1990s suggested that different genders display certain traits, such as being active, attractive, dependent, dominant, independent, sentimental, sexy, and submissive, in online interaction. Even though these traits continue to be displayed through gender stereotypes, recent studies show that this is not necessarily the case any more.
See also
Androcentrism
Anti-gender movement
Biological determinism
Coloniality of gender
Feminist metaphysics
Gender and politics
Gender bender
Gender paradox
Gynocentrism
Postgenderism
Sexism
Sex ratio
References
External links
GenPORT: Your gateway to gender and science resources
Gender in Agriculture Sourcebook
Social concepts
Social constructionism
Sociological theories
Feminism
LGBTQ
Global North and Global South
Global North and Global South are terms that denote a method of grouping countries based on their defining characteristics with regard to socioeconomics and politics. According to UN Trade and Development (UNCTAD), the Global South broadly comprises Africa, Latin America and the Caribbean, Asia (excluding Israel, Japan, and South Korea), and Oceania (excluding Australia and New Zealand). Most of the Global South's countries are commonly identified as lacking in their standard of living, which includes having lower incomes, high levels of poverty, high population growth rates, inadequate housing, limited educational opportunities, and deficient health systems, among other issues. Additionally, these countries' cities are characterized by their poor infrastructure. Opposite to the Global South is the Global North, which the UNCTAD describes as broadly comprising Northern America and Europe, Israel, Japan, South Korea, Australia, and New Zealand. As such, the two terms do not refer to the Northern Hemisphere or the Southern Hemisphere, as many of the Global South's countries are geographically located in the former and, similarly, a number of the Global North's countries are geographically located in the latter.
More specifically, the Global North consists of the world's developed countries, whereas the Global South consists of the world's developing countries and least developed countries. The Global South classification, as used by governmental and development organizations, was first introduced as a more open and value-free alternative to "Third World" and to similarly potentially "valuing" terms such as developed and developing. Countries of the Global South have also been described as newly industrialized or in the process of industrializing, and many of them are current or former subjects of colonialism.
The Global North and the Global South are often defined in terms of their differing levels of wealth, economic development, income inequality, and strength of democracy, as well as by their political freedom and economic freedom, as defined by a variety of freedom indices. Countries of the Global North tend to be wealthier, and capable of exporting technologically advanced manufactured products, among other characteristics. In contrast, countries of the Global South tend to be poorer, and heavily dependent on their largely agrarian-based economic primary sectors. Some scholars have suggested that the inequality gap between the Global North and the Global South has been narrowing due to the effects of globalization. Other scholars have disputed this position, suggesting that the Global South has instead become poorer vis-à-vis the Global North in this same timeframe.
Since World War II, the phenomenon of “South–South cooperation” (SSC) to “challenge the political and economic dominance of the North” has become more prominent among the Global South's countries. It has become popular in light of the geographical migration of manufacturing and production activity from the Global North to the Global South, and has since influenced the diplomatic policies of the Global South’s more powerful countries, such as China. Thus, these contemporary economic trends have “enhanced the historical potential of economic growth and industrialization in the Global South” amidst renewed targeted efforts by the SSC to “loosen the strictures imposed during the colonial era, and transcend the boundaries of postwar political and economic geography” as an aspect of decolonization.
Definition
The terms "Global North" and "Global South" are not strictly geographical, and are not "an image of the world divided by the equator, separating richer countries from their poorer counterparts." Rather, geography should be more readily understood as economic and migratory, in the "wider context of globalization or global capitalism."
In general, definitions for the Global North and Global South do not refer to the geographical North or the geographical South. The Global North broadly comprises Northern America and Europe, Israel, Japan, South Korea, Australia, and New Zealand, as per the UNCTAD. The Global South broadly comprises Africa, Latin America and the Caribbean, Asia excluding Israel, Japan, and South Korea, and Oceania excluding Australia and New Zealand, also according to the UNCTAD. Some, such as Australian sociologists Fran Collyer and Raewyn Connell, have argued that Australia and New Zealand are marginalized in similar ways to other Global South countries, due to their geographical isolation and location in the Southern Hemisphere.
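Stated as a lookup rule, the UNCTAD grouping above amounts to a region-level default with a short list of country-level exceptions. The sketch below is an illustrative encoding of that rule only; the country and region labels are simplified assumptions, not an official UNCTAD data product.

```python
# Illustrative encoding of the UNCTAD grouping described above: a
# region-level default plus country-level exceptions. Simplified for
# demonstration; not an official UNCTAD classification dataset.

NORTH_REGIONS = {"Northern America", "Europe"}
SOUTH_REGIONS = {"Africa", "Latin America and the Caribbean", "Asia", "Oceania"}
NORTH_EXCEPTIONS = {"Israel", "Japan", "South Korea",  # located in Asia
                    "Australia", "New Zealand"}        # located in Oceania

def unctad_group(country: str, region: str) -> str:
    """Assign a country to the Global North or Global South per the rule above."""
    if country in NORTH_EXCEPTIONS or region in NORTH_REGIONS:
        return "Global North"
    if region in SOUTH_REGIONS:
        return "Global South"
    raise ValueError(f"unknown region: {region}")

print(unctad_group("Japan", "Asia"))                              # Global North
print(unctad_group("Brazil", "Latin America and the Caribbean"))  # Global South
```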
The term Global North is often used interchangeably with developed countries, whereas the term Global South with developing countries.
Characteristically, most countries in the Global South are commonly identified as lacking in their standard of living, including lower incomes, high levels of poverty, high population growth rates, limited educational opportunities, and deficient health care systems, among other issues. Cities in the Global South are also identified by their poor infrastructure. Economies of the Global North are diversified, whereas agriculture is often the major contributor to economic activity in the Global South.
Development of the terms
Carl Oglesby used the term "global south" in 1969, writing in the Catholic journal Commonweal in a special issue on the Vietnam War. Oglesby argued that centuries of northern "dominance over the global south […] [has] converged […] to produce an intolerable social order."
The term gained appeal throughout the second half of the 20th century, and its use rapidly accelerated in the early 21st century. It appeared in fewer than two dozen publications in 2004, but in hundreds of publications by 2013. The emergence of the new term reflected a reckoning with the troubled realities of its predecessors, "Third World" and "Developing World". The term "Global South", in contrast, was intended to be less hierarchical.
The idea of categorizing countries by their economic and developmental status began during the Cold War with the classifications of East and West. The Soviet Union and China represented the East, and the United States and their allies represented the West. The term Third World came into parlance in the second half of the twentieth century. It originated in a 1952 article by Alfred Sauvy entitled "Trois Mondes, Une Planète". Early definitions of the Third World emphasized its exclusion from the east–west conflict of the Cold War as well as the ex-colonial status and poverty of the peoples it comprised.
Efforts to mobilize the Third World as an autonomous political entity were undertaken. The 1955 Bandung Conference was an early meeting of Third World states in which an alternative to alignment with either the Eastern or Western Blocs was promoted. Following this, the first Non-Aligned Summit was organized in 1961. Contemporaneously, a mode of economic criticism which separated the world economy into "core" and "periphery" was developed and given expression in a project for political reform which "moved the terms 'North' and 'South' into the international political lexicon."
In 1973, the pursuit of a New International Economic Order which was to be negotiated between the North and South was initiated at the Non-Aligned Summit held in Algiers. Also in 1973, the oil embargo initiated by Arab OPEC countries as a result of the Yom Kippur War caused an increase in world oil prices, with prices continuing to rise throughout the decade. This contributed to a worldwide recession which resulted in industrialized nations adopting increasingly protectionist economic policies and contributing less aid to the less developed countries of the South. The slack was taken up by Western banks, which provided substantial loans to Third World countries. However, many of these countries were not able to pay back their debt, which led the IMF to extend further loans to them on the condition that they undertake certain liberalizing reforms. This policy, which came to be known as structural adjustment, and was institutionalized by International Financial Institutions (IFIs) and Western governments, represented a break from the Keynesian approach to foreign aid which had been the norm from the end of the Second World War.
After 1987, reports on the negative social impacts that structural adjustment policies had had on affected developing nations led IFIs to supplement structural adjustment policies with targeted anti-poverty projects. Following the end of the Cold War and the break-up of the Soviet Union, some Second World countries joined the First World, and others joined the Third World. A new and simpler classification was needed. Use of the terms "North" and "South" became more widespread.
Brandt Line
The Brandt Line is a visual depiction of the north–south divide, proposed by former West German Chancellor Willy Brandt in the 1980s in the report North-South: A Programme for Survival, later known as the Brandt Report. The line divides the world at a latitude of approximately 30° North, passing between the United States and Mexico, north of Africa and the Middle East, climbing north over China and Mongolia, then dipping south to include Japan, Australia, and New Zealand in the "Rich North". As of 2023, the Brandt Line has been criticised as outdated, yet it is still regarded as a helpful way to visualise global inequalities.
Uses of the term Global South
Global South "emerged in part to aid countries in the southern hemisphere to work in collaboration on political, economic, social, environmental, cultural, and technical issues." This is called South–South cooperation (SSC), a "political and economical term that refers to the long-term goal of pursuing world economic changes that mutually benefit countries in the Global South and lead to greater solidarity among the disadvantaged in the world system." The hope is that countries within the Global South will "assist each other in social, political, and economical development, radically altering the world system to reflect their interests and not just the interests of the Global North in the process." It is guided by the principles of "respect for national sovereignty, national ownership, independence, equality, non-conditionality, non-interference in domestic affairs, and mutual benefit." Countries using this model of South–South cooperation see it as a "mutually beneficial relationship that spreads knowledge, skills, expertise and resources to address their development challenges such as high population pressure, poverty, hunger, disease, environmental deterioration, conflict and natural disasters." These countries also work together to deal with "cross border issues such as environmental protection, HIV/AIDS", and the movement of capital and labor.
Social psychiatrist Vincenzo Di Nicola has applied the Global South as a bridge between the critiques of globalization and the gaps and limitations of the Global Mental Health Movement, invoking Boaventura de Sousa Santos' notion of "epistemologies of the South" to create a new epistemology for social psychiatry.
Defining development
The Dictionary of Human Geography defines development as "processes of social change or [a change] to class and state projects to transform national economies".
Economic development is a measure of progress in a specific economy. It refers to advancements in technology, a transition from an economy based largely on agriculture to one based on industry and an improvement in living standards.
Being categorized as part of the "North" implies development as opposed to belonging to the "South", which implies a lack thereof. According to N. Oluwafemi Mimiko, the South lacks the right technology, it is politically unstable, its economies are divided, and its foreign exchange earnings depend on primary product exports to the North, along with the fluctuation of prices. The low level of control it exercises over imports and exports condemns the South to conform to the 'imperialist' system. The South's lack of development and the high level of development of the North deepen the inequality between them and leave the South a source of raw material for the developed countries. The North becomes synonymous with economic development and industrialization while the South represents the previously colonized countries which are in need of help in the form of international aid agendas.
Furthermore, in Regionalism Across the North-South Divide: State Strategies and Globalization, Jean Grugel states that three factors direct the economic development of states in the Global South: "élite behaviour within and between nation states, integration and cooperation within 'geographic' areas, and the resulting position of states and regions within the global world market and related political economic hierarchy."
Theories explaining the divide
The development disparity between the North and the South has sometimes been explained in historical terms. Dependency theory looks back on the patterns of colonial relations which persisted between the North and South and emphasizes how colonized territories tended to be impoverished by those relations. Theorists of this school maintain that the economies of ex-colonial states remain oriented towards serving external rather than internal demand, and that development regimes undertaken in this context have tended to reproduce in underdeveloped countries the pronounced class hierarchies found in industrialized countries while maintaining higher levels of poverty. Dependency theory is closely intertwined with Latin American Structuralism, the only school of development economics emerging from the Global South to be affiliated with a national research institute and to receive support from national banks and finance ministries. The Structuralists defined dependency as the inability of a nation's economy to complete the cycle of capital accumulation without reliance on an outside economy. More specifically, peripheral nations were perceived as primary resource exporters reliant on core economies for manufactured goods. This led structuralists to advocate for import-substitution industrialization policies which aimed to replace manufactured imports with domestically made products.
New Economic Geography explains development disparities in terms of the physical organization of industry, arguing that firms tend to cluster in order to benefit from economies of scale and increase productivity which leads ultimately to an increase in wages. The North has more firm clustering than the South, making its industries more competitive. It is argued that only when wages in the North reach a certain height, will it become more profitable for firms to operate in the South, allowing clustering to begin.
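The underlying logic can be put as a simple profit comparison. In the stylized notation below (an illustrative simplification for this article, not a formula taken from the New Economic Geography literature itself), let a_N > a_S denote per-firm productivity in the North and South, with the Northern advantage arising from clustering, and let w_N and w_S denote wages. A firm earns a_N − w_N in the North and a_S − w_S in the South, so relocating South becomes profitable only once

    a_S − w_S > a_N − w_N,  i.e.  w_N − w_S > a_N − a_S,

that is, once the Northern wage premium has grown to exceed the Northern productivity advantage conferred by clustering.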
Associated theories
The term Global South has a number of theories associated with it. Since many of the countries considered to be part of the Global South were once colonized by Global North countries, they are at a disadvantage in developing as quickly. Dependency theorists suggest that information has a top-down approach and first goes to the Global North before countries in the Global South receive it. Although many of these countries rely on political or economic help, this also opens up opportunities for information to develop a Western bias and create an academic dependency. Meneleo Litonjua describes the reasoning behind the distinctive problems of dependency theory as "the basic context of poverty and underdevelopment of Third World/Global South countries was not their traditionalism, but the dominance-dependence relationship between rich and poor, powerful and weak countries."
What brought about much of the dependency was the push to become modernized. After World War II, the U.S. made an effort to assist developing countries financially in an attempt to pull them out of poverty. Modernization theory "sought to remake the Global South in the image and likeness of the First World/Global North." In other terms, "societies can be fast-tracked to modernization by 'importing' Western technical capital, forms of organization, and science and technology to developing countries." With this ideology, as long as countries follow Western ways, they can develop more quickly.
After modernization attempts took place, theorists started to question the effects through post-development perspectives. Post-development theorists argue that not all developing countries need to follow Western ways; instead, they should create their own development plans. This means that "societies at the local level should be allowed to pursue their own development path as they perceive it without the influences of global capital and other modern choices, and thus a rejection of the entire paradigm from the Eurocentric model and the advocation of new ways of thinking about the non-Western societies." The goal of post-development was to reject development rather than reform it, choosing instead to embrace non-Western ways.
Challenges
The accuracy of the North–South divide has been challenged on a number of grounds. Firstly, differences in the political, economic and demographic make-up of countries tend to complicate the idea of a monolithic South. Globalization has also challenged the notion of two distinct economic spheres. Following the liberalization of post-Mao China initiated in 1978, growing regional cooperation between the national economies of Asia has led to the growing decentralization of the North as the main economic power. The economic status of the South has also been fractured. As of 2015, all but roughly the bottom 60 nations of the Global South were thought to be gaining on the North in terms of income, diversification, and participation in the world market.
However, other scholars, notably Jason Hickel and Robert Wade, have suggested that the Global South is not rising economically and that global inequality between the North and South has risen since globalization. Hickel has suggested that the exchange of resources between the South and the North is substantially unbalanced in favor of the North, with Global North countries extracting a windfall of over 240 trillion dollars from the Global South in 2015. This figure outstrips the amount of financial aid given to the Global South by a factor of 30.
Globalization has largely displaced the North–South divide as the theoretical underpinning of the development efforts of international institutions such as the IMF, World Bank, WTO, and various United Nations affiliated agencies, though these groups differ in their perceptions of the relationship between globalization and inequality. Yet some remain critical of the accuracy of globalization as a model of the world economy, emphasizing the enduring centrality of nation-states in world politics and the prominence of regional trade relations. Lately, there have been efforts to integrate the Global South more meaningfully into the world economic order.
The divide between the North and South challenges international environmental cooperation. The economic differences between North and South have created disputes over the scientific evidence and data regarding global warming and what needs to be done about it, as the South does not trust Northern data and cannot afford the technology to produce its own. In addition to these disputes, there are serious divisions over responsibility, over who pays, and over the possibility for the South to catch up. The emergence of rising powers has made these three divisions progressively blurrier. A multiplicity of actors, such as governments, businesses, and NGOs, influence what action can be taken to prevent further global warming, and the North–South divide contributes to disagreement among them. Disputes between Northern and Southern governments have led to breakdowns in international discussions, with governments from either side disagreeing with each other. Since addressing most environmental problems requires international cooperation, the North–South stagnation over implementation and enforcement remains a key issue.
Debates over the term
With its development, many scholars came to prefer the Global South over its predecessors, such as "developing countries" and "Third World". Leigh Anne Duck, co-editor of Global South, argued that the term is better suited to resisting "hegemonic forces that threaten the autonomy and development of these countries." The Global South / Global North distinction has been preferred to the older developed / developing dichotomy because it does not imply a hierarchy. Alvaro Mendez, co-founder of the London School of Economics and Political Science's Global South Unit, has applauded the empowering aspects of the term. In an article, Discussion on Global South, Mendez discusses emerging economies in nations like China, India and Brazil. It is predicted that, by 2030, 80% of the world's middle-class population will be living in developing countries. The popularity of the term "marks a shift from a central focus on development and cultural difference" and recognizes the importance of geopolitical relations.
Critics of this usage often argue that it is a vague blanket term. Others have argued that the term, its usage, and its subsequent consequences mainly benefit those from the upper classes of countries within the Global South, who stand "to profit from the political and economic reality [of] expanding south-south relations."
According to scholar Anne Garland Mahler, this nation-based understanding of the Global South is regarded as an appropriation of a concept that has deeper roots in Cold War radical political thought. In this political usage, the Global South is employed in a more geographically fluid way, referring to "spaces and peoples negatively impacted by contemporary capitalist globalization." In other words, "there are economic Souths in the geographic North and Norths in the geographic South." Through this geographically fluid definition, another meaning is attributed to the Global South where it refers to a global political community that is formed when the world's "Souths" recognize one another and view their conditions as shared.
The geographical boundaries of the Global South remain a source of debate. Some scholars agree that the term is not a "static concept". Others have argued against "grouping together a large variety of countries and regions into one category [because it] tends to obscure specific (historical) relationships between different countries and/or regions", and the power imbalances within these relationships. This "may obscure wealth differences within countries – and, therefore, similarities between the wealthy in the Global South and Global North, as well as the dire situation the poor may face all around the world."
Future development
Some economists have argued that international free trade and unhindered capital flows across countries could lead to a contraction in the North–South divide. In this case more equal trade and flow of capital would allow the possibility for developing countries to further develop economically.
As some countries in the South experience rapid development, there is evidence that those states are developing high levels of South–South aid. Brazil, in particular, has been noted for its high levels of aid ($1 billion annually—ahead of many traditional donors) and the ability to use its own experiences to provide high levels of expertise and knowledge transfer. This has been described as a "global model in waiting".
The United Nations has also established its role in diminishing the divide between North and South through the Millennium Development Goals, all of which were to be achieved by 2015. These goals seek to eradicate extreme poverty and hunger, achieve global universal education and healthcare, promote gender equality and empower women, reduce child mortality, improve maternal health, combat HIV/AIDS, malaria, and other diseases, ensure environmental sustainability, and develop a global partnership for development. These were replaced in 2015 by 17 Sustainable Development Goals (SDGs). The SDGs, set in 2015 by the United Nations General Assembly and intended to be achieved by 2030, are part of a UN Resolution called "The 2030 Agenda".
Society and culture
Digital and technological divide
The global digital divide is often characterized as corresponding to the north–south divide; however, Internet use, and especially broadband access, is now soaring in Asia compared with other continents. This phenomenon is partially explained by the ability of many countries in Asia to leapfrog older Internet technology and infrastructure, coupled with booming economies which allow vastly more people to get online.
Media representation
Mass media has often compared the Global South to the North and is thought to aid in the divide. Western media tends to present a generalized view of developing countries through biased coverage; mass media outlets focus disproportionately on poverty and other negative imagery. This common coverage has created a dominant stereotype of developing countries: "the 'South' is characterized by socioeconomic and political backwardness, measured against Western values and standards."
Mass media has also played a role in what information the people in developing countries receive. The news often covers developed countries and creates an imbalance of information flow.
See also
BRICS, CIVETS, MINT, VISTA
East–West dichotomy
Global West
Global East
First World
Global majority, roughly corresponding to Global South peoples
Golden billion
Group of 77
Inglehart–Welzel cultural map of the world
International Solar Alliance
Non-Aligned Movement
North–South Centre, an institution of the Council of Europe, awarding the North–South Prize
North–South model, in economics theory
North–South Summit, the only North–South summit ever held, with 22 heads of state and government taking part
Northern and southern China
Three-world model
World-systems theory
Fourth World
Subregions of Global North
Arctic Circle
Global Northwest
North Atlantic
NATO
North Pacific
Subregions of Global South
Afro-Asia
Global Southeast
Notes
References
External links
Share The World's Resources: The Brandt Commission Report, a 1980 report by a commission led by Willy Brandt that popularized the terminology
Brandt 21 Forum, a recreation of the original commission with an updated report (information on original commission at site)
Demographics
Dichotomies
Economic country classifications
Economic globalization
Imperialism studies
Geographical neologisms
Population geography
1960s neologisms
Economic geography
The woman question
In historiography, querelle des femmes ("dispute of women") indicates an early-modern debate on the nature of women. This literary genre developed in Italian and French early humanist circles and was led by numerous women scholars, who wrote in Latin and the vernacular to counter dominant misogynistic literature.
While the French phrase querelle des femmes deals specifically with the late medieval and Renaissance periods, the phrase woman question came to indicate feminist campaigns for social change after the 1700s, culminating in the later 19th century with women's struggle to gain more recognition and relevance in modern industrialized societies. Issues of women's suffrage, reproductive rights, bodily autonomy, property rights, legal rights, medical rights, and marriage increasingly concerned public opinion in newspapers, political rallies and manifestos, conferences, pamphlets, and intellectual discussion. While women were leading the debate over a change in the roles played by women in society, they initially represented a minority voice. Issues of marriage and sexual freedom often divided female public opinion.
Context
The querelle des femmes or "dispute of women" originally referred to a literary genre and broad debate that originated in humanistic and aristocratic circles in the Italian peninsula and France during the early modern period, regarding the nature of women, their capabilities, and whether they should be permitted to study, write, or govern in the same manner as men. Both in the scholarly and popular sphere, authors criticized and praised women's natures, arguing for or against their capacity to be educated in the same manner as men. As classical Aristotelianism held that women are incapable of reason, many argued that women's nature prevented them from higher learning. As the debate developed, some agreed that men were not naturally more intelligent than women – but argued that the female nature also prevented them from taking higher learning seriously. In addition, there was great controversy over Classical notions of women as inherently defective; literate women such as Christine de Pizan, Laura Cereta, Marguerite de Navarre, or Moderata Fonte refuted misogynistic attacks against women as a whole. While this debate was deeply meaningful and personal to some of the authors who wrote in support of or against women, participation in the querelle des femmes was also viewed as an intellectual exercise.
A resurgence in the debate over the nature and role of women is illustrated by the Romantic movement's exploration in fiction and drama (and opera) of the nature of "man", of human beings as individuals and as members of society. Conflict between women's prescribed roles, their own values, and their perceptions of self are prominent in such works as Die Walküre, Effi Briest, Madame Bovary, Middlemarch, Anna Karenina, A Doll's House, and Hedda Gabler. Each of these addresses women's emotional, social, economic, and religious lives, highlighting the ways in which "the woman question" had disrupted notions of a static nature which all women share.
History
First use and traditional debate
The term was first used in France: the querelle des femmes (literally, 'dispute of women'). From 1450 into the years that witnessed the beginning of the Reformation, institutions controlled by the Catholic Church had come into question. Secular states had begun to form in early modern Europe, and the feudal system was overtaken by centralized governments. This disruption extended to the relationships between men and women, and the Renaissance brought a contraction of individual freedom for women that men did not experience. These changes were justified through a number of arguments which referred to the inherent nature of women as subordinate to men.
On one side of the quarrel, many argued that women were inferior to men because man was created by God first and was therefore stronger and more important. Also, much of Christianity throughout the ages has viewed women as the Daughters of Eve, the original temptress responsible for humanity's expulsion from the Garden of Eden. Augustine in particular understood women as having souls that were 'naturally more seductive', and emphasized their 'powerful inborn potential to corrupt'.
Religious justifications were not the only sources of information regarding woman's nature. As Renaissance humanism developed, there was great interest in returning to classical Greek and Roman philosophy. Classical philosophy held that women were inferior to men at a physical level, and this physical inferiority made them intellectually inferior as well. While the extent of this inferiority was hotly debated by the likes of Christine de Pizan and Moderata Fonte, women continued to be understood as inherently subordinate to men, and this was the basis for preventing women from attending universities or participating in the public sphere.
The 'defenders of women' on one side of the debate, according to Joan Kelly, "pointed out that the writings of the literate and the learned were distorted by what we now call sexism." They pointed out that accounts of women's deeds and nature were almost entirely written by men, many of whom had reasons to speak poorly of women. These writers, who were referred to as 'ladies' advocates' by the 17th and 18th centuries, promoted an empirical approach, which would measure the deeds and capabilities of women without bias. These arguments did not always insist that women were individuals, as modern feminists would argue, but often simply attempted to defend the 'nature' of women from slander.
1400s
One of the first women to answer 'the woman question' was Christine de Pizan. She published The Book of the City of Ladies in 1405, in which de Pizan narrated her learning of the value of women and their virtue. The book is also a response to the Romance of the Rose, one of the most widely read books of the period, which attacked women and the value of marriage. While de Pizan wrote this book to justify her place in the world of literature and publishing at the time, The Book of the City of Ladies can be considered one important source in early feminism.
In the 1480s, Bartolomeo Goggio argued the superiority of women in his "De laudibus mulierum" [On the Merits of Women], which was dedicated to Eleanor of Naples, Duchess of Ferrara.
1500s
Baldassare Castiglione contributed to the querelle in The Courtier in 1527, which voiced some support for the 'gentle' side of the debate, which favored women. In 1529, Heinrich Agrippa contended that men in society did not oppress women because of some natural law, but because they wanted to keep their social power and status. Agrippa argued for the nobility of women and thought women were created better than men. He argued that, in the first place, woman, being made better than man, received the better name: man was called Adam, which means Earth; woman Eva, which is by interpretation Life. Man was created from the dust of the earth, while woman was made from something far purer. Agrippa's metaphysical argument was that creation itself is a circle that began when God created light and ended when he created woman. Therefore, women and light occupy adjacent points on the circle of creation and must have similar properties of purity.
1600s to 1700s
Moderata Fonte's The Worth of Women was published in 1600, with a preface from her daughter Cecilia and her son Pietro. According to her daughter, Moderata Fonte (Modesta di Pozzo di Zorzi) finished writing the dialog in 1592, before dying in childbirth. The dialog collected poetry and dialogues which proclaimed the value of women, arguing that their intelligence and capability to rule cannot be recognized if they are not educated. The tradition of defending women from specific attacks continued into the 1600s and 1700s:
"Another poet, Sarah Fyge Field Egerton, appears to have written The Female Advocate (1686) – at age 14! – in reply to the "late satire on women" quoted for its obscenity; Judith Drake penned An Essay in Defence of the Female Sex (1696); and women of low and high station continued the polemic in the eighteenth century." – Joan Kelly, "Early Feminist Theory and the Querelle des Femmes"
The social and religious mores and norms affecting the perception of women's behavior in the early modern era depended on the woman's social class, not only in terms of the expectations society had of her, but because autonomy and the ability to make choices, the legal protections and dignity afforded by privilege, and access to education were not available to all women. The inequality in society was not only between men and women, but also among women of differing social and economic status. These matters took their place in the social discourse beginning only in the early 1700s, and there is little evidence that the querelle des femmes occupied a significant role in the public consciousness prior to the 18th century.
Victorian era
The term querelle des femmes was used in England in the Victorian era, stimulated, for example, by the Reform Act 1832 and the Reform Act 1867. The Industrial Revolution brought hundreds of thousands of lower-class women into factory jobs, presenting a challenge to traditional ideas of a woman's place.
A prime issue of contention was whether what was referred to as women's "private virtue" could be transported into the public arena; opponents of women's suffrage claimed that bringing women into public would dethrone them, and sully their feminine virtue.
Areas of discussion
The woman question was raised in many different social areas. For example, in the second half of the 19th century, in the context of religion, extensive discussion within the United States took place on the participation of women in church. In the Methodist Episcopal Church, the woman question was the most pressing issue in the 1896 conference.
See also
A Vindication of the Rights of Woman
Beguinage, community living for lay women
The Book of the City of Ladies
The Book of the Courtier
References
Further reading
Case, Holly. The Age of Questions (Princeton University Press, 2018) excerpt
Eliza Lynn Linton in the Saturday Review, reprinted as Modern Women and What is Said of Them (1868)
Sarah Stickney Ellis (1839). The Women of England: Their Social Duties and Domestic Habits (11th ed.). London; Paris: Fisher, Son & Co.
Alexandra Kollontai (1909). "The Social Basis of the Woman Question"
Bernard Shaw: Candida and Mrs. Warren's Profession
Feminism and history
Women's rights
Religion in Europe
Religion has been a major influence on the societies, cultures, traditions, philosophies, artistic expressions and laws within present-day Europe. The largest religion in Europe is Christianity. However, irreligion and practical secularisation are also prominent in some countries. In Southeastern Europe, three countries (Bosnia and Herzegovina, Kosovo and Albania) have Muslim majorities, with Christianity being the second-largest religion in those countries. Ancient European religions included veneration for deities such as Zeus. Modern revival movements of these religions include Heathenism, Rodnovery, Romuva, Druidry, Wicca, and others. Smaller religions include Indian religions, Judaism, and some East Asian religions, which are found in their largest groups in Britain, France, and Kalmykia.
Little is known about the prehistoric religion of Neolithic Europe. Bronze and Iron Age religion in Europe as elsewhere was predominantly polytheistic (Ancient Greek religion, Ancient Roman religion, Basque mythology, Finnish paganism, Celtic polytheism, Germanic paganism, etc.).
The Roman Empire officially adopted Christianity in AD 380. During the Early Middle Ages, most of Europe underwent Christianization, a process essentially complete with the Christianization of Scandinavia in the High Middle Ages. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christendom", and many even consider Christianity as the unifying belief that created a European identity, especially since Christianity in the Middle East was marginalized by the rise of Islam from the 8th century. This confrontation led to the Crusades, which ultimately failed militarily, but were an important step in the emergence of a European identity based on religion. Despite this, traditions of folk religion continued at all times, largely independent from institutional religion or dogmatic theology.
The Great Schism of the 11th century and Reformation of the 16th century tore apart Christendom into hostile factions, and following the Age of Enlightenment of the 18th century, atheism and agnosticism have spread across Europe. Nineteenth-century Orientalism contributed to a certain popularity of Hinduism and Buddhism, and the 20th century brought increasing syncretism, New Age, and various new religious movements divorcing spirituality from inherited traditions for many Europeans. Recent times have seen increased secularisation and religious pluralism.
Religiosity
Some European countries have experienced a decline in church membership and church attendance. A relevant example of this trend is Sweden, where the Church of Sweden, the state church until 2000, claimed to have 82.9% of the Swedish population as its flock in 2000. Surveys showed this had dropped to 72.9% by 2008 and to 56.4% by 2019. Moreover, in the 2005 Eurobarometer survey 23% of the Swedish population said that they do not believe there is any sort of spirit, God or life force, and in the 2010 Eurobarometer survey 34% said the same.
Gallup survey 2008–2009
During 2008–2009, a Gallup survey asked in several countries the question "Is religion important in your daily life?" The table and map below show the percentage of people who answered "Yes" to the question.
During 2007–2008, a Gallup poll asked in several countries the question "Does religion occupy an important place in your life?" The table on the right shows the percentage of people who answered "No".
Eurobarometer survey 2010
The 2010 Eurobarometer survey found that, on average, 51% of the citizens of the EU member states state that they "believe there is a God", 26% "believe there is some sort of spirit or life force" while 20% "don't believe there is any sort of spirit, God or life force". 3% declined to answer.
According to a recent study (Dogan, Mattei, Religious Beliefs in Europe: Factors of Accelerated Decline), 47% of French people declared themselves as agnostics in 2003. This situation is often called "Post-Christian Europe". A decrease in religiousness and church attendance in Denmark, Belgium, France, Germany, Netherlands, and Sweden has been noted, despite a concurrent increase in some countries like Greece (2% in 1 year). The Eurobarometer survey must be taken with caution, however, as there are discrepancies between it and national census results. For example, in the United Kingdom, the 2001 census revealed over 70% of the population regarded themselves as "Christian" with only 15% professing to have "no religion", though the wording of the question has been criticized as "leading" by the British Humanist Association. Romania, one of the most religious countries in Europe, witnessed a threefold increase in the number of atheists between 2002 and 2011, as revealed by the most recent national census.
The following is a list of European countries ranked by religiosity, based on the rate of belief, according to the Eurobarometer survey 2010. The 2010 Eurobarometer survey asked whether the person "believes there is a God", "believes there is some sort of spirit or life force", or "doesn't believe there is any sort of spirit, God or life force".
The decrease in theism between 1981 and 1999 is illustrated by the World Values Survey, both for traditionally strongly theist countries (Spain: 86.8%:81.1%; Ireland 94.8%:93.7%) and for traditionally secular countries (Sweden: 51.9%:46.6%; France 61.8%:56.1%; Netherlands 65.3%:58.0%). Some countries nevertheless show an increase of theism over the period (Italy 84.1%:87.8%, Denmark 57.8%:62.1%). For a comprehensive study on Europe, see Mattei Dogan's "Religious Beliefs in Europe: Factors of Accelerated Decline" in Research in the Social Scientific Study of Religion.
Eurobarometer survey 2019
According to the 2019 Eurobarometer survey on religiosity in the European Union, Christianity is the largest religion in the European Union, accounting for 64% of the EU population, down from 72% in 2012. Catholics are the largest Christian group in the EU, accounting for 41% of the EU population, while Eastern Orthodox make up 10%, Protestants 9%, and other Christians 4%. Non-believers/agnostics account for 17%, atheists 10%, and Muslims 2% of the EU population. 3% refused to answer or did not know.
Pew Research Poll
According to the 2012 Global Religious Landscape survey by the Pew Research Center, 75.2% of the Europe residents are Christians, 18.2% are irreligious, atheist or agnostic, 5.9% are Muslims and 0.2% are Jews, 0.2% are Hindus, 0.2% are Buddhist, and 0.1% adhere to other religions. According to the 2015 Religious Belief and National Belonging in Central and Eastern Europe survey by the Pew Research Center, 57.9% of the Central and Eastern Europeans identified as Orthodox Christians, and according to a 2018 study by the Pew Research Center, 71.0% of Western Europeans identified as Christians, 24.0% identified as religiously unaffiliated and 5% identified as adhere to other religions. According to the same study a large majority (83%) of those who were raised as Christians in Western Europe still identify as such, and the remainder mostly self-identify as religiously unaffiliated.
Notes to the accompanying Pew Research Center table: 13% of respondents in Hungary identify as Presbyterian; in Estonia and Latvia, 20% and 19%, respectively, identify as Lutherans; and in Lithuania, 14% say they are "just a Christian" and do not specify a particular denomination (these respondents are included in the "other" category). Respondents who answered "don't know" or refused are not shown, and figures may not add to subtotals due to rounding.
Abrahamic religions
Bahá'í Faith
The first newspaper reference to the religious movement was coverage of the Báb, whom Bahá'ís consider the forerunner of the Bahá'í Faith, in The Times on 1 November 1845, only a little over a year after the Báb first started his mission. British, Russian, and other diplomats, businessmen, scholars, and world travelers also took note of the precursor Bábí religion, most notably the Frenchman Arthur de Gobineau, who in 1865 wrote the first and most influential account. In April 1890 Edward G. Browne of Cambridge University met Bahá'u'lláh, the prophet-founder of the Bahá'í Faith, and left the only detailed description by a Westerner.
Starting in the 1890s, Europeans began to convert to the religion. In 1910 Bahá'u'lláh's son and appointed successor, 'Abdu'l-Bahá, embarked on a three-year journey that included Europe and North America, and then wrote a series of letters, compiled together in the book titled Tablets of the Divine Plan, which included mention of the need to spread the religion in Europe following the war.
A 1925 list of "leading local Bahá'í Centres" of Europe listed organized communities in many countries – the largest being in Germany. However, the religion was soon banned in a couple of countries: in 1937 Heinrich Himmler disbanded the Bahá'í Faith's institutions in Germany because of its 'international and pacifist tendencies', and in Russia in 1938 "monstrous accusations" against Bahá'ís and a Soviet government policy of oppression of religion resulted in Bahá'í communities in 38 cities across Soviet territories ceasing to exist. However, the religion recovered in both countries. The religion has generally spread such that in recent years the Association of Religion Data Archives estimated the Bahá'ís in European countries to number from hundreds to tens of thousands.
Christianity
The majority of Europeans describe themselves as Christians, divided into a large number of denominations. Christian denominations are usually classed in three categories: Catholicism (comprising the Roman-Latin Catholics and the Eastern Greek and Armenian Catholics), Orthodoxy (comprising the Eastern Byzantine Orthodox and the Armenian Apostolic Church, which is within the Oriental Orthodox communion) and Protestantism (a diverse group including Lutheranism, Calvinism and Anglicanism as well as numerous minor denominations, including Baptists, Methodism, Evangelicalism, Pentecostalism, etc.).
Christianity, and more specifically the Catholic Church, has played an important part in the shaping of Western civilization since at least the 4th century. Historically, Europe has been the center and "cradle of Christian civilization".
European culture, throughout most of its recent history, has been heavily influenced by Christian belief and has been nearly equivalent to Christian culture. The Christian culture was one of the more dominant forces to influence Western civilization, concerning the course of philosophy, art, music, science, social structure and architecture. The civilizing influence of Christianity includes social welfare, founding hospitals, economics (as the Protestant work ethic), politics, architecture, literature and family life.
Christianity is still the largest religion in Europe. According to the 2019 Eurobarometer survey on religiosity in the European Union, Christianity was the largest religion in the European Union, accounting for 64% of the EU population, down from 72% in 2012. Catholics were the largest Christian group in the EU, accounting for 41% of the EU population, while Eastern Orthodox made up 10%, Protestants 9%, and other Christians 4%. According to a 2010 study by the Pew Research Center, 76.2% of the European population identified themselves as Christians, constituting in absolute terms the world's largest Christian population.
According to scholars, in 2017 Europe's population was 77.8% Christian (up from 74.9% in 1970); these changes were largely a result of the collapse of Communism and of people switching to Christianity in the former Soviet Union and Eastern Bloc countries.
Christian denominations
Catholicism (mainly the Roman-Latin Catholic Church, with minorities belonging to the Greek Catholic Churches of the Eastern European regions and to the Armenian Catholic Church in Armenia and its diaspora) is the largest denomination, with adherents mostly in Latin Europe (which includes France, Italy, Spain, Portugal, Malta, San Marino, Monaco and Vatican City); southern [Walloon] Belgium, the Czech Republic, Ireland, Lithuania, Poland, Hungary, Slovakia, Slovenia, Croatia, western Ukraine and parts of Bosnia and Herzegovina (mostly in predominantly Croat areas); but also the southern parts of Germanic Europe (which includes Austria, Luxembourg, northern Flemish Belgium, southern and western Germany, parts of the Netherlands, parts of Switzerland, and Liechtenstein).
Orthodox Christianity (the churches are in full communion, i.e. the national churches are united in theological concept and part of the One, Holy, Catholic and Apostolic Eastern Orthodox Church)
Ecumenical Patriarchate of Constantinople
Russian Orthodox Church
Serbian Orthodox Church
Romanian Orthodox Church
Church of Greece
Bulgarian Orthodox Church
Georgian Orthodox Church
Finnish Orthodox Church
Cypriot Orthodox Church
Albanian Orthodox Church
Polish Orthodox Church
Church of the Czech Lands and Slovakia
Ukrainian Orthodox Church
Turkish Orthodox Church
Macedonian Orthodox Church – Ohrid Archbishopric
Montenegrin Orthodox Church
Oriental Orthodoxy
Armenian Apostolic Church
Armenian Patriarchate of Constantinople
Protestantism
Lutheranism
Independent Evangelical-Lutheran Church
Danish National Church
Estonian Evangelical Lutheran Church
Evangelical Lutheran Church of Finland
United Protestant Church of France
Protestant Church in Germany
Evangelical-Lutheran Church in Hungary
Evangelical Lutheran Church of Latvia
Church of Norway
Church of Sweden
Anglicanism
Church of England
Church of Ireland
Scottish Episcopal Church
Church in Wales
Lusitanian Catholic Apostolic Evangelical Church
Spanish Reformed Episcopal Church
Calvinism
United Reformed Church
Evangelical Presbyterian Church in England and Wales
Reformed Church in Hungary
Church of Scotland
Presbyterian Church in Ireland
Methodist Church of Great Britain
Protestant Church in the Netherlands (Neo-Calvinism)
United Protestant Church of France
Swiss Reformed Church
Restorationism
The Church of Jesus Christ of Latter-day Saints
Jehovah's Witnesses
Other
Baptist Union of Great Britain
Baptist Union of Sweden
Bruderhof Communities
Seventh-day Adventist Church
There are numerous minor Protestant movements, including various Evangelical congregations.
Islam
Islam came to parts of European islands and coasts on the Mediterranean Sea during the 8th-century Muslim conquests. In the Iberian Peninsula and parts of southern France, various Muslim states existed before the Reconquista; Islam spread briefly in southern Italy through the Emirate of Sicily and the Emirate of Bari. During the Ottoman expansion, Islam spread into the Balkans and even parts of Central Europe. Muslims have also been historically present in Ukraine (Crimea and vicinity, with the Crimean Tatars), as well as modern-day Russia, beginning with Volga Bulgaria in the 10th century and the conversion of the Golden Horde to Islam. In recent years, Muslims have migrated to Europe as residents and temporary workers.
According to the Pew Forum, the total number of Muslims in Europe in 2010 was about 44 million (6%). While the total number of Muslims in the European Union in 2007 was about 16 million (3.2%). Data from the 2000s for the rates of growth of Islam in Europe showed that the growing number of Muslims was due primarily to immigration and higher birth rates.
Muslims make up 99% of the population in Turkey, Northern Cyprus, 96% in Kosovo, 56% in Albania, 51% in Bosnia and Herzegovina, 32.17% in North Macedonia, 20% in Montenegro, between 10 and 15% in Russia, 7–9% in France, 8% in Bulgaria, 6% in the Netherlands, 5% in Denmark, United Kingdom and Germany, just over 4% in Switzerland and Austria, and between 3 and 4% in Greece.
A survey conducted by the Pew Research Center in 2016 found that Muslims make up 4.9% of Europe's population. According to the same study, conversion does not add significantly to the growth of the Muslim population in Europe, with roughly 160,000 more people leaving Islam than converting to Islam between 2010 and 2016.
Judaism
The Jews were dispersed within the Roman Empire from the 2nd century. At one time Judaism was practiced widely throughout the European continent; throughout the Middle Ages, Jews were accused of ritual murder and faced pogroms and legal discrimination. The Holocaust perpetrated by Nazi Germany decimated the Jewish population, and today, France is home to the largest Jewish community in Europe with 1% of the total population (between 483,000 and 500,000 Jews). Other European countries with notable Jewish populations include the United Kingdom (291,000 Jews), Germany (119,000), and Russia (194,000) which is home to Eastern Europe's largest Jewish community. The Jewish population of Europe in 2010 was estimated to be approximately 1.4 million (0.2% of European population) or 10% of the world's Jewish population.
Deism
During the Enlightenment, Deism became influential especially in France, Germany, the Netherlands, and the United Kingdom. Interpretations of the Bible then common were challenged by scientific concepts such as the heliocentric model of the universe. Notable early deists include Voltaire, Kant, and Mendeleev.
Irreligion
The trend towards secularism during the 20th and 21st centuries has a number of reasons, depending on the individual country:
France has been traditionally laicist since the French Revolution. Today the country is 25% to 32% irreligious. The remaining population is made up evenly of both Christians and people who believe in a god or some form of spiritual life force, but are not involved in organized religion. French society is still secular overall.
Some parts of Eastern Europe were secularized as a matter of state doctrine under communist rule in the countries of the former Eastern Bloc. Albania was an officially (and constitutionally binding) atheist state from 1967 to 1991. The countries where the most people reported no religious belief were France (33%), the Czech Republic (30%), Belgium (27%), Netherlands (27%), Estonia (26%), Germany (25%), Sweden (23%) and Luxembourg (22%). The region of Eastern Germany, which was also under communist rule, is by far the least religious region in Europe. Other post-communist countries, however, have seen the opposite effect, with religion being very important in countries such as Romania, Lithuania and Poland.
The trend towards secularism has been less pronounced in the traditionally Catholic countries of Mediterranean Europe. Greece as the only traditionally Eastern Orthodox country in Europe which has not been part of the communist Eastern Bloc also retains a very high religiosity, with in excess of 95% of Greeks adhering to the Greek Orthodox Church.
According to a Pew Research Center Survey in 2012 the religiously unaffiliated (atheists and agnostics) make up about 18.2% of the European population in 2010. According to the same survey the religiously unaffiliated make up the majority of the population in only two European countries: Czech Republic (76%) and Estonia (60%). A newer study (released in 2015) found that in the Netherlands there is also an irreligious majority of 68%.
Atheism and agnosticism
During the late 20th and early 21st centuries, atheism and agnosticism have increased, with falling church attendance and membership in various European countries. The 2010 Eurobarometer survey found that on total average, of the EU28 population, 51% "believe there is a God", 26% "believe there is some sort of spirit or life force", and 20% "don't believe there is any sort of spirit, God or life force".
Across the EU, belief was higher among women, older people, those with a strict upbringing, those with the lowest levels of formal education, and those leaning towards right-wing politics. Results varied widely between countries.
According to a survey measuring religious identification in the European Union in 2019 by Eurobarometer, 10% of EU citizens identify themselves as atheists. In that survey, the top seven European countries with the most people who viewed themselves as atheists were the Czech Republic (22%), France (21%), Sweden (16%), Estonia (15%), Slovenia (14%), Spain (12%) and the Netherlands (11%). 17% of EU citizens called themselves non-believers or agnostics, and this percentage was highest in the Netherlands (41%), Czech Republic (34%), Sweden (34%), United Kingdom (28%), Estonia (23%), Germany (21%) and Spain (20%).
Modern Paganism
Germanic
Heathenism or Esetroth (Icelandic: Ásatrú), and the organised form Odinism, are names for the modern folk religion of the Germanic nations.
In the United Kingdom Census 2001, 300 people registered as Heathen in England and Wales. However, many Heathens followed the advice of the Pagan Federation (PF) and simply described themselves as "Pagan", while other Heathens did not specify their religious beliefs. In the 2011 census, 1,958 people self-identified as Heathen in England and Wales. A further 251 described themselves as Reconstructionist and may include some people reconstructing Germanic paganism.
Ásatrúarfélagið (Esetroth Fellowship) was recognized as an official religion by the Icelandic government in 1973. For its first 20 years it was led by farmer and poet Sveinbjörn Beinteinsson. By 2003, it had 777 members, and by 2014, it had 2,382 members, corresponding to 0.8% of Iceland's population. In Iceland, Germanic religion has an impact larger than the number of its adherents would suggest.
In Sweden, the Swedish Forn Sed Assembly (Forn Sed, or the archaic Forn Siðr, means "Old Custom") was formed in 1994 and has since 2007 been recognized as a religious organization by the Swedish government. In Denmark, Forn Siðr was formed in 1999 and was officially recognized in 2003. The Norwegian Åsatrufellesskapet Bifrost (Esetroth Fellowship Bifrost) was formed in 1996; as of 2011, the fellowship has some 300 members. Foreningen Forn Sed was formed in 1999 and has been recognized by the Norwegian government as a religious organization. In Spain there is the Odinist Community of Spain – Ásatrú.
Roman
Roman polytheism, also known as Religio Romana (Roman religion) in Latin or the Roman Way to the Gods (in Italian 'Via romana agli Déi'), is alive in small communities and loosely related organizations, mainly in Italy.
Druidry
The religious development of Druidry was largely influenced by Iolo Morganwg. Modern practices aim to imitate those of the Celtic peoples of the Iron Age.
Official religions
A number of countries in Europe have official religions, including Greece (Orthodox), Liechtenstein, Malta, Monaco, the Vatican City (Catholic); Armenia (Apostolic Orthodoxy); Denmark, Iceland (Lutheran); and the United Kingdom (England alone) (Anglican). In Switzerland, some cantons are officially Catholic, others Reformed Protestant. Some Swiss villages even have their religion as well as the village name written on the signs at their entrances.
Georgia, while technically having no official church, has a special constitutional agreement with the Georgian Orthodox Church, which enjoys de facto privileged status. Much the same applies in Germany with the Evangelical Church, the Roman Catholic Church, and the Jewish community. In Finland, both the Finnish Orthodox Church and the Lutheran Church are official. England, a part of the United Kingdom, has Anglicanism as its official religion. Scotland, another part of the UK, has Presbyterianism as its national church, but it is no longer "official". In Sweden, the national church used to be Lutheran, but it has not been "official" since 2000. Azerbaijan, the Czech Republic, Germany, France, Ireland, Italy, Luxembourg, Portugal, Serbia, Romania, Russia, Spain and Turkey are officially secular.
Indian religions
Buddhism
Buddhism is thinly spread throughout Europe, and has been the fastest growing religion there in recent years, with about 3 million adherents. In Kalmykia, Tibetan Buddhism is prevalent.
Hinduism
Hinduism is mainly practised among Indian immigrants. It has been growing rapidly in recent years, notably in the United Kingdom, France, the Netherlands and Italy. In 2010, there were an estimated 1.4 million Hindu adherents in Europe.
Jainism
Jainism has a small membership, mainly among Indian immigrants in Belgium and the United Kingdom, as well as several converts from western and northern Europe.
Sikhism
Sikhism has nearly 700,000 adherents in Europe. Most of the community lives in the United Kingdom (450,000) and Italy (100,000). Around 10,000 Sikhs live in Belgium and France. The Netherlands and Germany have a Sikh population of 22,000. All other countries, such as Greece, have 5,000 or fewer Sikhs.
Other religions
Other religions represented in Europe include:
Animism
Confucianism
Eckankar
Ietsism
Raëlism
Beliefs of the Romani people
Romuva
Satanism
Shinto
Spiritualism
Taoism
Thelema
Unitarian Universalism
Yazidism
Zoroastrianism
Rastafari communities in the United Kingdom, France, Spain, Portugal, Italy and elsewhere.
Traditional African Religions (including Muti), mainly in the United Kingdom and France, including
West African Vodun and Haitian Vodou (Voodoo), mainly among West African and black Caribbean immigrants in the UK and France.
Religious distribution
Central Europe
Eastern Europe
Northern Europe
Southeastern Europe (Balkans)
Southern Europe
Western Europe
See also
Buddhism by country
Christianity in Europe
Europeanism
Hinduism by country
Irreligion (no faith) by country
Islam by country
Judaism by country
List of religious populations
Major world religions
Protestantism by country
Post Christianity
Religion in the European Union
Roman Catholicism by country
References
External links
Eurel: sociological and legal data on religions in Europe and beyond
Prehistoric technology
Prehistoric technology is technology that predates recorded history. History is the study of the past using written records. Anything prior to the first written accounts of history is prehistoric, including earlier technologies. About 2.5 million years before writing was developed, technology began with the earliest hominids who used stone tools, which they first used to hunt food, and later to cook.
There are several factors that made the evolution of prehistoric technology possible or necessary. One of the key factors is behavioral modernity of the highly developed brain of Homo sapiens capable of abstract reasoning, language, introspection, and problem-solving. The advent of agriculture resulted in lifestyle changes from nomadic lifestyles to ones lived in homes, with domesticated animals, and land farmed using more varied and sophisticated tools. Art, architecture, music and religion evolved over the course of the prehistoric periods.
Old World
Stone Age
The Stone Age is a broad prehistoric period during which stone was widely used in the manufacture of implements with a sharp edge, a point, or a percussion surface. The period lasted roughly 2.5 million years, from the time of early hominids to Homo sapiens in the later Pleistocene era, and largely ended between 6000 and 2000 BCE with the advent of metalworking.
The Stone Age lifestyle was that of hunter-gatherers who traveled to hunt game and gather wild plants, with minimal changes in technology. As the last glacial period of the current ice age neared its end (about 12,500 years ago), large animals like the mammoth and bison antiquus became extinct and the climate changed. Humans adapted by maximizing the resources in local environments, gathering and eating a wider range of wild plants and hunting or catching smaller game. Domestication of plants and animals with early stages in the Old World (Afro-Eurasia) Mesolithic and New World (Americas) Archaic periods led to significant changes and reliance on agriculture in the Old World Neolithic and New World Formative stage. The agricultural life led to more settled existences and significant technological advancements.
Although Paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. Such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the Venus of Willendorf. Human remains also provide direct evidence, both through the examination of bones, and the study of mummies. Though concrete evidence is limited, scientists and historians have been able to form significant inferences about the lifestyle and culture of various prehistoric peoples, and the role technology played in their lives.
Lower Paleolithic
The Lower Paleolithic period was the earliest subdivision of the Paleolithic or Old Stone Age. It spans the time from around 2.5 million years ago when the first evidence of craft and use of stone tools by hominids appears in the current archaeological record, until around 300,000 years ago, spanning the Oldowan ("mode 1") and Acheulean ("mode 2") lithic technology.
Early humans (hominids) used stone tool technology, such as a hand axe, that was similar to that used by primates, which are found to have intelligence levels of modern children aged 3 to 5 years. Intelligence and use of technology did not change much for millions of years. The first Homo species began with Homo habilis ("handy man") about 2.3 million years ago, who created stone tools called Oldowan tools. Homo ergaster lived in eastern and southern Africa from roughly 1.9 to 1.4 million years ago and used more diverse and sophisticated stone tools than its predecessor, Homo habilis, including having refined the inherited Oldowan tools and developed the first Acheulean bifacial axes.
Homo erectus ("upright man") lived about in West Asia and Africa and is thought to be the first hominid to hunt in coordinated groups, use complex tools, and care for infirm or weaker companions.<ref>New discovery suggests Homo erectus originated from Asia Daily News & Analysis. June 8, 2011. Retrieved December 17, 2011.</ref> Homo antecessor the earliest hominid in Northern Europe lived from 1.2 million to 800,000 years ago and used stone tools.Ghosh, Pallab. (July 7, 2010). "Humans' early arrival in Britain." BBC Retrieved July 8, 2010. Homo heidelbergensis lived between 600,000 and 400,000 years ago and used stone tool technology similar the Acheulean tools used by Homo erectus.
European and Asian sites dating back 1.5 million years seem to indicate controlled use of fire by Homo erectus. A site in northern Israel from about 690,000 to 790,000 years ago suggests that humans could light fires. Homo heidelbergensis may have been the first species to bury their dead, about 500,000 years ago.
Middle Paleolithic
The Middle Paleolithic period occurred in Europe and the Near East, during which the Neanderthals lived (c. 300,000–28,000 years ago). The earliest evidence (Mungo Man) of settlement in Australia dates to around 40,000 years ago, when modern humans likely crossed from Asia by island-hopping. The Bhimbetka rock shelters exhibit the earliest traces of human life in India, some of which are approximately 30,000 years old.

Homo neanderthalensis used Mousterian stone tools that date back to around 300,000 years ago and include smaller, knife-like and scraper tools. They buried their dead in shallow graves along with stone tools and animal bones, although the reasons and significance of the burials are disputed ("Evolving in their graves: early burials hold clues to human origins", Findarticles.com, December 15, 2001).

Homo sapiens, the only living species in the genus Homo, originated in Africa about 200,000 years ago. As compared to their predecessors, Homo sapiens had a more complex brain structure, which provided better coordination for manipulating objects and far greater use of tools. There was art created during this period. Intentional burial, particularly with grave goods, may be one of the earliest detectable forms of religious practice, since it may signify a "concern for the dead that transcends daily life." The earliest undisputed human burial so far dates back 130,000 years. Human skeletal remains stained with red ochre were discovered in the Skhul cave at Qafzeh, Israel, with a variety of grave goods.
Upper Paleolithic Revolution
During the Upper Paleolithic Revolution, advancements in human intelligence and technology changed radically with the advent of behavioral modernity between 60,000 and 30,000 years ago. Behavioral modernity is a set of traits that distinguish Homo sapiens from extinct hominid lineages. Homo sapiens reached full behavioral modernity around 50,000 years ago due to a highly developed brain capable of abstract reasoning, language, introspection, and problem-solving.
Aurignacian tools, such as stone-bladed tools, tools made of antlers, and tools made of bones were created during this period. People began creating clothing. What appear to be sewing needles were found around 40,000 years ago and dyed flax fibers dated 36,000 BP were found in a prehistoric cave in the Republic of Georgia. Human beings may have begun wearing clothing as far back as 190,000 years ago.
Cultural aspects emerged, such as art of the Upper Paleolithic period, which included cave painting, sculpture such as the Venus figurines, carvings and engravings of bone and ivory. The most common subject matter was large animals that were hunted by the people of the time.
The Cave of Altamira and Paleolithic Cave Art of Northern Spain and Côa Valley Paleolithic Art are examples of such artwork. Musical instruments such as flutes emerged during this period.
Mesolithic period
The Mesolithic period was a transitional era between the Paleolithic hunter-gatherers, beginning with the Holocene warm period around 11,660 BP and ending with the Neolithic introduction of farming, the date of which varied in each geographical region. Adaptation was required during this period due to climate changes that affected environment and the types of available food.
Small stone tools called microliths, including small bladelets and microburins, emerged during this period. For instance, spears or arrows were found at the earliest known Mesolithic battle site at Cemetery 117 in the Sudan. Holmegaard bows were found in the bogs of Northern Europe dating from the Mesolithic period. These microliths point to the use of projectile technology since they are widely assumed to have formed the tips and barbs of arrows. This is demonstrated by mesolithic assemblages found in southwest Germany, which revealed two types of projectiles used: arrows with transverse, trapezoidal stone tips and large barbed antler "harpoons". These implements indicate the nature of human adaptation to the environment during the period, describing the Mesolithic societies as hunter-gatherers.
Neolithic Revolution
The Neolithic Revolution was the first agricultural revolution, representing a transition from hunting and gathering nomadic life to an agriculture existence. It evolved independently in six separate locations worldwide circa 10,000–7,000 years BP (8,000–5,000 BC). The earliest known evidence exists in the tropical and subtropical areas of southwestern/southern Asia, northern/central Africa and Central America.
There are some key defining characteristics. The introduction of agriculture resulted in a shift from nomadic to more sedentary lifestyles, and the use of agricultural tools such as the plough, digging stick and hoe made agricultural labor more efficient. Animals were domesticated, including dogs. Another defining characteristic of the period was the emergence of pottery, and, in the late Neolithic period, the wheel was introduced for making pottery.
Neolithic architecture included houses and villages built of mud-brick and wattle and daub, and the construction of storage facilities, tombs and monuments. Copper metalworking was employed as early as 9000 BC in the Middle East; a copper pendant found in northern Iraq dated to 8700 BC. Ground and polished stone tools continued to be created and used during the Neolithic period.
Numeric record keeping evolved from a system of counting using small clay tokens that began in Sumer about 8000 BC.
Bronze Age
The Stone Age developed into the Bronze Age after the Neolithic Revolution. The Neolithic Revolution involved radical changes in agricultural technology which included development of agriculture, animal domestication, and the adoption of permanent settlements.
The Bronze Age is characterised by the smelting of copper and of bronze, an alloy of tin and copper, to create implements and weapons. Polished stone tools continued to be used due to their abundance compared with the less common metals (especially tin).
This technological trend apparently began in the Fertile Crescent, and spread outward.
Iron Age
The Iron Age involved the adoption of iron or steel smelting technology, either by casting or forging. Iron replaced bronze and made it possible to produce tools that were stronger, lighter and cheaper than bronze equivalents (The Junior Encyclopædia Britannica: A Reference Library of General Knowledge. Chicago: E.G. Melvin, 1897). The best tools and weapons were made from steel.
Other societal changes often accompanied the introduction of iron, including changed practices in art, religion and agriculture. The Iron Age ends with the beginning of the historic periods, generally marked by the development of written language, which enabled the creation of historical records.
The timing of the adoption of iron depended upon "the availability of iron ore and the state of knowledge". Iron was smelted in Egypt about 600 B.C., and iron replaced bronze in the Middle East about 1500 B.C. The Chinese began casting iron about 500 B.C., and their casting methods were the precursor of modern steel manufacturing. Most of Asia, however, did not adopt iron production until the historic period.
In Europe, iron was introduced about 1100 B.C. and had replaced bronze for weapons and tools by 500 B.C. Europeans made iron by smelting and forging, and integrated casting only in the Middle Ages. Large hill forts or oppida were built, either as refuges in time of war or sometimes as permanent settlements. Agricultural practices became more efficient with more effective and varied iron tools.
In Africa, iron was extracted from ore starting about 2000 B.C.
New World
The New World periods began with the crossing of the Paleo-Indians, Athabaskans, Aleuts and Eskimos along the Bering Land Bridge onto the North American continent.
The Paleo-Indians were the first people to enter, and subsequently inhabit, the Americas during the final glacial episodes of the late Pleistocene. Evidence suggests big-game hunters crossed the Bering Strait from Asia into North America over a land and ice bridge (Beringia) that existed between 45,000 and 12,000 BCE, following herds of large herbivores far into Alaska.
In their book Method and Theory in American Archaeology, Gordon Willey and Philip Phillips defined five cultural stages for the Americas: the three prehistoric stages of the Lithic, Archaic and Formative, and the two historic stages of the Classic and Post-Classic.
Lithic
The Lithic period occurred from 12,000 to 6,000 years before present and included the Clovis, Folsom and Plano cultures. The Clovis culture was long considered the first to use projectile points to hunt on the North American continent, but a pre-Clovis site at Manis, Washington, has since yielded evidence of projectile points used to hunt mastodons.
Archaic
The Archaic period in the Americas is dated from 8,000 to 2,000 years before present. People hunted small game, such as deer, antelope and rabbits, and gathered wild plants, moving seasonally between hunting and gathering sites. Late in the Archaic period, about 200–500 A.D., corn was introduced into the diet and pottery was made for storing and curing food.
Formative
The Formative stage followed the Archaic period in the Americas and continued until contact with Europeans. Cultures of this period include those of the Ancient Pueblo People, the Mississippian culture and the Olmec.
Cultures of the Formative stage are defined as possessing the technologies of pottery, weaving, and developed food production. Their social organization involved permanent towns and villages as well as the first ceremonial centers, and an early priestly class or theocracy was often present or in development.
See also
List of Stone Age art
Timeline of prehistory
Further reading
Fagan, Brian; Shermer, Michael; Wrangham, Richard. (2010). Science & Humanity: From Past to the Future. Los Angeles Times Festival of Books.
Karlin, C.; Julien, M. Prehistoric technology: a cognitive science? University of Washington.
Klein, Richard. (2009). The Human Career: Human Biological and Cultural Origins, Third Edition.
Palmer, Douglas. (1999). Atlas of the Prehistoric World. Discovery Channel Books.
Schick, Kathy Diane. (1994). Making Silent Stones Speak: Human Evolution and the Dawn of Technology.
Tudge, Colin. (1997). The Time Before History: 5 Million Years of Human Impact. Touchstone.
Wescott, David. (2001). Primitive Technology: A Book of Earth Skills.
Wescott, David. (2001). Primitive Technology II: Ancestral Skill - From the Society of Primitive Technology.
Wrangham, Richard. (2010). Catching Fire: How Cooking Made Us Human. Basic Books; First Trade Paper Edition.
Zimmer, Carl. (2007). Smithsonian Intimate Guide to Human Origins. Harper Perennial.
External links
Ancient human occupation of Britain
Department of Prehistory of Europe, British Museum
Index of Ancient Sites and Monuments, Ancient Wisdom
Online Exhibits, University of California Museum of Paleontology
Prehistoric Science and Technology, Ancient Wisdom
Prehistoric Technology, Ancient Arts
Prehistoric Technology, Access Science
Prehistoric Technology, Royal Alberta Museum, Canada
Prehistory for Kids
Show me: Prehistory, Interactive, educational site
Smithsonian Institution, National Museum of Natural History
Timeline: 2,500,000 BCE to 8,000 BCE, Jeremy Norman
Quinson's Museum of Prehistory, France
Prehistory
Prehistoric
Schneider's dynamic model
Edgar W. Schneider's dynamic model of postcolonial Englishes adopts an evolutionary perspective emphasizing language ecologies. It shows how language evolves through a process of 'competition and selection', and how certain linguistic features emerge. The Dynamic Model illustrates how histories and ecologies determine language structure in the different varieties of English, and how linguistic and social identities are maintained.
Underlying principles
Five principles underlie the Dynamic Model:
The closer the contact, or the higher the degree of bilingualism or multilingualism in a community, the stronger the effects of contact.
The structural effects of language contact depend on social conditions. Therefore, history will play an important part.
Contact-induced changes can be achieved by a variety of mechanisms, from code-switching to code alternation to acquisition strategies.
Language evolution, and the emergence of contact-induced varieties, can be regarded as speakers making selections from a pool of linguistic variants made available to them.
Which features are ultimately adopted depends on the complete "ecology" of the contact situation, including factors such as demography, social relationships, and surface similarities between languages.
The Dynamic Model outlines five major stages in the evolution of world Englishes. These stages take into account the perspectives of the two major parties of agents: settlers (STL) and indigenous residents (IDG). Each phase is defined by four parameters:
Extralinguistic factors (e.g. historical events)
Characteristic identity constructions for both parties
Sociolinguistic determinants of contact setting
Structural effects that emerge
See also
Bilingualism
Identity (social science)
Indigenous languages
Language change
Language contact
World Englishes
Anglic languages
Language contact
Sociolinguistics
Theories of language
Dystopia
A dystopia, also called a cacotopia or anti-utopia, is a community or society that is extremely bad or frightening. It is often treated as an antonym of utopia, a term coined by Sir Thomas More as the title of his best-known work, published in 1516, which created a blueprint for an ideal society with minimal crime, violence, and poverty. The relationship between utopia and dystopia is, in actuality, not one of simple opposition, as many dystopias claim to be utopias and vice versa.
Dystopias are often characterized by fear or distress, tyrannical governments, environmental disaster, or other characteristics associated with a cataclysmic decline in society. Themes typical of a dystopian society include: complete control over the people of a society through the use of propaganda and police-state tactics, heavy censorship of information or denial of free thought, worship of an unattainable goal, the complete loss of individuality, and heavy enforcement of conformity. Despite certain overlaps, dystopian fiction is distinct from post-apocalyptic fiction, and an undesirable society is not necessarily dystopian. Dystopian societies appear in many fictional works and artistic representations, particularly in historical fiction, such as A Tale of Two Cities (1859) by Charles Dickens, Quo Vadis? by Henryk Sienkiewicz, and A Man for All Seasons (1960) by Robert Bolt; in stories set in alternate-history timelines, like Robert Harris' Fatherland (1992); and in the future. Famous examples set in the future include Robert Hugh Benson's Lord of the World (1907), Yevgeny Zamyatin's We (1920), Aldous Huxley's Brave New World (1932), George Orwell's Nineteen Eighty-Four (1949), and Ray Bradbury's Fahrenheit 451 (1953). Dystopian societies appear in many sub-genres of fiction and are often used to draw attention to society, environment, politics, economics, religion, psychology, ethics, science, or technology. Some authors use the term to refer to existing societies, many of which are, or have been, totalitarian states or societies in an advanced state of collapse. Through an exaggerated worst-case scenario, dystopias often criticize a current trend, societal norm, or political system.
Etymology
"Dustopia", the original spelling of "dystopia", first appeared in Lewis Henry Younge's Utopia: or Apollo's Golden Days in 1747. Additionally, dystopia was used as an antonym for utopia by John Stuart Mill in one of his 1868 Parliamentary Speeches (Hansard Commons) by adding the prefix "dys" ( "bad") to "topia", reinterpreting the initial "u" as the prefix "eu" ( "good") instead of "ou" ( "not"). It was used to denounce the government's Irish land policy: "It is, perhaps, too complimentary to call them Utopians, they ought rather to be called dys-topians, or caco-topians. What is commonly called Utopian is something too good to be practicable; but what they appear to favour is too bad to be practicable".
Decades before the first documented use of the word "dystopia" came "cacotopia"/"kakotopia" (from the Greek kakos, "bad, wicked"), originally proposed in 1818 by Jeremy Bentham: "As a match for utopia (or the imagined seat of the best government) suppose a cacotopia (or the imagined seat of the worst government) discovered and described". Though dystopia became the more popular term, cacotopia finds occasional use; Anthony Burgess, author of A Clockwork Orange (1962), said it was a better fit for Orwell's Nineteen Eighty-Four because "it sounds worse than dystopia".
Theory
Some scholars, such as Gregory Claeys and Lyman Tower Sargent, make certain distinctions between typical synonyms of dystopias. For example, Claeys and Sargent define literary dystopias as societies imagined as substantially worse than the society in which the author writes. Some of these are anti-utopias, which criticise attempts to implement various concepts of utopia. In the most comprehensive treatment of the literary and real expressions of the concept, Dystopia: A Natural History, Claeys offers a historical approach to these definitions. Here the tradition is traced from early reactions to the French Revolution. Its commonly anti-collectivist character is stressed, and the addition of other themes—the dangers of science and technology, of social inequality, of corporate dictatorship, of nuclear war—is also traced. A psychological approach is also favored here, with the principle of fear being identified with despotic forms of rule, carried forward from the history of political thought, and group psychology introduced as a means of understanding the relationship between utopia and dystopia. Andrew Norton-Schwartzbard noted that "written many centuries before the concept "dystopia" existed, Dante's Inferno in fact includes most of the typical characteristics associated with this genre – even if placed in a religious framework rather than in the future of the mundane world, as modern dystopias tend to be". In the same vein, Vicente Angeloti remarked that "George Orwell's emblematic phrase, a boot stamping on a human face – forever, would aptly describe the situation of the denizens in Dante's Hell. Conversely, Dante's famous inscription Abandon all hope, ye who enter here would have been equally appropriate if placed at the entrance to Orwell's "Ministry of Love" and its notorious "Room 101".
Society
Dystopias typically reflect contemporary sociopolitical realities and extrapolate worst-case scenarios as warnings for necessary social change or caution. Dystopian fictions invariably reflect the concerns and fears of their creators' contemporaneous culture. Because of this, they can be considered a subject of social studies. In dystopias, citizens may live in a dehumanized state, be under constant surveillance, or have a fear of the outside world. In the film What Happened to Monday, the protagonists (identical septuplet sisters) risk their lives by taking turns going out into the outside world because of a one-child policy in place in this futuristic dystopian society.
In a 1967 study, Frank Kermode suggests that the failure of religious prophecies led to a shift in how society apprehends this ancient mode. Christopher Schmidt notes that, while the world goes to waste for future generations, people distract themselves from disaster by passively watching it as entertainment.
In the 2010s, there was a surge of popular dystopian young adult literature and blockbuster films. Some have commented on this trend, saying that "it is easier to imagine the end of the world than it is to imagine the end of capitalism". Cultural theorist and critic Mark Fisher identified the phrase as encompassing the theory of capitalist realism—the perceived "widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it"—and used the above quote as the title to the opening chapter of his book, Capitalist Realism: Is There No Alternative?. In the book, he also refers to dystopian films such as Children of Men (originally a novel by P. D. James) to illustrate what he describes as the "slow cancellation of the future". Theo James, an actor in Divergent (originally a novel by Veronica Roth), explains that "young people in particular have such a fascination with this kind of story [...] It's becoming part of the consciousness. You grow up in a world where it's part of the conversation all the time – the statistics of our planet warming up. The environment is changing. The weather is different. These are things that are very visceral and very obvious, and they make you question the future, and how we will survive. It's so much a part of everyday life that young people inevitably – consciously or not – are questioning their futures and how the Earth will be. I certainly do. I wonder what kind of world my children's kids will live in."
The substantial sub-genre of alternative history works depicting a world in which Nazi Germany won the Second World War can be considered dystopian, as can other works of alternative history in which a historical turning point led to a manifestly repressive world. Examples include the 2004 mockumentary C.S.A.: The Confederate States of America and Ben Winters' Underground Airlines, in which slavery in the United States continues to the present, with "electronic slave auctions" carried out via the Internet and slaves controlled by electronic devices implanted in their spines, and Keith Roberts' Pavane, in which 20th-century Britain is ruled by a Catholic theocracy and the Inquisition actively tortures and burns "heretics".
Common themes
Politics
In When the Sleeper Wakes, H. G. Wells depicted the governing class as hedonistic and shallow. George Orwell contrasted Wells's world to that depicted in Jack London's The Iron Heel, where the dystopian rulers are brutal and dedicated to the point of fanaticism, which Orwell considered more plausible.
The political principles at the root of fictional utopias (or "perfect worlds") are idealistic in principle and result in positive consequences for the inhabitants; the political principles on which fictional dystopias are based, while often based on utopian ideals, result in negative consequences for inhabitants because of at least one fatal flaw.
Dystopias are often filled with pessimistic views of the ruling class or a government that is brutal or uncaring, ruling with an "iron fist". Dystopian governments are sometimes ruled by a fascist or communist regime or dictator. These dystopian government establishments often have protagonists or groups that lead a "resistance" to enact change within their society, as is seen in Alan Moore's V for Vendetta.
Dystopian political situations are depicted in novels such as We, Parable of the Sower, Darkness at Noon, Nineteen Eighty-Four, Brave New World, The Handmaid's Tale, The Hunger Games, Divergent and Fahrenheit 451, and in such films as Metropolis, Brazil (1985), Battle Royale, FAQ: Frequently Asked Questions, Soylent Green, The Purge: Election Year, Logan's Run, and The Running Man (1987). An earlier example is Jules Verne's The Begum's Millions, with its depiction of Stahlstadt (Steel City), a vast industrial and mining complex totally devoted to the production of ever more powerful and destructive weapons, ruled by the dictatorial and totally ruthless Prof. Schultze, a militarist and racist who dreams of world conquest and, as a first step, plots the complete destruction of the nearby Ville-France, a utopian model city constructed and maintained with public health as its government's primary concern.
Economics
The economic structures of dystopian societies in literature and other media have many variations, as the economy often relates directly to the elements that the writer depicts as the source of oppression. There are several archetypes that such societies tend to follow. A recurring theme is the dichotomy between planned economies and free-market economies, a conflict found in such works as Ayn Rand's Anthem and Henry Kuttner's short story "The Iron Standard". Another example is Norman Jewison's 1975 film Rollerball.
Some dystopias, such as that of Nineteen Eighty-Four, feature black markets with goods that are dangerous and difficult to obtain, or the characters may be at the mercy of the state-controlled economy. Kurt Vonnegut's Player Piano depicts a dystopia in which the centrally controlled economic system has indeed made material abundance plentiful but deprived the mass of humanity of meaningful labor; virtually all work is menial and unsatisfying, and only a small number of the small group that achieves education is admitted to the elite and its work. In Tanith Lee's Don't Bite the Sun, there is no want of any kind, only unabashed consumption and hedonism, leading the protagonist to begin looking for a deeper meaning to existence. Even in dystopias where the economic system is not the source of the society's flaws, as in Brave New World, the state often controls the economy; a character, reacting with horror to the suggestion of not being part of the social body, cites as a reason that everyone works for everyone else.
Other works feature extensive privatization and corporatism, both consequences of capitalism, in which privately owned and unaccountable large corporations have replaced the government in setting policy and making decisions. They manipulate, infiltrate, control, bribe, are contracted by, and function as government. This is seen in the novels Jennifer Government and Oryx and Crake and the movies Alien, Avatar, RoboCop, Visioneers, Idiocracy, Soylent Green, WALL-E and Rollerball. Corporate republics are common in the cyberpunk genre, as in Neal Stephenson's Snow Crash and Philip K. Dick's Do Androids Dream of Electric Sheep? (as well as the film Blade Runner, influenced by and based upon Dick's novel).
Class
Dystopian fiction frequently draws stark contrasts between the privileges of the ruling class and the dreary existence of the working class. In the 1932 novel Brave New World by Aldous Huxley, a class system is prenatally determined with Alphas, Betas, Gammas, Deltas and Epsilons, the lower classes having reduced brain function and special conditioning to make them satisfied with their position in life. Outside of this society there also exist several human settlements living in the conventional way, which the World State describes as "savages".
In George Orwell's Nineteen Eighty-Four, the dystopian society described within has a tiered class structure with the ruling elite "Inner Party" at the top, the "Outer Party" below them functioning as a type of middle-class with minor privileges, and the working-class "Proles" (short for proletariat) at the bottom of the hierarchy with few rights, yet making up the vast majority of the population.
In Ypsilon Minus by Herbert W. Franke, people are divided into numerous alphabetically ranked groups.
In the film Elysium, the majority of Earth's population on the surface lives in poverty with little access to health care and are subject to worker exploitation and police brutality, while the wealthy live above the Earth in luxury with access to technologies that cure all diseases, reverse aging, and regenerate body parts.
Written a century earlier, the future society depicted in H. G. Wells' The Time Machine had started in a way similar to Elysium: the workers were consigned to living and working in underground tunnels while the wealthy lived on a surface made into an enormous beautiful garden. Over a long period of time, however, the roles were eventually reversed: the rich degenerated and became decadent "livestock", regularly caught and eaten by the underground cannibal Morlocks.
Family
Some fictional dystopias, such as Brave New World and Fahrenheit 451, have eradicated the family and kept it from re-establishing itself as a social institution. In Brave New World, where children are reproduced artificially, the concepts of "mother" and "father" are considered obscene. In some novels the state is hostile to motherhood; in We, for example, a pregnant woman from the One State is in revolt.
Religion
In dystopias, religious groups may play the role of oppressed or oppressor. One of the earliest examples is Robert Hugh Benson's Lord of the World, about a futuristic world where Marxists and Freemasons led by the Antichrist have taken over the world and the only remaining source of dissent is a tiny and persecuted Catholic minority. In Brave New World the establishment of the state included lopping off the tops of all crosses (as symbols of Christianity) to make them "T"s (as symbols of Henry Ford's Model T). In C. S. Lewis's That Hideous Strength the leaders of the fictional National Institute of Coordinated Experiments, a joint venture of academia and government to promote an anti-traditionalist social agenda, are contemptuous of religion and require initiates to desecrate Christian symbols. Margaret Atwood's novel The Handmaid's Tale takes place in a future United States under a Christian-based theocratic regime.
Identity
In the Russian novel We by Yevgeny Zamyatin, first published in 1921, people are permitted to live out of public view twice a week for one hour and are only referred to by numbers instead of names. The latter feature also appears in the film THX 1138. In some dystopian works, such as Kurt Vonnegut's Harrison Bergeron, society forces individuals to conform to radical egalitarian social norms that discourage or suppress accomplishment or even competence as forms of inequality. Complete conformity and suppression of individuality (to the point of acting in unison) is also depicted in Madeleine L'Engle's A Wrinkle in Time.
Violence
Violence is prevalent in many dystopias, often in the form of war, but also in urban crime led by (predominately teenage) gangs (e.g. A Clockwork Orange) or rampant crime met by blood sports (e.g. Battle Royale, The Running Man, The Hunger Games, Divergent, and The Purge). It is also explored in Suzanne Berne's essay "Ground Zero", in which she describes her experience of the aftermath of 11 September 2001.
Nature
Fictional dystopias are commonly urban and frequently isolate their characters from all contact with the natural world. Sometimes they require their characters to avoid nature, as when walks are regarded as dangerously anti-social in Ray Bradbury's Fahrenheit 451, as well as within Bradbury's short story "The Pedestrian". In That Hideous Strength, science coordinated by government is directed toward the control of nature and the elimination of natural human instincts. In Brave New World, the lower class is conditioned to be afraid of nature, but also to visit the countryside and consume transport and games in order to promote economic activity. Lois Lowry's The Giver shows a society in which technology and the desire to create a utopia have led humanity to enforce climate control on the environment, eliminate many undomesticated species, and provide psychological and pharmaceutical repellents against human instincts. E. M. Forster's "The Machine Stops" depicts a highly changed global environment that forces people to live underground due to atmospheric contamination. As Angel Galdon-Rodriguez points out, this sort of isolation caused by external toxic hazard is later used by Hugh Howey in his Silo series of dystopias.
Excessive pollution that destroys nature is common in many dystopian films, such as The Matrix, RoboCop, WALL-E, April and the Extraordinary World and Soylent Green, as well as in video games like Half-Life 2. A few "green" fictional dystopias do exist, such as in Michael Carson's short story "The Punishment of Luxury" and Russell Hoban's Riddley Walker. The latter is set in the aftermath of nuclear war, "a post-nuclear holocaust Kent, where technology has reduced to the level of the Iron Age".
Science and technology
Contrary to technologically utopian claims, which view technology as a beneficial addition to all aspects of humanity, technological dystopia concerns itself largely (but not always) with the negative effects of new technology.
Technologies reflect and encourage the worst aspects of human nature. Jaron Lanier, a digital pioneer, has become a technological dystopian: "I think it's a way of interpreting technology in which people forgot taking responsibility." "'Oh, it's the computer that did it, not me.' 'There's no more middle class? Oh, it's not me. The computer did it.'" This quote suggests that people begin not only to blame technology for changes in lifestyle but also to treat technology as an omnipotent force. It also points to a technological determinist perspective in terms of reification.
Technologies harm our interpersonal communication, relationships, and communities: communication within families and friend groups decreases as time spent using technology increases. Virtual space misleadingly heightens the impact of real presence, and people now resort to technological media for communication.
Technologies reinforce hierarchies by concentrating knowledge and skills, increasing surveillance and eroding privacy, widening inequalities of power and wealth, and ceding control to machines. Douglas Rushkoff, a technological utopian, states in his article that professional designers "re-mystified" the computer so it was no longer so readable; users had to depend on special programs built into the software that were incomprehensible to normal users.
New technologies are sometimes regressive (worse than previous technologies).
The unforeseen impacts of technology are negative. "The most common way is that there's some magic artificial intelligence in the sky or in the cloud or something that knows how to translate, and what a wonderful thing that this is available for free. But there's another way to look at it, which is the technically true way: You gather a ton of information from real live translators who have translated phrases… It's huge but very much like Facebook, it's selling people back to themselves… [With translation] you're producing this result that looks magical but in the meantime, the original translators aren't paid for their work… You're actually shrinking the economy."
More efficiency and choices can harm our quality of life (by causing stress, destroying jobs, and making us more materialistic). In his article "Prest-o! Change-o!", the technological dystopian James Gleick cites the remote control as the classic example of technology that does not solve the problem "it is meant to solve". Gleick quotes Edward Tenner, a historian of technology, in noting that the ability and ease of switching channels with the remote control serves to increase distraction for the viewer, so that viewers can only be expected to become more dissatisfied with the channel they are watching.
New technologies can solve the problems of old technologies or simply create new ones. The remote control illustrates this claim as well, for the increase in laziness and dissatisfaction was clearly not a problem before the remote control existed. Gleick also takes social psychologist Robert Levine's example of Indonesians "'whose main entertainment consists of watching the same few plays and dances, month after month, year after year,' and with Nepalese Sherpas who eat the same meals of potatoes and tea through their entire lives. The Indonesians and Sherpas are perfectly satisfied". The invention of the remote control merely created more problems.
Technologies destroy nature (harming human health and the environment). The need for business replaced community, and the "story online" replaced people as the "soul of the Net". Because information could now be bought and sold, there was not as much communication taking place.
In pop culture
Dystopian themes appear in many films, television shows, and video games, such as Cyberpunk 2077, The Hunger Games, Cyberpunk: Edgerunners, Blade Runner 2049, Elysium and Titanfall.
See also
Alternate history
Horror fiction
Apocalyptic and post-apocalyptic fiction
Biopunk
Digital dystopia
Dissident
Inner emigration
Kafkaesque
List of dystopian comics
List of dystopian films
List of dystopian literature
List of dystopian works
Lovecraftian horror
Plutocracy
Police state
Self-fulfilling prophecy
Social science fiction
Societal collapse
Soft science fiction
References
See also Gregory Claeys, "When Does Utopianism Produce Dystopia?", in Zsolt Czigányik, ed., Utopian Horizons. Utopia and Ideology – The Interaction of Political and Utopian Thought (Budapest: CEU Press, 2016), pp. 41–61.
External links
Dystopia Tracker, predictions about the future and their realisations in real life.
Dystopic, dystopian fiction and its place in reality.
Dystopias, in The Encyclopedia of Science Fiction.
Climate Change Dystopia, discusses current popularity of the dystopian genre.
Alexandru Bumbas, Penser l'anachronisme comme moteur esthétique de la dystopie théâtrale: quelques considérations sur Bond, Barker, Gabily, et Delbo (In French)
Science fiction themes
Speculative fiction
Suffering
Feminism
Feminism is a range of socio-political movements and ideologies that aim to define and establish the political, economic, personal, and social equality of the sexes. Feminism holds the position that modern societies are patriarchal—they prioritize the male point of view—and that women are treated unjustly in these societies. Efforts to change this include fighting against gender stereotypes and improving educational, professional, and interpersonal opportunities and outcomes for women.
Originating in late 18th-century Europe, feminist movements have campaigned and continue to campaign for women's rights, including the right to vote, run for public office, work, earn equal pay, own property, receive education, enter into contracts, have equal rights within marriage, and maternity leave. Feminists have also worked to ensure access to contraception, legal abortions, and social integration; and to protect women and girls from sexual assault, sexual harassment, and domestic violence. Changes in female dress standards and acceptable physical activities for women have also been part of feminist movements.
Many scholars consider feminist campaigns to be a main force behind major historical societal changes for women's rights, particularly in the West, where they are near-universally credited with achieving women's suffrage, gender-neutral language, reproductive rights for women (including access to contraceptives and abortion), and the right to enter into contracts and own property. Although feminist advocacy is, and has been, mainly focused on women's rights, some argue for the inclusion of men's liberation within its aims, because they believe that men are also harmed by traditional gender roles. Feminist theory, which emerged from feminist movements, aims to understand the nature of gender inequality by examining women's social roles and lived experiences. Feminist theorists have developed theories in a variety of disciplines in order to respond to issues concerning gender.
Numerous feminist movements and ideologies have developed over the years, representing different viewpoints and political aims. Traditionally, since the 19th century, first-wave liberal feminism, which sought political and legal equality through reforms within a liberal democratic framework, was contrasted with labour-based proletarian women's movements that over time developed into socialist and Marxist feminism based on class struggle theory. Since the 1960s, both of these traditions are also contrasted with the radical feminism that arose from the radical wing of second-wave feminism and that calls for a radical reordering of society to eliminate patriarchy. Liberal, socialist, and radical feminism are sometimes referred to as the "Big Three" schools of feminist thought.
Since the late 20th century, many newer forms of feminism have emerged. Some forms, such as white feminism and gender-critical feminism, have been criticized as taking into account only white, middle class, college-educated, heterosexual, or cisgender perspectives. These criticisms have led to the creation of ethnically specific or multicultural forms of feminism, such as black feminism and intersectional feminism. Some have argued that feminism often promotes misandry and the elevation of women's interests above men's, and criticize radical feminist positions as harmful to both men and women.
History
Terminology
Mary Wollstonecraft is seen by many as a founder of feminism due to her 1792 book A Vindication of the Rights of Woman, in which she argued that class and private property are the basis of discrimination against women, and that women as much as men need equal rights. Charles Fourier, a utopian socialist and French philosopher, is credited with having coined the word "féminisme" in 1837, but no trace of the word has been found in his works. The word "féminisme" ("feminism") first appeared in France in 1871 in a medical thesis about men suffering from tuberculosis who had developed, according to the author Ferdinand-Valère Faneau de la Cour, feminine traits. The word "féministe" ("feminist"), inspired by its medical use, was coined by Alexandre Dumas fils in an 1872 essay, referring to men who supported women's rights. In both cases, the use of the word was very negative and reflected a criticism of a so-called "confusion of the sexes" by women who refused to abide by the sexual division of society and challenged the inequalities between the sexes.
The terms appeared in the Netherlands in 1872, Great Britain in the 1890s, and the United States in 1910. The Oxford English Dictionary dates the first appearance in English in this meaning back to 1895. Depending on the historical moment, culture and country, feminists around the world have had different causes and goals. Most western feminist historians contend that all movements working to obtain women's rights should be considered feminist movements, even when they did not (or do not) apply the term to themselves. Other historians assert that the term should be limited to the modern feminist movement and its descendants. Those historians use the label "protofeminist" to describe earlier movements.
Waves
The history of the modern western feminist movement is divided into multiple "waves".
The first comprised women's suffrage movements of the 19th and early-20th centuries, promoting women's right to vote. The second wave, the women's liberation movement, began in the 1960s and campaigned for legal and social equality for women. In or around 1992, a third wave was identified, characterized by a focus on individuality and diversity. Additionally, some have argued for the existence of a fourth wave, starting around 2012, which has used social media to combat sexual harassment, violence against women and rape culture; it is best known for the Me Too movement.
19th and early 20th centuries
First-wave feminism was a period of activity during the 19th and early 20th centuries. In the UK and US, it focused on the promotion of equal contract, marriage, parenting, and property rights for women. New legislation included the Custody of Infants Act 1839 in the UK, which introduced the tender years doctrine for child custody and gave women the right of custody of their children for the first time. Other legislation, such as the Married Women's Property Act 1870 in the UK, extended by the 1882 Act, became models for similar legislation in other British territories. Victoria passed legislation in 1884 and New South Wales in 1889; the remaining Australian colonies passed similar legislation between 1890 and 1897. By the turn of the 20th century, activism focused primarily on gaining political power, particularly the right of women's suffrage, though some feminists were active in campaigning for women's sexual, reproductive, and economic rights as well.
Women's suffrage (the right to vote and stand for parliamentary office) began in Britain's Australasian colonies at the end of the 19th century, with the self-governing colony of New Zealand granting women the right to vote in 1893; South Australia followed suit in 1894 with the Constitutional Amendment (Adult Suffrage) Act. Australia granted female suffrage in 1902.
In Britain, the suffragettes and suffragists campaigned for the women's vote, and in 1918 the Representation of the People Act was passed granting the vote to women over the age of 30 who owned property. In 1928, this was extended to all women over 21. Emmeline Pankhurst was the most notable activist in England. Time named her one of the 100 Most Important People of the 20th Century, stating: "she shaped an idea of women for our time; she shook society into a new pattern from which there could be no going back." In the US, notable leaders of this movement included Lucretia Mott, Elizabeth Cady Stanton, and Susan B. Anthony, who each campaigned for the abolition of slavery before championing women's right to vote. These women were influenced by the Quaker theology of spiritual equality, which asserts that men and women are equal under God. In the US, first-wave feminism is considered to have ended with the passage of the Nineteenth Amendment to the United States Constitution (1919), granting women the right to vote in all states. The term first wave was coined retroactively when the term second-wave feminism came into use.
In Germany, feminists such as Clara Zetkin were deeply engaged in women's politics, including the fight for equal opportunities and women's suffrage, through socialism. Zetkin helped to develop the social-democratic women's movement in Germany; from 1891 to 1917, she edited the SPD women's newspaper Die Gleichheit (Equality), and in 1907 she became the leader of the newly founded "Women's Office" at the SPD. She also contributed to International Women's Day (IWD).
During the late Qing period and reform movements such as the Hundred Days' Reform, Chinese feminists called for women's liberation from traditional roles and Neo-Confucian gender segregation. Later, the Chinese Communist Party created projects aimed at integrating women into the workforce, and claimed that the revolution had successfully achieved women's liberation.
According to Nawar al-Hassan Golley, Arab feminism was closely connected with Arab nationalism. In 1899, Qasim Amin, considered the "father" of Arab feminism, wrote The Liberation of Women, which argued for legal and social reforms for women. He drew links between women's position in Egyptian society and nationalism, leading to the development of Cairo University and the National Movement. In 1923 Hoda Shaarawi founded the Egyptian Feminist Union, became its president and a symbol of the Arab women's rights movement.
The Iranian Constitutional Revolution in 1905 triggered the Iranian women's movement, which aimed to achieve women's equality in education, marriage, careers, and legal rights. However, during the Iranian revolution of 1979, many of the rights that women had gained from the women's movement were systematically abolished, such as those under the Family Protection Law.
Mid-20th century
By the mid-20th century, women still lacked significant rights.
In France, women obtained the right to vote only with the Provisional Government of the French Republic of 21 April 1944. The Consultative Assembly of Algiers proposed on 24 March 1944 to grant eligibility to women, but, following an amendment by Fernand Grenier, women were given full citizenship, including the right to vote. Grenier's proposition was adopted 51 to 16. In May 1947, following the November 1946 elections, the sociologist Robert Verdier minimized the "gender gap", stating in Le Populaire that women had not voted in a consistent way, dividing themselves, as men did, according to social class. During the baby boom period, feminism waned in importance. Wars (both World War I and World War II) had seen the provisional emancipation of some women, but post-war periods signalled the return to conservative roles.
In Switzerland, women gained the right to vote in federal elections in 1971; but in the canton of Appenzell Innerrhoden women obtained the right to vote on local issues only in 1991, when the canton was forced to do so by the Federal Supreme Court of Switzerland. In Liechtenstein, women were given the right to vote by the women's suffrage referendum of 1984. Three prior referendums held in 1968, 1971 and 1973 had failed to secure women's right to vote.
Feminists continued to campaign for the reform of family laws which gave husbands control over their wives. Although by the 20th century coverture had been abolished in the UK and US, in many continental European countries married women still had very few rights. For instance, in France, married women did not receive the right to work without their husband's permission until 1965. Feminists have also worked to abolish the "marital exemption" in rape laws which precluded the prosecution of husbands for the rape of their wives. Earlier efforts by first-wave feminists such as Voltairine de Cleyre, Victoria Woodhull and Elizabeth Clarke Wolstenholme Elmy to criminalize marital rape in the late 19th century had failed; this was only achieved a century later in most Western countries, but is still not achieved in many other parts of the world.
French philosopher Simone de Beauvoir provided a Marxist solution and an existentialist view on many of the questions of feminism with the publication of Le Deuxième Sexe (The Second Sex) in 1949. The book expressed feminists' sense of injustice. Second-wave feminism is a feminist movement beginning in the early 1960s and continuing to the present; as such, it coexists with third-wave feminism. Second-wave feminism is largely concerned with issues of equality beyond suffrage, such as ending gender discrimination.
Second-wave feminists see women's cultural and political inequalities as inextricably linked and encourage women to understand aspects of their personal lives as deeply politicized and as reflecting sexist power structures. The feminist activist and author Carol Hanisch coined the slogan "The Personal is Political", which became synonymous with the second wave.
Second- and third-wave feminism in China has been characterized by a reexamination of women's roles during the communist revolution and other reform movements, and new discussions about whether women's equality has actually been fully achieved.
In 1956, President Gamal Abdel Nasser of Egypt initiated "state feminism", which outlawed discrimination based on gender and granted women's suffrage, but also blocked political activism by feminist leaders. During Sadat's presidency, his wife, Jehan Sadat, publicly advocated further women's rights, though Egyptian policy and society began to move away from women's equality with the new Islamist movement and growing conservatism. However, some activists proposed a new feminist movement, Islamic feminism, which argues for women's equality within an Islamic framework.
In Latin America, revolutions brought changes in women's status in countries such as Nicaragua, where feminist ideology during the Sandinista Revolution aided women's quality of life but fell short of achieving a social and ideological change.
In 1963, Betty Friedan's book The Feminine Mystique helped voice the discontent that American women felt. The book is widely credited with sparking the beginning of second-wave feminism in the United States. Within ten years, women made up over half the First World workforce. In 1970, Australian writer Germaine Greer published The Female Eunuch, which became a worldwide bestseller, reportedly driving up divorce rates. Greer posits that men hate women, that women do not know this and direct the hatred upon themselves, as well as arguing that women are devitalised and repressed in their role as housewives and mothers.
Late 20th and early 21st centuries
Third-wave feminism
Third-wave feminism is traced to the emergence of the riot grrrl feminist punk subculture in Olympia, Washington, in the early 1990s, and to Anita Hill's televised testimony in 1991—to an all-male, all-white Senate Judiciary Committee—that Clarence Thomas, nominated for the Supreme Court of the United States, had sexually harassed her. The term third wave is credited to Rebecca Walker, who responded to Thomas's appointment to the Supreme Court with an article in Ms. magazine, "Becoming the Third Wave" (1992), in which she declared, "I am not a post-feminism feminist. I am the Third Wave."
Third-wave feminism also sought to challenge or avoid what it deemed the second wave's essentialist definitions of femininity, which, third-wave feminists argued, overemphasized the experiences of upper middle-class white women. Third-wave feminists often focused on "micro-politics" and challenged the second wave's paradigm as to what was, or was not, good for women, and tended to use a post-structuralist interpretation of gender and sexuality. Feminist leaders rooted in the second wave, such as Gloria Anzaldúa, bell hooks, Chela Sandoval, Cherríe Moraga, Audre Lorde, Maxine Hong Kingston, and many other non-white feminists, sought to negotiate a space within feminist thought for consideration of race-related subjectivities. Third-wave feminism also contained internal debates between difference feminists, who believe that there are important psychological differences between the sexes, and those who believe that there are no inherent psychological differences between the sexes and contend that gender roles are due to social conditioning.
Standpoint theory
Standpoint theory is a feminist theoretical point of view stating that a person's social position influences their knowledge. This perspective argues that research and theory treat women and the feminist movement as insignificant and refuses to see traditional science as unbiased. Since the 1980s, standpoint feminists have argued that the feminist movement should address global issues (such as rape, incest, and prostitution) and culturally specific issues (such as female genital mutilation in some parts of Africa and Arab societies, as well as glass ceiling practices that impede women's advancement in developed economies) in order to understand how gender inequality interacts with racism, homophobia, classism and colonization in a "matrix of domination".
Fourth-wave feminism
Fourth-wave feminism is a proposed extension of third-wave feminism which corresponds to a resurgence in interest in feminism beginning around 2012 and associated with the use of social media. According to feminist scholar Prudence Chamberlain, the focus of the fourth wave is justice for women and opposition to sexual harassment and violence against women. Its essence, she writes, is "incredulity that certain attitudes can still exist".
Fourth-wave feminism is "defined by technology", according to Kira Cochrane, and is characterized particularly by the use of Facebook, Twitter, Instagram, YouTube, Tumblr, and blogs such as Feministing to challenge misogyny and further gender equality.
Issues that fourth-wave feminists focus on include street and workplace harassment, campus sexual assault and rape culture. Scandals involving the harassment, abuse, and murder of women and girls have galvanized the movement. These have included the 2012 Delhi gang rape, 2012 Jimmy Savile allegations, the Bill Cosby allegations, 2014 Isla Vista killings, 2016 trial of Jian Ghomeshi, 2017 Harvey Weinstein allegations and subsequent Weinstein effect, and the 2017 Westminster sexual scandals.
Examples of fourth-wave feminist campaigns include the Everyday Sexism Project, No More Page 3, Stop Bild Sexism, Mattress Performance, 10 Hours of Walking in NYC as a Woman, #YesAllWomen, Free the Nipple, One Billion Rising, the 2017 Women's March, the 2018 Women's March, and the #MeToo movement. In December 2017, Time magazine chose several prominent female activists involved in the #MeToo movement, dubbed "the silence breakers", as Person of the Year.
Decolonial feminism
Decolonial feminism reformulates the coloniality of gender by critiquing the very formation of gender and its subsequent formations of patriarchy and the gender binary, not as universal constants across cultures, but as structures that have been instituted by and for the benefit of European colonialism. María Lugones proposes that decolonial feminism speaks to how "the colonial imposition of gender cuts across questions of ecology, economics, government, relations with the spirit world, and knowledge, as well as across everyday practices that either habituate us to take care of the world or to destroy it." Decolonial feminists like Karla Jessen Williamson and Rauna Kuokkanen have examined colonialism as a force that has imposed gender hierarchies on Indigenous women that have disempowered and fractured Indigenous communities and ways of life.
Postfeminism
The term postfeminism is used to describe a range of viewpoints reacting to feminism since the 1980s. While not "anti-feminist", postfeminists believe that women have achieved second-wave goals, and they are critical of third- and fourth-wave feminist goals. The term was first used to describe a backlash against second-wave feminism, but it is now a label for a wide range of theories that take critical approaches to previous feminist discourses, including challenges to the second wave's ideas. Other postfeminists say that feminism is no longer relevant to today's society. Amelia Jones has written that the postfeminist texts which emerged in the 1980s and 1990s portrayed second-wave feminism as a monolithic entity. Dorothy Chunn describes a "blaming narrative" under the postfeminist moniker, where feminists are undermined for continuing to make demands for gender equality in a "post-feminist" society, where "gender equality has (already) been achieved". According to Chunn, "many feminists have voiced disquiet about the ways in which rights and equality discourses are now used against them".
Theory
Feminist theory is the extension of feminism into theoretical or philosophical fields. It encompasses work in a variety of disciplines, including anthropology, sociology, economics, women's studies, literary criticism, art history, psychoanalysis, and philosophy. Feminist theory aims to understand gender inequality and focuses on gender politics, power relations, and sexuality. While providing a critique of these social and political relations, much of feminist theory also focuses on the promotion of women's rights and interests. Themes explored in feminist theory include discrimination, stereotyping, objectification (especially sexual objectification), oppression, and patriarchy.
In the field of literary criticism, Elaine Showalter describes the development of feminist theory as having three phases. The first she calls "feminist critique", in which the feminist reader examines the ideologies behind literary phenomena. The second Showalter calls "gynocriticism", in which the "woman is producer of textual meaning". The last phase she calls "gender theory", in which the "ideological inscription and the literary effects of the sex/gender system are explored".
This was paralleled in the 1970s by French feminists, who developed the concept of écriture féminine (which translates as "female or feminine writing"). Hélène Cixous argues that writing and philosophy are phallocentric and along with other French feminists such as Luce Irigaray emphasize "writing from the body" as a subversive exercise. The work of Julia Kristeva, a feminist psychoanalyst and philosopher, and Bracha Ettinger, artist and psychoanalyst, has influenced feminist theory in general and feminist literary criticism in particular. However, as the scholar Elizabeth Wright points out, "none of these French feminists align themselves with the feminist movement as it appeared in the Anglophone world".
Movements and ideologies
Many overlapping feminist movements and ideologies have developed over the years. Feminism is often divided into three main traditions called liberal, radical and socialist/Marxist feminism, sometimes known as the "Big Three" schools of feminist thought. Since the late 20th century, newer forms of feminisms have also emerged. Some branches of feminism track the political leanings of the larger society to a greater or lesser degree, or focus on specific topics, such as the environment.
Liberal feminism
Liberal feminism, also known under other names such as reformist, mainstream, or historically as bourgeois feminism, arose from 19th-century first-wave feminism, and was historically linked to 19th-century liberalism and progressivism, while 19th-century conservatives tended to oppose feminism as such. Liberal feminism seeks equality of men and women through political and legal reform within a liberal democratic framework, without radically altering the structure of society; liberal feminism "works within the structure of mainstream society to integrate women into that structure". During the 19th and early 20th centuries liberal feminism focused especially on women's suffrage and access to education. Former Norwegian supreme court justice and former president of the liberal Norwegian Association for Women's Rights, Karin Maria Bruzelius, has described liberal feminism as "a realistic, sober, practical feminism".
Susan Wendell argues that "liberal feminism is an historical tradition that grew out of liberalism, as can be seen very clearly in the work of such feminists as Mary Wollstonecraft and John Stuart Mill, but feminists who took principles from that tradition have developed analyses and goals that go far beyond those of 18th and 19th century liberal feminists, and many feminists who have goals and strategies identified as liberal feminist ... reject major components of liberalism" in a modern or party-political sense; she highlights "equality of opportunity" as a defining feature of liberal feminism.
Liberal feminism is a very broad term that encompasses many, often diverging modern branches and a variety of feminist and general political perspectives; some historically liberal branches are equality feminism, social feminism, equity feminism, difference feminism, individualist/libertarian feminism and some forms of state feminism, particularly the state feminism of the Nordic countries. The broad field of liberal feminism is sometimes confused with the more recent and smaller branch known as libertarian feminism, which tends to diverge significantly from mainstream liberal feminism. For example, "libertarian feminism does not require social measures to reduce material inequality; in fact, it opposes such measures ... in contrast, liberal feminism may support such requirements and egalitarian versions of feminism insist on them."
Catherine Rottenberg notes that the raison d'être of classic liberal feminism was "to pose an immanent critique of liberalism, revealing the gendered exclusions within liberal democracy's proclamation of universal equality, particularly with respect to the law, institutional access, and the full incorporation of women into the public sphere." Rottenberg contrasts classic liberal feminism with modern neoliberal feminism which "seems perfectly in sync with the evolving neoliberal order." According to Zhang and Rios, "liberal feminism tends to be adopted by 'mainstream' (i.e., middle-class) women who do not disagree with the current social structure." They found that liberal feminism with its focus on equality is viewed as the dominant and "default" form of feminism.
Some modern forms of feminism that historically grew out of the broader liberal tradition have more recently also been described as conservative in relative terms. This is particularly the case for libertarian feminism which conceives of people as self-owners and therefore as entitled to freedom from coercive interference.
Radical feminism
Radical feminism arose from the radical wing of second-wave feminism and calls for a radical reordering of society to eliminate male supremacy. It considers the male-controlled capitalist hierarchy as the defining feature of women's oppression, and holds that the total uprooting and reconstruction of society is necessary. Separatist feminism, a radical form, does not support heterosexual relationships, and lesbian feminism is thus closely related. Other feminists criticize separatist feminism as sexist.
Materialist ideologies
Rosemary Hennessy and Chrys Ingraham say that materialist forms of feminism grew out of Western Marxist thought and have inspired a number of different (but overlapping) movements, all of which are involved in a critique of capitalism and are focused on ideology's relationship to women. Marxist feminism argues that capitalism is the root cause of women's oppression, and that discrimination against women in domestic life and employment is an effect of capitalist ideologies. Socialist feminism distinguishes itself from Marxist feminism by arguing that women's liberation can only be achieved by working to end both the economic and cultural sources of women's oppression. Anarcha-feminists believe that class struggle and anarchy against the state require struggling against patriarchy, which comes from involuntary hierarchy.
Other modern feminisms
Ecofeminism
Ecofeminists see men's control of land as responsible for the oppression of women and destruction of the natural environment. Ecofeminism has been criticized for focusing too much on a mystical connection between women and nature.
Black and postcolonial ideologies
Sara Ahmed argues that Black and postcolonial feminisms pose a challenge "to some of the organizing premises of Western feminist thought". During much of their history, feminist movements and theoretical developments were led predominantly by middle-class white women from Western Europe and North America. However, women of other races have proposed alternative feminisms. This trend accelerated in the 1960s with the civil rights movement in the United States and the end of Western European colonialism in Africa, the Caribbean, parts of Latin America, and Southeast Asia. Since that time, women in developing nations and former colonies, whether of colour, of various ethnicities, or living in poverty, have proposed additional feminisms. Womanism emerged after early feminist movements were largely white and middle-class. Postcolonial feminists argue that colonial oppression and Western feminism marginalized postcolonial women but did not turn them passive or voiceless. Third-world feminism and indigenous feminism are closely related to postcolonial feminism. These ideas also correspond with ideas in African feminism, motherism, Stiwanism, negofeminism, femalism, transnational feminism, and Africana womanism.
Social constructionist ideologies
In the late 20th century various feminists began to argue that gender roles are socially constructed, and that it is impossible to generalize women's experiences across cultures and histories. Post-structural feminism draws on the philosophies of post-structuralism and deconstruction in order to argue that the concept of gender is created socially and culturally through discourse. Postmodern feminists also emphasize the social construction of gender and the discursive nature of reality; however, as Pamela Abbott et al. write, a postmodern approach to feminism highlights "the existence of multiple truths (rather than simply men and women's standpoints)".
Transgender people
Third-wave feminists tend to view the struggle for trans rights as an integral part of intersectional feminism. Fourth-wave feminists also tend to be trans-inclusive. The American National Organization for Women (NOW) president Terry O'Neill said the struggle against transphobia is a feminist issue and NOW has affirmed that "trans women are women, trans girls are girls." Several studies have found that people who identify as feminists tend to be more accepting of trans people than those who do not.
An ideology variously known as trans-exclusionary radical feminism (or its acronym, TERF) or gender-critical feminism is critical of concepts of gender identity and transgender rights, holding that biological sex characteristics are an immutable determination of gender or supersede the importance of gender identity, that trans women are not women, and that trans men are not men. These views have been described as transphobic by many other feminists.
Cultural movements
Riot grrrls took an anti-corporate stance of self-sufficiency and self-reliance. Riot grrrl's emphasis on universal female identity and separatism often appears more closely allied with second-wave feminism than with the third wave. The movement encouraged and made "adolescent girls' standpoints central", allowing them to express themselves fully. Lipstick feminism is a cultural feminist movement that attempts to respond to the backlash of second-wave radical feminism of the 1960s and 1970s by reclaiming symbols of "feminine" identity such as make-up, suggestive clothing and having a sexual allure as valid and empowering personal choices.
Demographics
According to a 2014 Ipsos poll covering 15 developed countries, 53 percent of respondents identified as feminists, and 87 percent agreed that "women should be treated equally to men in all areas based on their competency, not their gender". However, only 55 percent of women agreed that they have "full equality with men and the freedom to reach their full dreams and aspirations". Taken together, these studies reflect the importance of differentiating between claiming a "feminist identity" and holding "feminist attitudes or beliefs".
According to a 2015 poll, 18 percent of Americans use the label of "feminist" to describe themselves, while 85 percent are feminists in practice as they reported they believe in "equality for women". The poll found that 52 percent did not identify as feminist, 26 percent were unsure, and 4 percent provided no response.
Sociological research shows that, in the US, increased educational attainment is associated with greater support for feminist issues. In addition, politically liberal people are more likely to support feminist ideals compared to those who are conservative.
According to a 2016 Survation poll for the Fawcett Society, 7 percent of Britons use the label of "feminist" to describe themselves, while 83 percent say they support equality of opportunity for women – this included higher support from men (86%) than women (81%).
Sexuality
Feminist views on sexuality vary, and have differed by historical period and by cultural context. Feminist attitudes to female sexuality have taken a few different directions. Matters such as the sex industry, sexual representation in the media, and issues regarding consent to sex under conditions of male dominance have been particularly controversial among feminists. This debate culminated in the late 1970s and the 1980s in what came to be known as the feminist sex wars, which pitted anti-pornography feminism against sex-positive feminism; parts of the feminist movement were deeply divided by these debates. Feminists have taken a variety of positions on different aspects of the sexual revolution from the 1960s and 70s. Over the course of the 1970s, a large number of influential women accepted lesbian and bisexual women as part of feminism.
Sex industry
Opinions on the sex industry are diverse. Feminists who are critical of the sex industry generally see it as the exploitative result of patriarchal social structures which reinforce sexual and cultural attitudes complicit in rape and sexual harassment. Alternately, feminists who support at least part of the sex industry argue that it can be a medium of feminist expression and reflect a woman's right to control and define her own sexuality.
Individualist feminists support the existence of a sex industry on the grounds that adult women have the right to consent to sexual acts as they choose and should have access to labor rights to earn money on their own terms. In this view, banning the sex industry effectively strips women of their right to work and earn money as they see fit, treating them as children who cannot make decisions for themselves; women who consider the sex industry degrading do not have to partake in it, while women who willingly choose to work in it should not be banned from doing so. The libertarian feminist zine Reclaim has argued that sex work has helped more women (including students, freelancers, and women in poverty) achieve financial independence than all government grants combined.
Feminist views of pornography range from condemnation of pornography as a form of violence against women, to an embracing of some forms of pornography as a medium of feminist expression and a legitimate career. Similarly, feminists' views on prostitution vary, ranging from critical to supportive.
Affirming female sexual autonomy
For feminists, a woman's right to control her own sexuality is a key issue and one that is heavily contested between different branches of feminism. Radical feminists such as Catharine MacKinnon argue that women have very little control over their own bodies, with female sexuality being largely controlled and defined by men in patriarchal societies. Radical feminists argue that sexual violence committed by men is often rooted in ideologies of male sexual entitlement and that these systems grant women very few legitimate options to refuse sexual advances. Some radical feminists have argued that women should not engage in heterosexual sex, and choose lesbianism as a lifestyle and political choice, a view that has fallen out of favor, as sexuality is seen as largely biologically influenced rather than a choice one can make for political reasons.
Some radical feminists argue that all cultures are, in one way or another, dominated by ideologies that deny women's right to sexual expression, because men under a patriarchy define sex on their own terms. This entitlement can take different forms, depending on the culture. In some conservative and religious cultures marriage is regarded as an institution which requires a wife to be sexually available at all times, virtually without limit; thus, forcing or coercing sex on a wife is not considered a crime or even an abusive behaviour.
In 1968, radical feminist Anne Koedt argued in her essay The Myth of the Vaginal Orgasm that women's biology and the clitoral orgasm had not been properly analyzed and popularized, because men "have orgasms essentially by friction with the vagina" and not the clitoral area.
Other branches of feminism, such as individualist feminism, consider themselves sex-positive and see women's expression of their own sexuality as a right. In this view, what is or is not "degrading" is subjective, and each person has a right to decide for themselves which sexual acts they find degrading and whether they want to participate in them. The individualist feminist Wendy McElroy wrote in her book XXX: A Woman's Right to Pornography, "let's examine [...] the idea that pornography is degrading to women. Degrading is a subjective term. Personally, I find detergent commercials in which women become orgasmic over soapsuds to be tremendously degrading to women. I find movies in which prostitutes are treated like ignorant drug addicts to be slander against women. Every woman has the right—the need!—to define degradation for herself."
According to this view, part of sexual autonomy is the right to define one's own boundaries, desires, and limits around one's sexuality, rather than to accept a narrative in which all women are victims of men during a sex act.
Science
Sandra Harding says that the "moral and political insights of the women's movement have inspired social scientists and biologists to raise critical questions about the ways traditional researchers have explained gender, sex and relations within and between the social and natural worlds." Some feminists, such as Ruth Hubbard and Evelyn Fox Keller, criticize traditional scientific discourse as being historically biased towards a male perspective. A part of the feminist research agenda is the examination of the ways in which power inequities are created or reinforced in scientific and academic institutions. Physicist Lisa Randall, appointed to a task force at Harvard by then-president Lawrence Summers after his controversial discussion of why women may be underrepresented in science and engineering, said, "I just want to see a whole bunch more women enter the field so these issues don't have to come up anymore."
Lynn Hankinson Nelson writes that feminist empiricists find fundamental differences between the experiences of men and women. Thus, they seek to obtain knowledge through the examination of the experiences of women and to "uncover the consequences of omitting, misdescribing, or devaluing them" to account for a range of human experience. Another part of the feminist research agenda is the uncovering of ways in which power inequities are created or reinforced in society and in scientific and academic institutions. Furthermore, despite calls for greater attention to be paid to structures of gender inequity in the academic literature, structural analyses of gender bias rarely appear in highly cited psychological journals, especially in the commonly studied areas of psychology and personality.
One criticism of feminist epistemology is that it allows social and political values to influence its findings. Susan Haack also points out that feminist epistemology reinforces traditional stereotypes about women's thinking (as intuitive and emotional, etc.); Meera Nanda further cautions that this may in fact trap women within "traditional gender roles and help justify patriarchy".
Biology and gender
Modern feminism challenges the essentialist view of gender as biologically intrinsic. For example, Anne Fausto-Sterling's book, Myths of Gender, explores the assumptions embodied in scientific research that support a biologically essentialist view of gender. In Delusions of Gender, Cordelia Fine disputes scientific evidence that suggests that there is an innate biological difference between men's and women's minds, asserting instead that cultural and societal beliefs are the reason for differences between individuals that are commonly perceived as sex differences.
Feminist psychology
Feminism in psychology emerged as a critique of the dominant male outlook on psychological research where only male perspectives were studied with all male subjects. As women earned doctorates in psychology, women and their issues were introduced as legitimate topics of study. Feminist psychology emphasizes social context, lived experience, and qualitative analysis. Projects such as Psychology's Feminist Voices have emerged to catalogue the influence of feminist psychologists on the discipline.
Culture
Design
There is a long history of feminist activity in design disciplines like industrial design, graphic design and fashion design. This work has explored topics like beauty, DIY, feminine approaches to design and community-based projects. Some iconic writing includes Cheryl Buckley's essays on design and patriarchy and Joan Rothschild's Design and Feminism: Re-Visioning Spaces, Places, and Everyday Things. More recently, Isabel Prochner's research explored how feminist perspectives can support positive change in industrial design, helping to identify systemic social problems and inequities in design and guiding socially sustainable and grassroots design solutions.
Businesses
Feminist activists have established a range of feminist businesses, including feminist bookstores, credit unions, presses, mail-order catalogs and restaurants. These businesses flourished as part of the second and third waves of feminism in the 1970s, 1980s, and 1990s.
Visual arts
Corresponding with general developments within feminism, and often including such self-organizing tactics as the consciousness-raising group, the feminist art movement began in the 1960s and flourished throughout the 1970s. Jeremy Strick, director of the Museum of Contemporary Art in Los Angeles, described the feminist art movement as "the most influential international movement of any during the postwar period", and Peggy Phelan says that it "brought about the most far-reaching transformations in both artmaking and art writing over the past four decades". Feminist artist Judy Chicago, who created The Dinner Party, a set of vulva-themed ceramic plates in the 1970s, said in 2009 to ARTnews, "There is still an institutional lag and an insistence on a male Eurocentric narrative. We are trying to change the future: to get girls and boys to realize that women's art is not an exception—it's a normal part of art history." A feminist approach to the visual arts has most recently developed through cyberfeminism and the posthuman turn, giving voice to the ways "contemporary female artists are dealing with gender, social media and the notion of embodiment".
Literature
The feminist movement produced feminist fiction, feminist non-fiction, and feminist poetry, which created new interest in women's writing. It also prompted a general reevaluation of women's historical and academic contributions in response to the belief that women's lives and contributions have been underrepresented as areas of scholarly interest. There has also been a close link between feminist literature and activism, with feminist writing typically voicing key concerns or ideas of feminism in a particular era.
Much of the early period of feminist literary scholarship was given over to the rediscovery and reclamation of texts written by women. In Western feminist literary scholarship, studies like Dale Spender's Mothers of the Novel (1986) and Jane Spencer's The Rise of the Woman Novelist (1986) were ground-breaking in their insistence that women have always been writing.
Commensurate with this growth in scholarly interest, various presses began the task of reissuing long-out-of-print texts. Virago Press began to publish its large list of 19th- and early-20th-century novels in 1975 and became one of the first commercial presses to join in the project of reclamation. In the 1980s, Pandora Press, responsible for publishing Spender's study, issued a companion line of 18th-century novels written by women. More recently, Broadview Press continues to issue 18th- and 19th-century novels, many hitherto out of print, and the University of Kentucky has a series of republications of early women's novels.
Particular works of literature have come to be known as key feminist texts. A Vindication of the Rights of Woman (1792) by Mary Wollstonecraft is one of the earliest works of feminist philosophy. A Room of One's Own (1929) by Virginia Woolf is noted for its argument for both a literal and figurative space for women writers within a literary tradition dominated by patriarchy.
The widespread interest in women's writing is related to a general reassessment and expansion of the literary canon. Interest in post-colonial literatures, gay and lesbian literature, writing by people of colour, working people's writing, and the cultural productions of other historically marginalized groups has resulted in a wholesale expansion of what is considered "literature", and genres hitherto not regarded as "literary", such as children's writing, journals, letters, travel writing, and many others are now the subjects of scholarly interest. Most genres and subgenres have undergone a similar analysis, so literary studies have entered new territories such as the "female gothic" or women's science fiction.
According to Elyce Rae Helford, "Science fiction and fantasy serve as important vehicles for feminist thought, particularly as bridges between theory and practice." Feminist science fiction is sometimes taught at the university level to explore the role of social constructs in understanding gender. Notable texts of this kind are Ursula K. Le Guin's The Left Hand of Darkness (1969), Joanna Russ' The Female Man (1975), Octavia Butler's Kindred (1979) and Margaret Atwood's The Handmaid's Tale (1985).
Feminist nonfiction has played an important role in voicing concerns about women's lived experiences. For example, Maya Angelou's I Know Why the Caged Bird Sings was extremely influential, as it represented the specific racism and sexism experienced by black women growing up in the United States.
In addition, many feminist movements have embraced poetry as a vehicle through which to communicate feminist ideas to public audiences through anthologies, poetry collections, and public readings.
Moreover, historical pieces of writing by women have been used by feminists to speak about what women's lives were like in the past while demonstrating the power that they held and the impact they had in their communities. An important figure in the history of women's literature is Hrotsvitha (c. 935 – c. 973), a canoness who was an early female poet in the German lands. As a historian, Hrotsvitha is one of the few writers to address women's lives from a woman's perspective during the Middle Ages. Hrotsvitha's six short dramas are considered to be her magnum opus. She has been called "the most remarkable woman of her time" and an important figure in the history of women.
Music
Women's music (or womyn's music or wimmin's music) is the music by women, for women, and about women. The genre emerged as a musical expression of the second-wave feminist movement as well as the labour, civil rights, and peace movements. The movement was started by lesbians such as Cris Williamson, Meg Christian, and Margie Adam, African-American women activists such as Bernice Johnson Reagon and her group Sweet Honey in the Rock, and peace activist Holly Near. Women's music also refers to the wider industry of women's music that goes beyond the performing artists to include studio musicians, producers, sound engineers, technicians, cover artists, distributors, promoters, and festival organizers who are also women.
Riot grrrl is an underground feminist hardcore punk movement described in the cultural movements section of this article.
Feminism became a principal concern of musicologists in the 1980s as part of the New Musicology. Prior to this, in the 1970s, musicologists were beginning to discover women composers and performers, and had begun to review concepts of canon, genius, genre and periodization from a feminist perspective. In other words, the question of how women musicians fit into traditional music history was now being asked. Through the 1980s and 1990s, this trend continued as musicologists like Susan McClary, Marcia Citron and Ruth Solie began to consider the cultural reasons for the marginalizing of women from the received body of work. Concepts such as music as gendered discourse; professionalism; reception of women's music; examination of the sites of music production; relative wealth and education of women; popular music studies in relation to women's identity; patriarchal ideas in music analysis; and notions of gender and difference are among the themes examined during this time.
While the music industry has long been open to having women in performance or entertainment roles, women are much less likely to have positions of authority, such as being the leader of an orchestra. In popular music, while there are many women singers recording songs, there are very few women behind the audio console acting as music producers, the individuals who direct and manage the recording process.
Cinema
Feminist cinema, advocating or illustrating feminist perspectives, arose largely with the development of feminist film theory in the late 1960s and early 1970s. Women were radicalized during the 1960s by political debate and sexual liberation, but the failure of radicalism to produce substantive change for women galvanized them to form consciousness-raising groups and set about analysing, from different perspectives, dominant cinema's construction of women. Differences were particularly marked between feminists on either side of the Atlantic. 1972 saw the first feminist film festivals in the U.S. and U.K. as well as the first feminist film journal, Women & Film. Trailblazers from this period included Claire Johnston and Laura Mulvey, who also organized the Women's Event at the Edinburgh Film Festival. Other theorists making a powerful impact on feminist film include Teresa de Lauretis, Anneke Smelik and Kaja Silverman. Approaches in philosophy and psychoanalysis fuelled feminist film criticism, feminist independent film and feminist distribution.
It has been argued that there are two distinct approaches to independent, theoretically inspired feminist filmmaking. 'Deconstruction' concerns itself with analysing and breaking down codes of mainstream cinema, aiming to create a different relationship between the spectator and dominant cinema. The second approach, a feminist counterculture, embodies feminine writing to investigate a specifically feminine cinematic language. Bracha L. Ettinger developed a field of notions and concepts, centred on the Matrixial Gaze, that serves the study of cinema from a feminine perspective; her language includes original concepts for uncovering feminine perspectives. Many writers in the fields of film theory and contemporary art draw on the Ettingerian matrixial sphere (matricial sphere).
During the 1930s–1950s heyday of the big Hollywood studios, the status of women in the industry was abysmal. Since then female directors such as Sally Potter, Catherine Breillat, Claire Denis and Jane Campion have made art movies, and directors like Kathryn Bigelow and Patty Jenkins have had mainstream success. This progress stagnated in the 1990s, and men outnumber women five to one in behind-the-camera roles.
Politics
Feminism had complex interactions with the major political movements of the 20th century.
Socialism
Since the late 19th century, some feminists have allied with socialism, whereas others have criticized socialist ideology for being insufficiently concerned about women's rights. August Bebel, an early activist of the German Social Democratic Party (SPD), published his work Die Frau und der Sozialismus (Woman and Socialism), juxtaposing the struggle for equal rights between sexes with social equality in general. In 1907 there was an International Conference of Socialist Women in Stuttgart where suffrage was described as a tool of class struggle. Clara Zetkin of the SPD called for women's suffrage to build a "socialist order, the only one that allows for a radical solution to the women's question".
In Britain, the women's movement was allied with the Labour party. In the U.S., Betty Friedan emerged from a radical background to take leadership. Radical Women is the oldest socialist feminist organization in the U.S. and is still active. During the Spanish Civil War, Dolores Ibárruri (La Pasionaria) led the Communist Party of Spain. Although she supported equal rights for women, she opposed women fighting on the front and clashed with the anarcha-feminist Mujeres Libres.
Feminists in Ireland in the early 20th century included the revolutionary Irish Republican, suffragette and socialist Constance Markievicz, who in 1918 was the first woman elected to the British House of Commons. However, in line with Sinn Féin abstentionist policy, she would not take her seat in the House of Commons. She was re-elected to the Second Dáil in the elections of 1921. She was also a commander of the Irish Citizen Army, which was led by the socialist and self-described feminist Irish leader James Connolly, during the 1916 Easter Rising.
Fascism
Fascism has been ascribed ambiguous stances on feminism by its practitioners and by women's groups. Amongst other demands concerning social reform presented in the Fascist manifesto in 1919 was expanding the suffrage to all Italian citizens of age 18 and above, including women (accomplished only in 1946, after the defeat of fascism), and eligibility for all to stand for office from age 25. This demand was particularly championed by special Fascist women's auxiliary groups such as the fasci femminili and was only partly realized in 1925, under pressure from dictator Benito Mussolini's more conservative coalition partners.
Cyprian Blamires states that although feminists were among those who opposed the rise of Adolf Hitler, feminism has a complicated relationship with the Nazi movement as well. While Nazis glorified traditional notions of patriarchal society and its role for women, they claimed to recognize women's equality in employment. However, Hitler and Mussolini declared themselves as opposed to feminism, and after the rise of Nazism in Germany in 1933, there was a rapid dissolution of the political rights and economic opportunities that feminists had fought for during the pre-war period and to some extent during the 1920s. Georges Duby et al. write that in practice fascist society was hierarchical and emphasized male virility, with women maintaining a largely subordinate position. Blamires also writes that neofascism has since the 1960s been hostile towards feminism and advocates that women accept "their traditional roles".
Civil rights movement and anti-racism
The civil rights movement has influenced and informed the feminist movement and vice versa. Many American feminists adapted the language and theories of black equality activism and drew parallels between women's rights and the rights of non-white people. Despite the connections between the women's and civil rights movements, some tensions arose during the late 1960s and the 1970s as non-white women argued that feminism was predominantly white, straight, and middle class, and did not understand and was not concerned with issues of race and sexuality. Similarly, some women argued that the civil rights movement had sexist and homophobic elements and did not adequately address minority women's concerns. These criticisms created new feminist social theories about identity politics and the intersections of racism, classism, and sexism; they also generated new feminisms such as black feminism and Chicana feminism in addition to making large contributions to lesbian feminism and other integrations of queer of colour identity.
Neoliberalism
Neoliberalism has been criticized by feminist theory for having a negative effect on the female workforce population across the globe, especially in the global south. Masculinist assumptions and objectives continue to dominate economic and geopolitical thinking. Women's experiences in non-industrialized countries reveal often deleterious effects of modernization policies and undercut orthodox claims that development benefits everyone.
Proponents of neoliberalism have theorized that by increasing women's participation in the workforce, there will be heightened economic progress, but feminist critics have stated that this participation alone does not further equality in gender relations. Neoliberalism has failed to address significant problems such as the devaluation of feminized labour, the structural privileging of men and masculinity, and the politicization of women's subordination in the family and the workplace. The "feminization of employment" refers to a conceptual characterization of deteriorated and devalorized labour conditions that are less desirable, meaningful, safe and secure. Employers in the global south have perceptions about feminine labour and seek workers who are perceived to be undemanding, docile and willing to accept low wages. Social constructs about feminized labour have played a big part in this; for instance, employers often perpetuate ideas about women as "secondary income earners" to justify their lower rates of pay and to cast them as not deserving of training or promotion.
Societal impact
The feminist movement has effected change in Western society, including women's suffrage; greater access to education; more equal payment to men; the right to initiate divorce proceedings; the right of women to make individual decisions regarding pregnancy (including access to contraceptives and abortion); and the right to own property.
Civil rights
From the 1960s on, the campaign for women's rights was met with mixed results in the U.S. and the U.K. Other countries of the EEC agreed to ensure that discriminatory laws would be phased out across the European Community.
Some feminist campaigning also helped reform attitudes to child sexual abuse. The view that young girls cause men to have sexual intercourse with them was replaced by one holding men, as adults, responsible for their own conduct.
In the U.S., the National Organization for Women (NOW) began in 1966 to seek women's equality, including through the Equal Rights Amendment (ERA), which did not pass, although some states enacted their own. Reproductive rights in the U.S. centred on the court decision in Roe v. Wade enunciating a woman's right to choose whether to carry a pregnancy to term.
The division of labour within households was affected by the increased entry of women into workplaces in the 20th century. Sociologist Arlie Russell Hochschild found that, in two-career couples, men and women, on average, spend about equal amounts of time working, but women still spend more time on housework, although Cathy Young responded by arguing that women may prevent equal participation by men in housework and parenting. Judith K. Brown writes, "Women are most likely to make a substantial contribution when subsistence activities have the following characteristics: the participant is not obliged to be far from home; the tasks are relatively monotonous and do not require rapt concentration and the work is not dangerous, can be performed in spite of interruptions, and is easily resumed once interrupted."
In international law, the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) is an international convention adopted by the United Nations General Assembly and described as an international bill of rights for women. It came into force in those nations ratifying it.
Jurisprudence
Feminist jurisprudence is a branch of jurisprudence that examines the relationship between women and law. It addresses questions about the history of legal and social biases against women and about the enhancement of their legal rights.
Feminist jurisprudence signifies a reaction to the philosophical approach of modern legal scholars, who typically see the law as a process for interpreting and perpetuating a society's universal, gender-neutral ideals. Feminist legal scholars claim that this fails to acknowledge women's values or legal interests or the harms that they may anticipate or experience.
Language
Proponents of gender-neutral language argue that the use of gender-specific language often implies male superiority or reflects an unequal state of society. According to The Handbook of English Linguistics, generic masculine pronouns and gender-specific job titles are instances "where English linguistic convention has historically treated men as prototypical of the human species."
Merriam-Webster chose "feminism" as its 2017 Word of the Year, noting that "Word of the Year is a quantitative measure of interest in a particular word."
Theology
Feminist theology is a movement that reconsiders the traditions, practices, scriptures, and theologies of religions from a feminist perspective. Some of the goals of feminist theology include increasing the role of women among the clergy and religious authorities, reinterpreting male-dominated imagery and language about God, determining women's place in relation to career and motherhood, and studying images of women in the religion's sacred texts.
Christian feminism is a branch of feminist theology which seeks to interpret and understand Christianity in light of the equality of women and men, and which holds that this interpretation is necessary for a complete understanding of Christianity. While there is no standard set of beliefs among Christian feminists, most agree that God does not discriminate on the basis of sex, and are involved in issues such as the ordination of women, male dominance and the balance of parenting in Christian marriage, claims of moral deficiency and inferiority of women compared to men, and the overall treatment of women in the church.
Islamic feminists advocate women's rights, gender equality, and social justice grounded within an Islamic framework. Advocates seek to highlight the deeply rooted teachings of equality in the Quran and encourage a questioning of the patriarchal interpretation of Islamic teaching through the Quran, hadith (sayings of Muhammad), and sharia (law) towards the creation of a more equal and just society. Although rooted in Islam, the movement's pioneers have also used secular and Western feminist discourses and recognize the role of Islamic feminism as part of an integrated global feminist movement.
Buddhist feminism is a movement that seeks to improve the religious, legal, and social status of women within Buddhism. It is an aspect of feminist theology which seeks to advance and understand the equality of men and women morally, socially, spiritually, and in leadership from a Buddhist perspective. The Buddhist feminist Rita Gross describes Buddhist feminism as "the radical practice of the co-humanity of women and men".
Jewish feminism is a movement that seeks to improve the religious, legal, and social status of women within Judaism and to open up new opportunities for religious experience and leadership for Jewish women. The main issues for early Jewish feminists in these movements were the exclusion from the all-male prayer group or minyan, the exemption from positive time-bound mitzvot, and women's inability to function as witnesses and to initiate divorce. Many Jewish women have become leaders of feminist movements throughout their history.
Dianic Wicca is a feminist-centred thealogy.
Secular or atheist feminists have engaged in feminist criticism of religion, arguing that many religions have oppressive rules towards women and misogynistic themes and elements in religious texts.
Patriarchy
Patriarchy is a social system in which society is organized around male authority figures. In this system, fathers have authority over women, children, and property. It implies the institutions of male rule and privilege and is dependent on female subordination. Most forms of feminism characterize patriarchy as an unjust social system that is oppressive to women. Carole Pateman argues that the patriarchal distinction "between masculinity and femininity is the political difference between freedom and subjection." In feminist theory the concept of patriarchy often includes all the social mechanisms that reproduce and exert male dominance over women. Feminist theory typically characterizes patriarchy as a social construction, which can be overcome by revealing and critically analyzing its manifestations. Some radical feminists have proposed that because patriarchy is too deeply rooted in society, separatism is the only viable solution. Other feminists have criticized these views as being anti-men.
Men and masculinity
Feminist theory has explored the social construction of masculinity and its implications for the goal of gender equality. The social construct of masculinity is seen by feminism as problematic because it associates males with aggression and competition, and reinforces patriarchal and unequal gender relations. Patriarchal cultures are criticized for "limiting forms of masculinity" available to men and thus narrowing their life choices. Some feminists are engaged with men's issues activism, such as bringing attention to male rape and spousal battery and addressing negative social expectations for men.
Male participation in feminism is generally encouraged by feminists and is seen as an important strategy for achieving full societal commitment to gender equality. Many male feminists and pro-feminists are active in women's rights activism, feminist theory, and masculinity studies. However, some argue that while male engagement with feminism is necessary, it is problematic because of the ingrained social influences of patriarchy in gender relations. The consensus today in feminist and masculinity theories is that men and women should cooperate to achieve the larger goals of feminism.
Reactions
Different groups of people have responded to feminism, and both men and women have been among its supporters and critics. Among American university students, for both men and women, support for feminist ideas is more common than self-identification as a feminist. The US media tends to portray feminism negatively and feminists "are less often associated with day-to-day work/leisure activities of regular women". However, as recent research has demonstrated, as people are exposed to self-identified feminists and to discussions relating to various forms of feminism, their own self-identification with feminism increases.
Pro-feminism
Pro-feminism is the support of feminism without implying that the supporter is a member of the feminist movement. The term is most often used in reference to men who are actively supportive of feminism. The activities of pro-feminist men's groups include anti-violence work with boys and young men in schools, offering sexual harassment workshops in workplaces, running community education campaigns, and counselling male perpetrators of violence. Pro-feminist men also may be involved in men's health, activism against pornography including anti-pornography legislation, men's studies, and the development of gender equity curricula in schools. This work is sometimes in collaboration with feminists and women's services, such as domestic violence and rape crisis centres.
Anti-feminism and criticism of feminism
Anti-feminism is opposition to feminism in some or all of its forms.
In the 19th century, anti-feminism was mainly focused on opposition to women's suffrage. Later, opponents of women's entry into institutions of higher learning argued that education was too great a physical burden on women. Other anti-feminists opposed women's entry into the labour force, or their right to join unions, to sit on juries, or to obtain birth control and control of their sexuality.
Some people have opposed feminism on the grounds that they believe it is contrary to traditional values or religious beliefs. Some anti-feminists argue, for example, that social acceptance of divorce and non-married women is wrong and harmful, and that men and women are fundamentally different and thus their different traditional roles in society should be maintained. Other anti-feminists oppose women's entry into the workforce, political office, and the voting process, as well as the lessening of male authority in families.
Writers such as Camille Paglia, Christina Hoff Sommers, Jean Bethke Elshtain, Elizabeth Fox-Genovese, and Daphne Patai oppose some forms of feminism, though they identify as feminists. They argue, for example, that feminism often promotes misandry and the elevation of women's interests above men's, and criticize radical feminist positions as harmful to both men and women. Daphne Patai and Noretta Koertge argue that the term "anti-feminist" is used to silence academic debate about feminism. A 2023 meta-analysis published in the journal Psychology of Women Quarterly investigated the stereotype of feminists' attitudes to men and concluded that feminists' views of men were no different from those of non-feminists, or from men's views of men; it termed the stereotype "the misandry myth", based on "evidence that it is false and widespread".
Secular humanism
Secular humanism is an ethical framework that attempts to dispense with any unreasoned dogma, pseudoscience, and superstition. Critics of feminism sometimes ask "Why feminism and not humanism?". Some humanists argue, however, that the goals of feminists and humanists largely overlap, and the distinction is only in motivation. For example, a humanist may consider abortion in terms of a utilitarian ethical framework, rather than considering the motivation of any particular woman in getting an abortion. In this respect, it is possible to be a humanist without being a feminist, but this does not preclude the existence of feminist humanism. Humanism played a significant role in protofeminism during the Renaissance period in that humanists made educated women popular figures despite the challenge of the patriarchal organization of society.
See also
Anti-subordination principle
Black feminism
Decolonial feminism
Feminism and racism
Feminist Studies
Feminist peace research
Index of feminism articles
Indigenous feminism
Lesbian erasure
List of feminist parties
List of queens regnant
Masculism
Matriarchy
Matrilineality
Men's rights movement
Multiracial feminist theory
Straw feminism
White feminism
Explanatory notes
References
Bibliography
Further reading
Mitchell, Brian (1998). Women in the Military: Flirting with Disaster. Washington, D.C.: Regnery Publishing. xvii, 390 p.
Feminist.com
Psychology's Feminist Voices
"Topics in Feminism", at the Stanford Encyclopedia of Philosophy
External links
Active research
Feminist Perspectives Scale: an academic survey to determine acceptance or rejection of feminist ideas
Multimedia and documents
Early Video on the Emancipation of Women, a documentary which includes footage from the 1890s
Documents from the Women's Liberation Movement, Special Collections Library, Duke University
History of feminism at Heritage Calling, Historic England
Post-postmodernism | Post-postmodernism is a wide-ranging set of developments in critical theory, philosophy, architecture, art, literature, and culture which are emerging from and reacting to postmodernism.
Periodization
Most scholars would agree that modernism began around 1900 and continued on as the dominant cultural force in the intellectual circles of Western culture well into the mid-twentieth century. Like all eras, modernism encompasses many competing individual directions and is impossible to define as a discrete unity or totality. However, its chief general characteristics are often thought to include an emphasis on "radical aesthetics, technical experimentation, spatial or rhythmic, rather than chronological form, [and] self-conscious reflexiveness" as well as the search for authenticity in human relations, abstraction in art, and utopian striving. These characteristics are normally lacking in postmodernism or are treated as objects of irony.
Postmodernism arose after World War II as a reaction to the perceived failings of modernism, whose radical artistic projects had come to be associated with totalitarianism or had been assimilated into mainstream culture. The basic features of what is now called postmodernism can be found as early as the 1940s, most notably in the work of Jorge Luis Borges. However, most scholars today would agree that postmodernism began to compete with modernism in the late 1950s and gained ascendancy over it in the 1960s. Since then, postmodernism has been a dominant, though not undisputed, force in art, literature, film, music, drama, architecture, history, and continental philosophy. Salient features of postmodernism are normally thought to include the ironic play with styles, citations and narrative levels, a metaphysical skepticism or nihilism towards a "grand narrative" of Western culture, a preference for the virtual at the expense of the real (or more accurately, a fundamental questioning of what "the real" constitutes) and a "waning of affect" on the part of the subject, who is caught up in the free interplay of virtual, endlessly reproducible signs inducing a state of consciousness similar to schizophrenia.
Since the late 1990s, there has been a small but growing feeling both in popular culture and in academia that postmodernism "has gone out of fashion." However, there have been few formal attempts to define and name the era succeeding postmodernism, and none of the proposed designations has yet become part of mainstream usage.
Definitions
Consensus on what constitutes an era cannot be easily achieved while that era is still in its early stages. However, a common theme is emerging in current attempts to define post-postmodernism: faith, trust, dialogue, performance, and sincerity can work to transcend postmodern irony. The following definitions, which vary widely in depth, focus, and scope, are listed in the chronological order of their appearance.
Turner's post-postmodernism
In 1995, the landscape architect and urban planner Tom Turner issued a book-length call for a post-postmodern turn in urban planning. Turner criticizes the postmodern credo of "anything goes" and suggests that "the built environment professions are witnessing the gradual dawn of a post-Postmodernism that seeks to temper reason with faith." In particular, Turner argues for the use of timeless organic and geometrical patterns in urban planning. As sources of such patterns he cites, among others, the Taoist-influenced work of the American architect Christopher Alexander, gestalt psychology and the psychoanalyst Carl Jung's concept of archetypes. Regarding terminology, Turner urges people to "embrace post-Postmodernism – and pray for a better name."
Epstein's trans-postmodernism
In his 1999 book on Russian postmodernism, the Russian-American Slavist Mikhail Epstein suggested that postmodernism "is ... part of a much larger historical formation," which he calls "postmodernity". Epstein believes that postmodernist aesthetics will eventually become entirely conventional and provide the foundation for a new, non-ironic kind of poetry, which he describes using the prefix "trans-".
As an example Epstein cites the work of the contemporary Russian poet Timur Kibirov.
Kirby's pseudo-modernism or digimodernism
In his 2006 paper The Death of Postmodernism and Beyond, the British scholar Alan Kirby formulated a socio-cultural assessment of post-postmodernism that he calls "pseudo-modernism". Kirby associates pseudo-modernism with the triteness and shallowness resulting from the instantaneous, direct, and superficial participation in culture made possible by the internet, mobile phones, interactive television and similar means: "In pseudo-modernism one phones, clicks, presses, surfs, chooses, moves, downloads."
Pseudo-modernism's "typical intellectual states" are furthermore described as being "ignorance, fanaticism and anxiety" and it is said to produce a "trance-like state" in those participating in it. The net result of this media-induced shallowness and instantaneous participation in trivial events is a "silent autism" superseding "the neurosis of modernism and the narcissism of postmodernism." Kirby sees no aesthetically valuable works coming out of "pseudo-modernism". As examples of its triteness he cites reality TV, interactive news programs, "the drivel found ... on some Wikipedia pages", docu-soaps, and the essayistic cinema of Michael Moore or Morgan Spurlock. In a book published in September 2009 titled Digimodernism: How New Technologies Dismantle the Postmodern and Reconfigure our Culture, Kirby further developed and nuanced his views on culture and textuality in the aftermath of postmodernism.
Vermeulen and van den Akker's metamodernism
In 2010, the cultural theorists Timotheus Vermeulen and Robin van den Akker introduced the term metamodernism as an intervention in the post-postmodernism debate. In their article "Notes on Metamodernism" they assert that the 2000s are characterized by the emergence of a sensibility that oscillates between, and must be situated beyond, modern positions and postmodern strategies. As examples of the metamodern sensibility Vermeulen and van den Akker cite the "informed naivety", "pragmatic idealism" and "moderate fanaticism" of the various cultural responses to, among others, climate change, the financial crisis, and (geo)political instability.
The prefix 'meta' here refers not to some reflective stance or repeated rumination, but to Plato's metaxy, which denotes a movement between opposite poles as well as beyond them.
See also
Altermodern
Cold War
Dogme 95
Excessivism
Integral theory (Ken Wilber)
Kitsch movement
Maximalism
Metamodernism
Neo-minimalism
New Puritans
New Sincerity
New Urbanism
Post-irony
Post-truth
Pseudorealism
Radical orthodoxy
Remodernism
Stuckism
Transmodernism
References
External links
Essay by Alan Kirby on theories of post-postmodernism
Essay by Mikhail Epstein on The Place of Postmodernism in Postmodernity
Introduction to "Digimodernism: How New Technologies Dismantle the Postmodern and Reconfigure Our Culture" by Alan Kirby
Notes on metamodernism
Performatism.de (Resource site for performatism and theories of post-postmodernism)
Post-post-modernism known as Authenticism
Post-postmodern novel by Patrick J. F. Quere
Post-postmodernism known as Hyperhybridism
Metal Ages | The Metal Ages is a term for the period of human civilization beginning about 6,000 years ago during which metallurgy rapidly advanced, and human populations started using metals such as copper, tin, bronze and finally iron to make tools and weapons. By heating and shaping metals in hot furnaces, humanity also learned to use precious metals such as gold and silver to make intricate ornaments.
With these technological adaptations, human society became more productive and human settlements became larger and more prosperous, but also more violent. The Metal Ages are divided into three stages: the Copper Age, the Bronze Age, and the Iron Age.
References
Oligarchy | Oligarchy (; ) is a conceptual form of power structure in which power rests with a small number of people. These people may or may not be distinguished by one or several characteristics, such as nobility, fame, wealth, education, or corporate, religious, political, or military control.
Throughout history, power structures considered to be oligarchies have often been viewed as coercive, relying on public obedience or oppression to exist. Aristotle pioneered the use of the term as meaning rule by the rich, contrasting it with aristocracy, arguing that oligarchy was the perverted form of aristocracy.
Types
Minority rule
The consolidation of power by a dominant religious or ethnic minority can be considered a form of oligarchy. Examples include South Africa during apartheid, Liberia under Americo-Liberians, the Sultanate of Zanzibar, and Rhodesia. In these cases, oligarchic rule was often tied to the legacy of colonialism.
In the early 20th century, Robert Michels expanded on this idea in his Iron Law of Oligarchy. He argued that even democracies, like all large organizations, tend to become oligarchic due to the necessity of dividing labor, which ultimately results in a ruling class focused on maintaining its power.
Putative oligarchies
Business groups may be considered oligarchies if they meet the following criteria:
They are the largest private owners in the country.
They possess sufficient political power to influence their own interests.
The owners control multiple businesses, coordinating activities across sectors.
Intellectual oligarchies
George Bernard Shaw coined the concept of an intellectual oligarchy in his play Major Barbara (1907). In the play, Shaw criticizes the control of society by intellectual elites and expresses a desire for the empowerment of the common people:
I now want to give the common man weapons against the intellectual man. I love the common people. I want to arm them against the lawyer, the doctor, the priest, the literary man, the professor, the artist, and the politician, who, once in authority, is the most dangerous, disastrous, and tyrannical of all the fools, rascals, and impostors. I want a democratic power strong enough to force the intellectual oligarchy to use its genius for the general good or else perish.
Countries perceived as oligarchies
Jeffrey A. Winters and Benjamin I. Page have described Colombia, Indonesia, Russia, Singapore and the United States as oligarchies.
The Philippines
During the Presidency of Ferdinand Marcos from 1965 to 1986, several monopolies arose in the Philippines, primarily linked to the Marcos family and their close associates. Analysts have described this period, and even subsequent decades, as an era of oligarchy in the Philippines.
President Rodrigo Duterte, elected in 2016, promised to dismantle the oligarchy during his presidency. However, corporate oligarchy persisted throughout his tenure. While Duterte criticized prominent tycoons such as the Ayalas and Manny Pangilinan, corporate figures allied with Duterte, including Dennis Uy of Udenna Corporation, benefitted during his administration.
Russian Federation
Since the dissolution of the Soviet Union in 1991 and the subsequent privatization of state-owned assets, a class of Russian oligarchs has emerged. These oligarchs gained control of significant portions of the economy, especially in the energy, metals, and natural resources sectors. Many of these individuals maintained close ties with government officials, particularly the president, leading some to characterize modern Russia as an oligarchy intertwined with the state.
Iran
The Islamic Republic of Iran, established after the 1979 Iranian Revolution, is sometimes described as a clerical oligarchy. Its ruling system, known as Velayat-e-Faqih (Governance of the Jurist), places power in the hands of a small group of high-ranking Shia clerics, led by the Supreme Leader. This group holds significant influence over the country's legislative, military, and economic affairs, and critics argue that this system concentrates power in a religious elite, marginalizing other voices within society.
Ukraine
Since Ukraine's independence in 1991, a powerful class of business elites, known as Ukrainian oligarchs, has played a significant role in the country's politics and economy. These oligarchs gained control of state assets during the rapid privatization that followed the collapse of the Soviet Union. In 2021, Ukraine passed a law aimed at curbing oligarchic influence on politics and the economy.
United States
Several commentators and scholars have suggested that the United States demonstrates characteristics of an oligarchy, particularly in relation to the concentration of wealth and political influence among a small elite, as exemplified by lists of top political party donors.
Economist Simon Johnson argued that the rise of an American financial oligarchy became particularly prominent following the 2008 financial crisis. This financial elite has been described as wielding significant power over both the economy and political decisions.
Former President Jimmy Carter in 2015 characterized the United States as an "oligarchy with unlimited political bribery" following the 2010 Citizens United v. FEC Supreme Court decision, which removed limits on independent political spending by corporations and unions.
In 2014, a study by political scientists Martin Gilens of Princeton University and Benjamin Page of Northwestern University argued that the United States' political system does not primarily reflect the preferences of its average citizens. Their analysis of policy outcomes between 1981 and 2002 suggested that wealthy individuals and business groups held substantial influence over political decisions, often sidelining the majority of Americans. While the United States maintains democratic features such as regular elections, freedom of speech, and widespread suffrage, the study noted that policy decisions are disproportionately influenced by economic elites.
However, the study received criticism from other scholars, who argued that the influence of average citizens should not be discounted and that the conclusions about oligarchic tendencies were overstated. Gilens and Page defended their research, reiterating that while they do not label the United States an outright oligarchy, they found substantial evidence of economic elites dominating certain areas of policy-making.
China
The National Geographic Society's online encyclopedia considers China to be an oligarchy.
See also
Aristocracy
Cacique democracy
Despotism
Dictatorship
Inverted totalitarianism
Iron law of oligarchy
Kleptocracy
Meritocracy
Military dictatorship
Minoritarianism
Nepotism
Netocracy
Oligopoly
Oligarchical collectivism
Parasitism
Plutocracy
Political family
Power behind the throne
The Power Elite (1956 book by C. Wright Mills)
Polyarchy
Stratocracy
Synarchism
Theocracy
Timocracy
References
Further reading
Ostwald, M. (2000), Oligarchia: The Development of a Constitutional Form in Ancient Greece (Historia Einzelschriften; 144). Stuttgart: Steiner.
External links
Authoritarianism
Political culture
Human geography
Human geography or anthropogeography is the branch of geography which studies spatial relationships between human communities, cultures, economies, and their interactions with the environment, examples of which include urban sprawl and urban redevelopment. It analyzes spatial interdependencies between social interactions and the environment through qualitative and quantitative methods. This multidisciplinary approach draws from sociology, anthropology, economics, and environmental science, contributing to a comprehensive understanding of the intricate connections that shape lived spaces.
History
The Royal Geographical Society was founded in England in 1830. The first professor of geography in the United Kingdom was appointed in 1883, and the first major geographical intellect to emerge in the UK was Halford John Mackinder, appointed professor of geography at the London School of Economics in 1922.
The National Geographic Society was founded in the United States in 1888 and began publication of the National Geographic magazine which became, and continues to be, a great popularizer of geographic information. The society has long supported geographic research and education on geographical topics.
The Association of American Geographers was founded in 1904 and was renamed the American Association of Geographers in 2016 to better reflect the increasingly international character of its membership.
One of the first examples of geographic methods being used for purposes other than to describe and theorize the physical properties of the earth is John Snow's map of the 1854 Broad Street cholera outbreak. Though Snow was primarily a physician and a pioneer of epidemiology rather than a geographer, his map is probably one of the earliest examples of health geography.
The now fairly distinct differences between the subfields of physical and human geography developed at a later date. The connection between both physical and human properties of geography is most apparent in the theory of environmental determinism, made popular in the 19th century by Carl Ritter and others, and has close links to the field of evolutionary biology of the time. Environmental determinism is the theory that people's physical, mental and moral habits are directly due to the influence of their natural environment. However, by the mid-19th century, environmental determinism was under attack for lacking methodological rigor associated with modern science, and later as a means to justify racism and imperialism.
A similar concern with both human and physical aspects is apparent in the regional geography that dominated the later 19th and first half of the 20th centuries. The goal of regional geography, through something known as regionalisation, was to delineate space into regions and then understand and describe the unique characteristics of each region through both human and physical aspects. Through its links to possibilism and cultural ecology, regional geography retained some of environmental determinism's notions of the environment's causal effect on society and culture.
By the 1960s, however, the quantitative revolution led to strong criticism of regional geography. Due to a perceived lack of scientific rigor in an overly descriptive nature of the discipline, and a continued separation of geography from its two subfields of physical and human geography and from geology, geographers in the mid-20th century began to apply statistical and mathematical models in order to solve spatial problems. Much of the development during the quantitative revolution is now apparent in the use of geographic information systems; the use of statistics, spatial modeling, and positivist approaches are still important to many branches of human geography. Well-known geographers from this period are Fred K. Schaefer, Waldo Tobler, William Garrison, Peter Haggett, Richard J. Chorley, William Bunge, and Torsten Hägerstrand.
From the 1970s, a number of critiques of the positivism now associated with geography emerged. Known under the term 'critical geography,' these critiques signaled another turning point in the discipline. Behavioral geography emerged for some time as a means to understand how people perceived spaces and places and made locational decisions. The more influential 'radical geography' emerged in the 1970s and 1980s. It draws heavily on Marxist theory and techniques and is associated with geographers such as David Harvey and Richard Peet. Radical geographers seek to say meaningful things about problems recognized through quantitative methods, provide explanations rather than descriptions, put forward alternatives and solutions, and be politically engaged, rather than using the detachment associated with positivists. (The detachment and objectivity of the quantitative revolution was itself critiqued by radical geographers as being a tool of capital). Radical geography and the links to Marxism and related theories remain an important part of contemporary human geography (see: Antipode). Critical geography also saw the introduction of 'humanistic geography', associated with the work of Yi-Fu Tuan, which pushed for a much more qualitative approach in methodology.
The changes under critical geography have led to contemporary approaches in the discipline such as feminist geography, new cultural geography, settlement geography, and the engagement with postmodern and post-structural theories and philosophies.
Fields
The primary fields of study in human geography focus on the core fields of:
Cultures
Cultural geography is the study of cultural products and norms – their variation across spaces and places, as well as their relations. It focuses on describing and analyzing the ways language, religion, economy, government, and other cultural phenomena vary or remain constant from one place to another and on explaining how humans function spatially.
Subfields include: Social geography, Animal geographies, Language geography, Sexuality and space, Children's geographies, and Religion and geography.
Development
Development geography is the study of the Earth's geography with reference to the standard of living and the quality of life of its human inhabitants. It examines the location, distribution, and spatial organization of economic activities across the Earth. The subject matter investigated is strongly influenced by the researcher's methodological approach.
Economies
Economic geography examines relationships between human economic systems, states, and other factors, and the biophysical environment.
Subfields include: Marketing geography and Transportation geography
Emotion
Food
Health
Medical or health geography is the application of geographical information, perspectives, and methods to the study of health, disease, and health care. Health geography deals with the spatial relations and patterns between people and the environment. This is a sub-discipline of human geography, researching how and why diseases are spread and contained.
Histories
Historical geography is the study of the human, physical, fictional, theoretical, and "real" geographies of the past. Historical geography studies a wide variety of issues and topics. A common theme is the study of the geographies of the past and how a place or region changes through time. Many historical geographers study geographical patterns through time, including how people have interacted with their environment, and created the cultural landscape.
Politics
Political geography is concerned with the study of both the spatially uneven outcomes of political processes and the ways in which political processes are themselves affected by spatial structures.
Subfields include: Electoral geography, Geopolitics, Strategic geography and Military geography.
Population
Population geography is the study of ways in which spatial variations in the distribution, composition, migration, and growth of populations are related to their environment or location.
Settlement
Settlement geography, including urban geography, is the study of urban and rural areas with specific regards to spatial, relational and theoretical aspects of settlement. That is the study of areas which have a concentration of buildings and infrastructure. These are areas where the majority of economic activities are in the secondary sector and tertiary sectors.
Urbanism
Urban geography is the study of cities, towns, and other areas of relatively dense settlement. Two main interests are site (how a settlement is positioned relative to the physical environment) and situation (how a settlement is positioned relative to other settlements). Another area of interest is the internal organization of urban areas with regard to different demographic groups and the layout of infrastructure. This subdiscipline also draws on ideas from other branches of Human Geography to see their involvement in the processes and patterns evident in an urban area.
Subfields include: Economic geography, Population geography, and Settlement geography. These are clearly not the only subfields that could be used to assist in the study of urban geography, but they are among the most significant.
Philosophical and theoretical approaches
Within each of the subfields, various philosophical approaches can be used in research; therefore, an urban geographer could be a feminist or Marxist geographer, etc.
Such approaches are:
Animal geographies
Behavioral geography
Cognitive geography
Critical geography
Feminist geography
Marxist geography
Non-representational theory
Positivism
Postcolonialism
Poststructuralist geography
Psychoanalytic geography
Psychogeography
Spatial analysis
Time geography
List of notable human geographers
Journals
As with all social sciences, human geographers publish research and other written work in a variety of academic journals. Whilst human geography is interdisciplinary, there are a number of journals that focus on human geography.
These include:
ACME: An International E-Journal for Critical Geographies
Antipode
Area
Dialogues in Human Geography
Economic Geography
Environment and Planning
Geoforum
Geografiska Annaler
GeoHumanities
Global Environmental Change: Human and Policy Dimensions
Human Geography
Migration Letters
Progress in Human Geography
Southeastern Geographer
Social & Cultural Geography
Tijdschrift voor economische en sociale geografie
Transactions of the Institute of British Geographers
See also
References
Further reading
External links
Worldmapper – Mapping project using social data sets
Anthropology
Environmental social science
Cultural hegemony
In Marxist philosophy, cultural hegemony is the dominance of a culturally diverse society by the ruling class who shape the culture of that society—the beliefs and explanations, perceptions, values, and mores—so that the worldview of the ruling class becomes the accepted cultural norm. As the universal dominant ideology, the ruling-class worldview misrepresents the social, political, and economic status quo as natural, inevitable, and perpetual social conditions that benefit every social class, rather than as artificial social constructs that benefit only the ruling class.
In philosophy and in sociology, the denotations and the connotations of the term cultural hegemony derive from the Ancient Greek word hegemonia (ἡγεμονία), which indicates the leadership and the régime of the hegemon. In political science, hegemony is the geopolitical dominance exercised by an empire, the hegemon (leader state) that rules the subordinate states of the empire by the threat of intervention, an implied means of power, rather than by threat of direct rule—military invasion, occupation, and territorial annexation.
Background
Historical
In 1848, Karl Marx proposed that the economic recessions and practical contradictions of a capitalist economy would provoke the working class to proletarian revolution, depose capitalism, restructure social institutions (economic, political, social) per the rational models of socialism, and thus begin the transition to a communist society. Therefore, the dialectical changes to the functioning of the economy of a society determine its social superstructures (culture and politics).
To that end, Antonio Gramsci proposed a strategic distinction between the politics for a War of Position and for a War of Manœuvre. The war of position is an intellectual and cultural struggle wherein the anti-capitalist revolutionary creates a proletarian culture whose native value system counters the cultural hegemony of the bourgeoisie. The proletarian culture will increase class consciousness, teach revolutionary theory and historical analysis, and thus further develop revolutionary organisation among the social classes. After winning the war of position, socialist leaders would then have the necessary political power and popular support to realise the war of manœuvre, the political praxis of revolutionary socialism.
Political economy
As Marxist philosophy, cultural hegemony analyses the functions of economic class within the base and superstructure, from which Gramsci developed the functions of social class within the social structures created for and by cultural domination. In the practice of imperialism, cultural hegemony occurs when the working and the peasant classes believe and accept that the prevailing cultural norms of a society (the dominant ideology imposed by the ruling class) realistically describe the natural order of things in society.
In the war of position, the working-class intelligentsia politically educate the working classes to perceive that the prevailing cultural norms are not natural and inevitable social conditions, and to recognize that the social constructs of bourgeois culture function as instruments of socio-economic domination, e.g. the institutions (state, church, and social strata), the conventions (custom and tradition), and beliefs (religions and ideologies), etc. To realise their own working-class culture, the workers and the peasants, by way of their own intellectuals, must perform the necessary analyses of their culture and national history so that the proletariat can transcend the old ways of thinking about the order of things in a society under the cultural hegemony of an imperial power.
Social domination
Gramsci said that cultural and historical analyses of the "natural order of things in society" established by the dominant ideology, would allow common-sense men and women to intellectually perceive the social structures of bourgeois cultural hegemony. In each sphere of life (private and public) common sense is the intellectualism with which people cope with and explain their daily life within their social stratum within the greater social order; yet the limits of common sense inhibit a person's intellectual perception of the exploitation of labour made possible with cultural hegemony. Given the difficulty in perceiving the status quo hierarchy of bourgeois culture (social and economic classes), most people concern themselves with private matters, and so do not question the fundamental sources of their socio-economic oppression, individual and collective.
Intelligentsia
To perceive and combat ruling-class cultural hegemony, the working class and the peasant class depend upon the moral and political leadership of their native intelligentsia, the scholars, academics, and teachers, scientists, philosophers, administrators et al. from their specific social classes; thus Gramsci drew a political distinction between the intellectuals of the bourgeoisie and the intellectuals of the working class, respectively the men and women who are the proponents and the opponents of the cultural status quo.
After Gramsci
German student movement
In 1967, regarding the politics and society of West Germany, the leader of the German Student Movement, Rudi Dutschke, applied Gramsci's analyses of cultural hegemony using the phrase the "Long March through the Institutions" to describe the ideological work necessary to realise the war of position. The allusion to the Long March (1934–35) of the Chinese People's Liberation Army indicates the great work required of the working-class intelligentsia to produce the working-class popular culture with which to replace the dominant ideology imposed by the cultural hegemony of the bourgeoisie.
State apparatuses of ideology
In Ideology and Ideological State Apparatuses (1970), Louis Althusser describes the complex of social relationships among the different organs of the State that transmit and disseminate the dominant ideology to the populations of a society. The ideological state apparatuses (ISA) are the sites of ideological conflict among the social classes of a society; and, unlike the military and police forces, the repressive state apparatuses (RSA), the ISA exist as a plurality throughout society.
Despite the ruling-class control of the RSA, the ideological apparatuses of the state are both the sites and the stakes (the objects) of class struggle, because the ISA are not monolithic social entities, and exist amongst society. As the public and the private sites of continual class struggle, the ideological apparatuses of the state (ISA) are overdetermined zones of society that are composed of elements of the dominant ideologies of previous modes of production, hence the continual political activity in:
the religious ISA (the clergy)
the educational ISA (the public and private school systems)
the family ISA (patriarchal family)
the legal ISA (police and legal, court and penal systems)
the political ISA (political parties)
the company union ISA
the mass communications ISA (print, radio, television, internet, cinema)
the cultural ISA (literature, the arts, sport, etc.)
The parliamentary structures of the State, by which elected politicians exercise the will of the people also are an ideological apparatus of the State, given the State's control of which populations are allowed to participate as political parties. In itself, the political system is an ideological apparatus, because citizens' participation involves intellectually accepting the ideological "fiction, corresponding to a 'certain' reality, that the component parts of the [political] system, as well as the principle of its functioning, are based on the ideology of the 'freedom' and 'equality' of the individual voters and the 'free choice' of the people's representatives, by the individuals that 'make up' the people".
See also
Domination and the Arts of Resistance: Hidden Transcripts (1990), by James C. Scott
Hegemony and Socialist Strategy (1985), by Ernesto Laclau and Chantal Mouffe
"Ideology and Ideological State Apparatuses" (1970), by Louis Althusser
Sheeple
The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society (1962), by Jürgen Habermas
References
Further reading
Bessis, Sophie (2003) Western Supremacy: The Triumph of an Idea. Zed Books.
External links
Anti-corporate activism
Conflict theory
Antonio Gramsci
Marxist terminology
Marxist theory
Postcolonialism
Postmodern theory
Social concepts
Communism
Socialism
Europe, the Middle East and Africa
Europe, the Middle East and Africa, commonly known by the acronym EMEA in North American business circles, is a geographical region used by institutions, governments and global spheres of marketing, media and business when referring to this region. The acronym EMEA is a shorthand way of referencing the two continents (Africa and Europe) and the Middle Eastern sub-continent all at once.
As the name suggests, the region includes all of the countries found on the continents of Africa and Europe, as well as the countries that make up the Middle East. The region is generally accepted to include all European nations and all African nations, and extends east to Iran, including part of Russia. Typically, the acronym does not include overseas territories of mainland countries in the region, such as French Guiana. However, the term is not precisely defined: while it usually refers to Europe, the Middle East and Africa, it is not uncommon for businesses and other institutions to adjust slightly the countries they include under this umbrella term.
One of the reasons why the term is commonly used is because it is useful for business purposes, as most of the region falls within four time zones, which facilitates communication and travel.
The related term "EAA" refers to "Europe, Africa, and Asia".
Historical influence
The historical influence and interdependence of Europe, the Middle East and Africa in relation to trade routes contributed to the development of the term EMEA. The opening of the Suez Canal in 1869 transformed international commerce by directly linking Europe to the Indian Ocean and East Asian trade routes. The direct channel between Britain and India enabled Britain to gradually gain authority over Egypt. This authority was reinforced via the development and maintenance of the Pax Britannica, under which Britain's naval power gave it control over the world's maritime trade routes during the late nineteenth century period of peace.
Related regions
Eastern Europe, Middle East and Africa (EEMEA). Some companies separate their Eastern European business from the rest of Europe, and refer to the EEMEA region separately from the Western/Central European (EU/EFTA) region
Southern Europe, Middle East and Africa (SEMEA)
Southeastern Europe, Middle East and Africa (SEEMEA)
Central and Eastern Europe (CEE)
Central Europe, Middle East and Africa (CEMEA)
The Middle East and Africa (MEA)
The Middle East and North Africa (MENA)
The Middle East, Turkey and Africa (META)
The Middle East, North Africa, Afghanistan and Pakistan (MENAP)
Europe and the Middle East (EME)
Europe, the Middle East and North Africa (EUMENA or EMENA)
Europe, the Middle East, India and Africa (EMEIA or EMIA)
Europe, the Middle East, Africa and Russia (EMEAR)
Europe, the Middle East, Africa and Commonwealth of Independent States (EMEACIS)
Europe, the Middle East, Africa and Caribbean (EMEAC)
The Commonwealth of Independent States (CIS), around the Black Sea and Caspian Sea
North Atlantic and Central Europe (NACE)
Central and Eastern Europe, the Middle East and Africa (CEMA)
Europe, Latin America, Africa, Arab world
Component areas
The EMEA region generally includes:
Europe
Central and Eastern Europe
Northern Europe
Southern Europe
Western Europe
MENA
Sub-Saharan Africa
Eastern Africa
Central Africa
Southern Africa
Western Africa
See also
Americas
Asia-Pacific
MENA
List of country groupings
References
Economic regions
Geographical neologisms
Afro-Eurasia
Discontinuity (Postmodernism)
Discontinuity and continuity according to Michel Foucault reflect the flow of history and the fact that some "things are no longer perceived, described, expressed, characterised, classified, and known in the same way" from one era to the next (1994).
Explanation
In developing the theory of archaeology of knowledge, Foucault was trying to analyse the fundamental codes which a culture uses to construct the episteme or configuration of knowledge that determines the empirical orders and social practices of each particular historical era. He adopted discontinuity as a positive working tool. Some of the discourse would be regular and continuous over time as knowledge steadily accumulates and society gradually establishes what will constitute truth or reason for the time being. But, in a transition from one era to the next, there will be overlaps, breaks and discontinuities as society reconfigures the discourse to match the new environment.
The tool is given an expanded role in genealogy, the next phase of discourse analysis, where the intention is to grasp the total complexity of the use of power and the effects it produces. Foucault sees power as the means for constituting individuals' identities and determining the limits of their autonomy. This reflects the symbiotic relationship between power (pouvoir) and knowledge (savoir). In his study of prisons and hospitals, he observed how the modern individual becomes both an object and subject of knowledge. Science emerges as a means of directing and shaping lives. Hence, the modern conception of sexuality emerges from Christian codes of morality, the science of psychology, the laws and enforcement strategies adopted by the police and judiciary, the way in which issues of sexuality are discussed in the public media, the education system, etc. These are covert forms of domination (if not oppression), and their influence is to be found not only in what is said, but more importantly, in what is not said: in all the silences and lacunae, in all the discontinuities. If an idea that was once discussed is discussed no longer, whose interest is served by the change?
References
Foucault, M. The Order of Things: An Archaeology of the Human Sciences. Vintage; Reissue edition (1994)
Postmodern theory
Post-structuralism
Social philosophy
Structuralism
Michel Foucault
History of Asia
The history of Asia can be seen as the collective history of several distinct peripheral coastal regions such as East Asia, South Asia, Southeast Asia and the Middle East linked by the interior mass of the Eurasian steppe. See History of the Middle East and History of the Indian Subcontinent for further details on those regions.
The coastal periphery was the home to some of the world's earliest known civilizations and religions, with each of three regions developing early civilizations around fertile river valleys. These valleys were fertile because the soil there was rich and could bear many root crops. The civilizations in Mesopotamia, ancient India, and ancient China shared many similarities and likely exchanged technologies and ideas such as mathematics and the wheel. Other innovations, such as writing, probably developed individually in each area. Cities, states, and then empires developed in these lowlands.
The steppe region had long been inhabited by mounted nomads, and from the central steppes, they could reach all areas of the Asian continent. The northern part of the continent, covering much of Siberia, was inaccessible to the steppe nomads due to the dense forests and the tundra. These areas in Siberia were very sparsely populated.
The centre and periphery were kept separate by mountains and deserts. The Caucasus, Himalaya, Karakum Desert, and Gobi Desert formed barriers that the steppe horsemen could only cross with difficulty. While technologically and culturally the city dwellers were more advanced, they could do little militarily to defend against the mounted hordes of the steppe. However, the lowlands did not have enough open grasslands to support a large horsebound force. Thus the nomads who conquered states in the Middle East were soon forced to adapt to the local societies.
The spread of Islam ushered in the Islamic Golden Age and the Timurid Renaissance, which later influenced the age of the Islamic gunpowder empires.
Asia's history features major developments seen in other parts of the world, as well as events that have affected those other regions. These include the trade of the Silk Road, which spread cultures, languages, religions, and diseases throughout Afro-Eurasia. Another major advancement was the innovation of gunpowder in medieval China, later developed by the gunpowder empires, mainly the Mughals and Safavids, which led to advanced warfare through the use of guns.
Prehistory
A report by archaeologist Rakesh Tewari on Lahuradewa, India shows new C14 datings that range between 9000 and 8000 BC associated with rice, making Lahuradewa the earliest Neolithic site in all of South Asia. Settled life emerged on the subcontinent in the western margins of the Indus River alluvium approximately 9,000 years ago, evolving gradually into the Indus Valley Civilisation of the third millennium BC.
Göbekli Tepe is a Neolithic site in the Southeastern Anatolia Region of Turkey. Dated to the Pre-Pottery Neolithic, between 9500 and 8000 BC, the site comprises a number of large circular structures supported by massive stone pillars – the world's oldest known megaliths.
The prehistoric Beifudi site near Yixian in Hebei Province, China, contains relics of a culture contemporaneous with the Cishan and Xinglongwa cultures of about 8000–7000 BC, neolithic cultures east of the Taihang Mountains, filling in an archaeological gap between the two Northern Chinese cultures. The total excavated area is more than 1,200 square meters and the collection of neolithic findings at the site consists of two phases.
Around 5500 BC the Halafian culture appeared in Lebanon, Israel, Syria, Anatolia, and northern Mesopotamia, based upon dryland agriculture.
In southern Mesopotamia were the alluvial plains of Sumer and Elam. Since there was little rainfall, irrigation systems were necessary. The Ubaid culture flourished from 5500 BC.
Ancient
Bronze Age
The Chalcolithic period (or Copper Age) began about 4500 BC, then the Bronze Age began about 3500 BC, replacing the Neolithic cultures.
The Indus Valley civilization (IVC) was a Bronze Age civilization (3300–1300 BC; mature period 2600–1900 BC) which was centered mostly in the western part of the Indian Subcontinent; it is thought that an early form of Hinduism was practised during this civilization. Some of the great cities of this civilization include Harappa and Mohenjo-daro, which had a high level of town planning and arts. The cause of the civilization's decline around 1700 BC is debated, although evidence suggests it was caused by natural disasters (especially flooding). The end of this civilization marks the start of the Vedic period in India, which lasted from roughly 1500 to 500 BC. During this period, the Sanskrit language developed and the Vedas were written, epic hymns that told tales of gods and wars. This was the basis for the Vedic religion, which would eventually develop into Hinduism.
China and Vietnam were also centres of metalworking. Dating back to the Neolithic Age, the first bronze drums, called the Dong Son drums, have been uncovered in and around the Red River Delta regions of Vietnam and Southern China. These relate to the prehistoric Dong Son Culture of Vietnam.
In Ban Chiang, Thailand (Southeast Asia), bronze artifacts have been discovered dating to 2100 BC.
In Nyaunggan, Burma, bronze tools have been excavated along with ceramics and stone artifacts. Dating is still currently broad (3500–500 BC).
Iron and Axial Age
The Iron Age saw the widespread use of iron tools, weaponry, and armor throughout the major civilizations of Asia.
Middle East
The Achaemenid dynasty of the Persian Empire, founded by Cyrus the Great, ruled an area from Greece and Turkey to the Indus River and Central Asia during the 6th to 4th centuries BC. Persian politics included a tolerance for other cultures, a highly centralized government, and significant infrastructure developments. Later, in Darius the Great's rule, the territories were integrated, a bureaucracy was developed, nobility were assigned military positions, tax collection was carefully organized, and spies were used to ensure the loyalty of regional officials. The primary religion of Persia at this time was Zoroastrianism, developed by the philosopher Zoroaster. It introduced an early form of monotheism to the area. The religion banned animal sacrifice and the use of intoxicants in rituals, and introduced the concept of spiritual salvation through personal moral action, an end time, and both general and particular judgment with a heaven or hell. These concepts would heavily influence later emperors and the masses. Zoroastrianism was itself heavily influenced by much older religious beliefs and practices dating to the beginning of known history and before. The Persian Empire was successful in establishing peace and stability throughout the Middle East and was a major influence in art, politics (affecting Hellenistic leaders), and religion.
Alexander the Great conquered this dynasty in the 4th century BC, creating the brief Hellenistic period. He was unable to establish stability and after his death, Persia broke into small, weak dynasties including the Seleucid Empire, followed by the Parthian Empire. By the end of the Classical age, Persia had been reconsolidated into the Sassanid Empire, also known as the second Persian Empire.
The Roman Empire would later control parts of Western Asia. The Seleucid, Parthian and Sassanid dynasties of Persia dominated Western Asia for centuries.
India
The Maurya and Gupta empires are called the Golden Age of India and were marked by extensive inventions and discoveries in science, technology, art, religion, and philosophy that crystallized the elements of what is generally known as Indian culture. The religions of Hinduism and Buddhism, which began in Indian sub-continent, were an important influence on South, East and Southeast Asia.
By 600 BC, India had been divided into 17 regional states that would occasionally feud amongst themselves. In 327 BC, Alexander the Great came to India with a vision of conquering the whole world. He crossed northwestern India and created the province Bactria but could not move further because his army wanted to go back to their families. Shortly after, the soldier Chandragupta Maurya began to take control of the Ganges river and soon established the Maurya Empire. The Maurya Empire (Sanskrit: मौर्य राजवंश, Maurya Rājavaṃśa) was a geographically extensive and powerful empire in ancient India, ruled by the Mauryan dynasty from 321 to 185 BC. It was one of the world's largest empires in its time, stretching to the Himalayas in the north, what is now Assam in the east, probably beyond modern Pakistan in the west, and annexing Balochistan and much of what is now Afghanistan, at its greatest extent. South of the Mauryan empire was the Tamilakam, an independent country dominated by three dynasties, the Pandyans, Cholas and Cheras. The government established by Chandragupta was led by an autocratic king, who primarily relied on the military to assert his power. It also applied the use of a bureaucracy and even sponsored a postal service. Chandragupta's grandson, Ashoka, greatly extended the empire by conquering most of modern-day India (save for the southern tip). He eventually converted to Buddhism, though, and began a peaceful life in which he promoted the religion as well as humane methods throughout India. The Maurya Empire would disintegrate soon after Ashoka's death and was conquered by the Kushan invaders from the northwest, who established the Kushan Empire. Their conversion to Buddhism caused the religion to be associated with foreigners and therefore a decline in its popularity occurred.
The Kushan Empire would fall apart by 220 AD, creating more political turmoil in India. Then in 320, the Gupta Empire (Sanskrit: गुप्त राजवंश, Gupta Rājavanśha) was established and covered much of the Indian Subcontinent. Founded by Maharaja Sri-Gupta, the dynasty was the model of a classical civilization. Gupta kings united the area primarily through negotiation of local leaders and families as well as strategical intermarriage. Their rule covered less land than the Maurya Empire, but established the greatest stability. In 535, the empire ended when India was overrun by the Hunas.
Classical China
Zhou dynasty
Since 1029 BC, the Zhou dynasty had existed in China, and it would continue until 258 BC. The Zhou dynasty had been using a feudal system by giving power to local nobility and relying on their loyalty in order to control its large territory. As a result, the Chinese government at this time tended to be very decentralized and weak, and there was often little the emperor could do to resolve national issues. Nonetheless, the government was able to retain its position with the creation of the Mandate of Heaven, which could establish an emperor as divinely chosen to rule. The Zhou additionally discouraged the human sacrifice of the preceding eras and unified the Chinese language. Finally, the Zhou government encouraged settlers to move into the Yangtze River valley, thus creating the Chinese Middle Kingdom.
But by 500 BC, its political stability began to decline due to repeated nomadic incursions and internal conflict derived from the fighting princes and families. This was lessened by the many philosophical movements, starting with the life of Confucius. His philosophical writings (called Confucianism) concerning the respect of elders and of the state would later be popularly used in the Han dynasty. Additionally, Laozi's concepts of Taoism, including yin and yang and the innate duality and balance of nature and the universe, became popular throughout this period. Nevertheless, the Zhou dynasty eventually disintegrated as the local nobles began to gain more power and their conflict devolved into the Warring States period, from 402 to 201 BC.
Qin dynasty
One leader eventually came out on top: Qin Shi Huang (Shǐ Huángdì), who overthrew the last Zhou emperor and established the Qin dynasty. The Qin dynasty (Chinese: 秦朝; pinyin: Qín Cháo) was the first ruling dynasty of Imperial China, lasting from 221 to 207 BC. The new Emperor abolished the feudal system and directly appointed a bureaucracy that would rely on him for power. Huang's imperial forces crushed any regional resistance, and they furthered the Chinese empire by expanding down to the South China Sea and northern Vietnam. Greater organization brought a uniform tax system, a national census, regulated road building (and cart width), standard measurements, standard coinage, and an official written and spoken language. Further reforms included new irrigation projects, the encouragement of silk manufacturing, and (most famously) the beginning of the construction of the Great Wall of China—designed to keep out the nomadic raiders who constantly harried the Chinese people. However, Shi Huang was infamous for his tyranny, forcing laborers to build the Wall, ordering heavy taxes, and severely punishing all who opposed him. He oppressed Confucians and promoted Legalism, the idea that people were inherently evil, and that a strong, forceful government was needed to control them. Legalism was infused with realistic, logical views and rejected the pleasures of educated conversation as frivolous. All of this made Shi Huang extremely unpopular with the people. As the Qin began to weaken, various factions began to fight for control of China.
Han dynasty
The Han dynasty (206 BC – 220 AD) was the second imperial dynasty of China, preceded by the Qin dynasty and succeeded by the Three Kingdoms (220–265 AD). Spanning over four centuries, the period of the Han dynasty is considered a golden age in Chinese history. One of the Han dynasty's greatest emperors, Emperor Wu of Han, established a peace throughout China comparable to the Pax Romana seen in the Mediterranean a hundred years later. To this day, China's majority ethnic group refers to itself as the "Han people". The Han dynasty was established when two peasants succeeded in rising up against Shi Huang's significantly weaker successor-son. The new Han government retained the centralization and bureaucracy of the Qin, but greatly reduced the repression seen before. They expanded their territory into Korea, Vietnam, and Central Asia, creating an even larger empire than the Qin.
The Han developed contacts with the Persian Empire in the Middle East and the Romans, through the Silk Road, with which they were able to trade many commodities—primarily silk. Many ancient civilizations were influenced by the Silk Road, which connected China, India, the Middle East and Europe. Han emperors like Wu also promoted Confucianism as the national "religion" (although it is debated by theologians as to whether it is defined as such or as a philosophy). Shrines devoted to Confucius were built and Confucian philosophy was taught to all scholars who entered the Chinese bureaucracy. The bureaucracy was further improved with the introduction of an examination system that selected scholars of high merit. These bureaucrats were often upper-class people educated in special schools, but whose power was often checked by the lower-class brought into the bureaucracy through their skill. The Chinese imperial bureaucracy was very effective and highly respected by all in the realm and would last over 2,000 years. The Han government was highly organized and it commanded the military, judicial law (which used a system of courts and strict laws), agricultural production, the economy, and the general lives of its people. The government also promoted intellectual philosophy, scientific research, and detailed historical records.
However, despite all of this impressive stability, central power began to lose control by the turn of the Common Era. As the Han dynasty declined, many factors continued to weaken it until China was left in a state of chaos. By 100 AD, philosophical activity slowed, and corruption ran rampant in the bureaucracy. Local landlords began to take control as the scholars neglected their duties, and this resulted in heavy taxation of the peasantry. Taoists began to gain significant ground and protested the decline. They started to proclaim magical powers and promised to save China with them; the Taoist Yellow Turban Rebellion in 184 (led by rebels in yellow scarves) failed but was able to weaken the government. Invasions by the Huns, combined with diseases, killed up to half of the population and officially ended the Han dynasty by 220. The ensuing period of chaos was so terrible that it lasted for three centuries, during which many weak regional rulers and dynasties failed to establish order in China. This period of chaos and attempts at order is commonly known as that of the Six Dynasties. The first part of this period included the Three Kingdoms, which started in 220 and describes the brief and weak successor "dynasties" that followed the Han. In 265, the Jin dynasty of China was founded, and this soon split into two different empires in control of northwestern and southeastern China. In 420, the conquest and abdication of those two dynasties resulted in the first of the Southern and Northern dynasties. The Northern and Southern dynasties followed one another until finally, by 557, the Northern Zhou dynasty ruled the north and the Chen dynasty ruled the south.
Medieval
During this period, the Eastern world empires continued to expand through trade, migration and conquests of neighboring areas. Gunpowder was widely used in China as early as the 11th century, and the Chinese were using movable type printing five hundred years before Gutenberg created his press. Buddhism, Taoism, and Confucianism were the dominant philosophies of the Far East during the Middle Ages. Marco Polo was not the first Westerner to travel to the Orient and return with amazing stories of this different culture, but his accounts published in the late 13th and early 14th centuries were the first to be widely read throughout Europe.
Western Asia (Middle East)
The Arabian peninsula and the surrounding Middle East and Near East regions saw dramatic change during the Medieval era caused primarily by the spread of Islam and the establishment of the Arabian Empires.
In the 5th century, the Middle East was separated into small, weak states; the two most prominent were the Sassanian Empire of the Persians in what is now Iran and Iraq, and the Byzantine Empire in Anatolia (modern-day Turkey). The Byzantines and Sassanians fought with each other continually, a reflection of the rivalry between the Roman Empire and the Persian Empire seen during the previous five hundred years. The fighting weakened both states, leaving the stage open to a new power. Meanwhile, the nomadic Bedouin tribes who dominated the Arabian desert saw a period of tribal stability, greater trade networking and a familiarity with Abrahamic religions or monotheism.
While the Byzantine Roman and Sassanid Persian empires were both weakened by the Byzantine–Sasanian War of 602–628, a new power in the form of Islam grew in the Middle East under Muhammad in Medina. In a series of rapid Muslim conquests, the Rashidun army, led by the Caliphs and skilled military commanders such as Khalid ibn al-Walid, swept through most of the Middle East, taking more than half of Byzantine territory in the Arab–Byzantine wars and completely engulfing Persia in the Muslim conquest of Persia. It would be the Arab Caliphates of the Middle Ages that would first unify the entire Middle East as a distinct region and create the dominant ethnic identity that persists today. These Caliphates included the Rashidun Caliphate, Umayyad Caliphate, Abbasid Caliphate, and later the Seljuq Empire.
After Muhammad introduced Islam, it propelled Middle Eastern culture into an Islamic Golden Age, inspiring achievements in architecture, the revival of old advances in science and technology, and the formation of a distinct way of life. Muslims preserved and spread Greek advances in medicine, algebra, geometry, astronomy, anatomy, and ethics that would later find their way back to Western Europe.
The dominance of the Arabs came to a sudden end in the mid-11th century with the arrival of the Seljuq Turks, migrating south from the Turkic homelands in Central Asia. They conquered Persia, Iraq (capturing Baghdad in 1055), Syria, Palestine, and the Hejaz. This was followed by a series of invasions from Christian Western Europe. The fragmentation of the Middle East allowed joint forces, mainly from England, France, and the emerging Holy Roman Empire, to enter the region. In 1099 the knights of the First Crusade captured Jerusalem and founded the Kingdom of Jerusalem, which survived until 1187, when Saladin retook the city. Smaller crusader fiefdoms survived until 1291. In the early 13th century, a new wave of invaders, the armies of the Mongol Empire, swept through the region, sacking Baghdad in the Siege of Baghdad (1258) and advancing as far south as the border of Egypt in what became known as the Mongol conquests. The Mongols eventually retreated in 1335, but the chaos that ensued throughout the empire deposed the Seljuq Turks. In 1401, the region was further plagued by the Turko-Mongol conqueror Timur and his ferocious raids. By then, another group of Turks had arisen as well, the Ottomans.
Central Asia
Mongol Empire
The Mongol Empire conquered a large part of Asia in the 13th century, an area extending from China to Europe. Medieval Asia was the kingdom of the Khans. Never before had any person controlled as much land as Genghis Khan. He built his power unifying separate Mongol tribes before expanding his kingdom south and west. He and his grandson, Kublai Khan, controlled lands in China, Burma, Central Asia, Russia, Iran, the Middle East, and Eastern Europe. Genghis Khan was a Khagan who tolerated nearly every religion.
South Asia/Indian Subcontinent
India
The Indian early medieval age, 600 to 1200, is defined by regional kingdoms and cultural diversity. When Harsha of Kannauj, who ruled much of the Indo-Gangetic Plain from 606 to 647, attempted to expand southwards, he was defeated by the Chalukya ruler of the Deccan. When his successor attempted to expand eastwards, he was defeated by the Pala king of Bengal. When the Chalukyas attempted to expand southwards, they were defeated by the Pallavas from farther south, who in turn were opposed by the Pandyas and the Cholas from still farther south. Under the rule of Raja Raja Chola, the Cholas defeated their rivals and rose to become a regional power. The Cholas expanded northward and defeated the Eastern Chalukyas, Kalinga, and the Palas. Under Rajendra Chola, the Cholas created the first notable navy of the Indian subcontinent, which extended the influence of the Chola empire to Southeast Asia. During this time, pastoral peoples whose land had been cleared to make way for the growing agricultural economy were accommodated within caste society, as were new non-traditional ruling classes.
The Muslim conquest in the Indian subcontinent mainly took place from the 12th century onwards, though earlier Muslim conquests include the limited inroads into modern Afghanistan and Pakistan and the Umayyad campaigns in India, during the time of the Rajput kingdoms in the 8th century.
Major economic and military powers, such as the Delhi Sultanate and the Bengal Sultanate, were established. The search for their wealth led to the voyages of Christopher Columbus.
The Vijayanagara Empire, based in the Deccan Plateau region of South India, was established in 1336 by the brothers Harihara I and Bukka Raya I of the Sangama dynasty, patronized by saint Vidyaranya, the 12th Shankaracharya of Sringeri in Karnataka. The empire rose to prominence as a result of attempts by the southern powers to resist and ward off Turkic Islamic invasions by the end of the 13th century. At its peak, it subjugated almost all of South India's rulers and pushed the sultans of the Deccan beyond the Tungabhadra-Krishna river region. After annexing modern-day Odisha (ancient Kalinga) from the Gajapati Empire, it became a notable power. The empire lasted until 1646, although its power declined after a major military defeat in the Battle of Talikota in 1565 by the combined armies of the Deccan sultanates.
East Asia
China
China saw the rise and fall of the Sui, Tang, Song, and Yuan dynasties, and with them improvements in its bureaucracy, the spread of Buddhism, and the advent of Neo-Confucianism. It was an unsurpassed era for Chinese ceramics and painting. Medieval architectural masterpieces such as the Great South Gate at Todaiji in Japan and the Tien-ning Temple in Peking, China, are among the surviving constructions from this era.
Sui dynasty
A new powerful dynasty began to rise in the 580s, amongst the divided factions of China. This began when an aristocrat named Yang Jian married his daughter into the Northern Zhou dynasty. He proclaimed himself Emperor Wen of Sui and appeased the nomadic military by abandoning the Confucian scholar-gentry. Emperor Wen soon led the conquest of the southern Chen dynasty and united China once more under the Sui dynasty. The emperor lowered taxes and constructed granaries that he used to prevent famine and control the market. Later Wen's son murdered him for the throne and declared himself Emperor Yang of Sui. Emperor Yang revived the Confucian scholars and the bureaucracy, much to the anger of the aristocrats and nomadic military leaders. Yang became an excessive leader who overused China's resources for personal luxury and perpetuated exhaustive attempts to conquer Goguryeo. His military failures and neglect of the empire forced his own ministers to assassinate him in 618, ending the Sui dynasty.
Tang dynasty
Fortunately, one of Yang's most respected advisors, Li Yuan, was able to claim the throne quickly, preventing a chaotic collapse. He proclaimed himself Emperor Gaozu and established the Tang dynasty in 618. The Tang saw the expansion of China through conquest to Tibet in the west, Vietnam in the south, and Manchuria in the north. Tang emperors also improved the education of scholars in the Chinese bureaucracy. A Ministry of Rites was established and the examination system was improved to better qualify scholars for their jobs. In addition, Buddhism became popular in China, with two different strains between the peasantry and the elite, the Pure Land and Zen strains, respectively. Greatly supporting the spread of Buddhism was Empress Wu, who additionally claimed an unofficial "Zhou dynasty" and demonstrated China's tolerance of a woman ruler, which was rare at the time. However, Buddhism would also experience some backlash, especially from Confucianists and Taoists. This usually involved criticism of how it was costing the state money, since the government was unable to tax Buddhist monasteries and additionally sent many grants and gifts to them.
The Tang dynasty began to decline under the rule of Emperor Xuanzong, who neglected the economy and the military and caused unrest amongst court officials through the excessive influence of his concubine, Yang Guifei, and her family. This eventually sparked a revolt in 755. Although the revolt failed, subduing it required calling in the unruly nomadic tribes outside of China and distributing more power to local leaders, leaving the government and economy in a degraded state. The Tang dynasty officially ended in 907, and various factions led by the aforementioned nomadic tribes and local leaders fought for control of China in the Five Dynasties and Ten Kingdoms period.
Liao, Song and Jin dynasties
By 960, most of China proper had been reunited under the Song dynasty, although it lost territories in the north and could not defeat one of the nomadic powers there, the Liao dynasty of the highly sinicized Khitan people. From then on, the Song had to pay tribute to avoid invasion, setting a precedent for other nomadic kingdoms to press similar demands. The Song also saw the revival of Confucianism in the form of Neo-Confucianism, which placed Confucian scholars at a higher status than aristocrats or Buddhists and further reduced the status of women; the infamous practice of foot binding developed in this period as a result. Eventually the Liao dynasty in the north was overthrown by the Jin dynasty of the Manchu-related Jurchen people. The new Jin kingdom invaded northern China, leaving the Song to flee farther south and create the Southern Song dynasty in 1127. There, cultural life flourished.
Yuan dynasty
By 1227, the Mongols had conquered the Western Xia kingdom northwest of China. Soon the Mongols incurred upon the Jin empire of the Jurchens. Chinese cities were besieged by Mongol hordes that showed little mercy to those who resisted, and the Southern Song Chinese rapidly lost territory. In 1271 the reigning great khan, Kublai Khan, proclaimed himself Emperor of China and officially established the Yuan dynasty. By 1279, all of China was under Mongol control, marking the first time the country had been completely conquered by a foreign invader; the new capital was established at Khanbaliq (modern-day Beijing). Kublai Khan segregated Mongol culture from Chinese culture by discouraging interactions between the two peoples, separating living spaces and places of worship, and reserving top administrative positions for Mongols, thus preventing Confucian scholars from continuing the bureaucratic system. Nevertheless, Kublai remained fascinated with Chinese thought, surrounding himself with Chinese Buddhist, Taoist, and Confucian advisors.
Mongol women displayed a strikingly independent nature compared to the Chinese women, who continued to be suppressed. Mongol women often rode out on hunts or even to war. Kublai's wife, Chabi, was a prime example: she advised her husband on several political and diplomatic matters, convincing him that the Chinese had to be respected and well treated in order to be ruled easily. This was not enough to improve Chinese women's position, however, and the increasingly Neo-Confucian successors of Kublai further repressed Chinese and even Mongol women.
The Black Death, which would later ravage Western Europe, had its beginnings in Asia, where it wiped out large populations in China in 1331.
Japan
Asuka period
Japan's medieval history began with the Asuka period, from around 600 to 710. The period was characterized by the Taika Reform and imperial centralization, both direct results of growing contact with and influence from China. In 603, Prince Shōtoku of the Yamato dynasty began significant political and cultural changes. He issued the Seventeen-article constitution in 604, centralizing power in the emperor (under the title tenno, or heavenly sovereign) and removing the power to levy taxes from provincial lords. Shōtoku was also a patron of Buddhism and encouraged the competitive building of temples.
Nara period
Shōtoku's reforms transitioned Japan into the Nara period (c. 710 to c. 794), with the moving of the Japanese capital to Nara on Honshu. This period saw the culmination of Chinese-style writing, etiquette, and architecture in Japan, along with Confucian ideals supplementing the already present Buddhism. Peasants revered both Confucian scholars and Buddhist monks. In the wake of the 735–737 Japanese smallpox epidemic, however, Buddhism gained the status of a state religion, and the government ordered the construction of numerous Buddhist temples, monasteries, and statues. The lavish spending, combined with the fact that many aristocrats did not pay taxes, put a heavy burden on the peasantry and caused poverty and famine. Eventually the Buddhist establishment grew so powerful that it threatened to seize imperial authority, prompting Emperor Kanmu to move the capital to Heian-kyō to avoid a Buddhist takeover. This marked the beginning of the Heian period and the end of the Taika reforms.
Heian period
With the Heian period (from 794 to 1185) came a decline of imperial power. Chinese influence also declined as a result of its association with imperial centralization and the heavenly mandate, which came to be regarded as ineffective. By 838, the Japanese court had discontinued its embassies to China; only traders and Buddhist monks continued to travel there. Buddhism itself came to be considered more Japanese than Chinese and remained popular in Japan. Buddhist monks and monasteries, along with aristocrats, continued their attempts to gather personal power at court. One noble family that came to dominate the imperial bureaucracy was the Fujiwara clan. During this time, cultural life in the imperial court flourished: there was a focus on beauty and social interaction, and writing and literature were considered refined. Noblewomen were as cultured as noblemen, dabbling in creative works and politics. A prime example of both Japanese literature and women's role in high-class culture at this time was The Tale of Genji, written by the lady-in-waiting Murasaki Shikibu. Wooden palaces and shōji sliding doors also became popular amongst the nobility.
Loss of imperial power also led to the rise of provincial warrior elites. Small lords began to function independently: they administered laws, supervised public works projects, and collected revenue for themselves instead of for the imperial court. Regional lords also began to build their own armies. These warriors were loyal only to their local lords and not to the emperor, although the imperial government increasingly called them in to protect the capital. The regional warrior class developed into the samurai, who created their own culture, including specialized weapons such as the katana and a form of chivalry, bushido. The imperial government's loss of control in the second half of the Heian period allowed banditry to grow, requiring both feudal lords and Buddhist monasteries to procure warriors for protection. As imperial control over Japan declined, feudal lords also became more independent and seceded from the empire. These feudal states exploited the peasants living in them, reducing the farmers to near-serfdom. Peasants were also rigidly restricted from rising to the samurai class, being physically set apart by dress and weapon restrictions. As a result of their oppression, many peasants turned to Buddhism in hope of reward in the afterlife for upright behavior.
With the increase of feudalism, families in the imperial court began to depend on alliances with regional lords. The Fujiwara clan declined from power, replaced by a rivalry between the Taira and Minamoto clans. This rivalry grew into the Genpei War in the early 1180s, which saw the use of both samurai and peasant soldiers. For the samurai, battle was ritual, and they often easily cut down the poorly trained peasantry. The Minamoto clan proved successful thanks to its rural alliances. Once the Taira were destroyed, the Minamoto established a military government called the shogunate (or bakufu), centered at Kamakura.
Kamakura period
The end of the Genpei War and the establishment of the Kamakura shogunate marked the end of the Heian period and the beginning of the Kamakura period in 1185, solidifying feudal Japan.
Korea
Three Kingdoms of Korea
The Three Kingdoms of Korea were Goguryeo in the north, Baekje in the southwest, and Silla in the southeast of the Korean peninsula. The three kingdoms acted as a cultural bridge between China and Japan; Prince Shōtoku of Japan, for instance, had been taught by two teachers, one from Baekje and one from Goguryeo. When Japan invaded Silla, Goguryeo helped Silla repel the invasion. Baekje reached its heyday first, in the 5th century AD, with its capital at modern-day Seoul. During this heyday the kingdom established footholds overseas, in Liaodong in China and Kyushu in Japan. Goguryeo was the strongest of the three kingdoms, sometimes styling itself an empire; its heyday came in the 6th century. King Gwanggaeto expanded its territory northward, so that Goguryeo dominated lands from the Korean peninsula into Manchuria, and his son, King Jangsu, expanded southward, capturing Seoul and moving the capital to Pyeongyang. Thanks to Jangsu's southward expansion, Goguryeo came to control about three quarters of the Korean peninsula. Silla's heyday came last: King Jinheung drove north and occupied Seoul, but only briefly. Baekje then grew stronger, attacked Silla, and seized more than 40 of its cities, leaving Silla barely able to survive.
China's Sui dynasty invaded Goguryeo, beginning the Goguryeo–Sui War. Goguryeo won, and the Sui dynasty soon fell. The succeeding Tang dynasty invaded Goguryeo again and allied with Silla to unify the peninsula. Goguryeo, Baekje, and Japan aided one another against the Tang–Silla alliance, but Baekje and Goguryeo both fell. The Tang then betrayed Silla and invaded the peninsula in an attempt to seize all of it (the Silla–Tang War). Because Silla had advocated the unification of the three Korean kingdoms, the peoples of fallen Baekje and Goguryeo joined it against the Chinese invasion, and Silla eventually defeated the Tang and unified the peninsula. The war fostered a lasting sense of unity among the Korean people.
North-South States Period
Survivors of Goguryeo established Balhae, which won its war against the Tang in the late 7th century AD. In this North-South States Period, Balhae was the northern state and Later Silla the southern. Balhae proved a strong kingdom, as its ancestor Goguryeo had been, and the Tang emperor eventually acknowledged it as "a strong country in the East". It traded actively with Japan, China, and Silla, and both Balhae and Later Silla sent many students to China. Arabian merchants also came to the Korean peninsula, through whom Korea became known in the West as "Silla". Silla refined a Korean writing system called Idu, which influenced the Japanese katakana. Balhae fell when the Liao dynasty invaded in the early 10th century.
Later Three Kingdoms of Korea
The unified kingdom of Later Silla divided into three kingdoms once more because of its corrupt central government: Later Goguryeo (also known as "Taebong"), Later Baekje, and Later Silla. Wang Geon, a general of Later Goguryeo, took the throne and renamed the kingdom Goryeo, a name derived from the ancient kingdom of Goguryeo, and Goryeo reunified the peninsula.
Goryeo
Goryeo reunified the Korean peninsula during the Later Three Kingdoms period and styled itself an empire, although it is known today as a kingdom. The name "Goryeo" derived from Goguryeo, and the modern name "Korea" in turn derives from Goryeo. Goryeo absorbed refugees from fallen Balhae and expanded its territory northward by fending off the Liao dynasty and attacking the Jurchen people. Goryeo developed a splendid culture: Jikji, the oldest surviving book printed with movable metal type, came from Korea, and Goryeo ware celadon is one of the kingdom's most famous legacies. Goryeo imported the Chinese system of government and developed it in its own way.
During this period, laws were codified and a civil service system was introduced. Buddhism flourished and spread throughout the peninsula. The Tripitaka Koreana, a canon carved onto 81,258 printing blocks, was made as a prayer to keep Korea safe from the Mongol invasions; it is now a UNESCO World Heritage item. Goryeo defeated invasions by the Liao dynasty, but when the Mongol Empire invaded, Goryeo survived only by submitting to Mongol overlordship. After about 80 years, in the 14th century, as the Mongol Yuan dynasty lost power, King Gongmin tried to free Goryeo from Mongol control, although his own wife was Mongolian. Later in the 14th century, the Ming dynasty demanded Goryeo's submission; Goryeo refused and resolved to invade China instead. On the march toward China, however, the Goryeo general Yi Seong-gye turned back, overthrew Goryeo, and in 1392 established a new dynasty, Joseon, becoming Taejo of Joseon, its first king.
Southeast Asia
Khmers
In 802, Jayavarman II consolidated his rule over neighboring peoples and declared himself chakravartin, or "universal ruler". The Khmer Empire effectively dominated all of Mainland Southeast Asia from the early 9th until the 15th century, during which time it developed a sophisticated monumental architecture of exquisite expression and mastery of composition at Angkor.
Vietnam
The history of Vietnam can be traced back some 20,000 years, when the first modern humans, known as the Hoabinhians, arrived and settled the land; they have been linked to the ancestors of the modern-day Negritos. Archaeological findings from 1965, still under study, include the remains of two hominins closely related to Sinanthropus, dating to the Middle Pleistocene, roughly half a million years ago.
Prehistoric Vietnam was home to some of the world's earliest civilizations and societies, and its people were among the world's first to practice agriculture. The Red River valley formed a natural geographic and economic unit, bounded to the north and west by mountains and jungles, to the east by the sea, and to the south by the Red River Delta. The need for a single authority to prevent floods of the Red River, to cooperate in constructing hydraulic systems, to exchange trade, and to repel invaders led to the creation of the first legendary Vietnamese states in approximately 2879 BC. Ongoing archaeological research has further suggested that the Vietnamese Đông Sơn culture is traceable to northern Vietnam, Guangxi, and Laos around 700 BC.
Vietnam's long, narrow coastal lands and rugged mountainous terrain, with two major deltas, were soon home to several different ancient cultures and civilizations. In the north, the Đông Sơn culture and its indigenous chiefdoms of Văn Lang and Âu Lạc began to flourish by 500 BC. In the centre, the Sa Huỳnh culture of the Austronesian Chamic peoples also thrived. Both were swept up in the Han dynasty's expansion from the north: the Han conquest of Nanyue brought parts of Vietnam under Chinese rule in 111 BC. Classical Chinese became the official script, and the independent Nôm script for Vietnamese developed later.
In 40 AD, the Trưng Sisters led the first uprising of indigenous tribes and peoples against Chinese domination. The rebellion was defeated, but as the Han dynasty weakened in the late 2nd century and China descended into turmoil, the indigenous peoples of Vietnam rose again and some won their freedom. In 192 AD, the Chams of central Vietnam revolted against the Chinese and founded the independent Kingdom of Champa, while the Red River Delta saw a loosening of northern control. With the introduction of Buddhism and Hinduism by the second century AD, Vietnam became the first place in Southeast Asia to share the influences of both Indian and Chinese cultures, and saw the rise of the first Indianized kingdoms, Champa and Funan.
During these 1,000 years there were many uprisings against Chinese domination, and at certain periods Vietnam was independently governed under the Trưng Sisters, Early Lý, Khúc and Dương Đình Nghệ—although their triumphs and reigns were temporary.
When Ngô Quyền (Emperor of Vietnam, 938–944) restored sovereign power to the country with his victory at the Battle of Bạch Đằng River (938), the next millennium was advanced by the accomplishments of successive local dynasties: Ngô, Đinh, Early Lê, Lý, Trần, Hồ, Later Trần, Later Lê, Mạc, Revival Lê, Tây Sơn, and Nguyễn. The Nôm script (Chữ Nôm) of the Vietnamese began to develop and grow more sophisticated, with literature written and published in Nôm. At various points during the imperial dynasties, Vietnam was ravaged and divided by civil wars and witnessed interventions by the Song, Yuan, Cham, Ming, Siamese, Qing, French, and the Empire of Japan.
The Ming Empire conquered the Red River valley for a time before native Vietnamese regained control, and the French Empire reduced Vietnam to a French dependency for nearly a century, followed by a brief but brutal occupation by the Japanese Empire. The French period brought widespread brutality and inequality, and cultural remnants of Hán-Nôm were destroyed as the French, from the 1880s, sought to rid the Vietnamese of their Confucian legacy. French was the official language during this period. The Vietnamese Latin script, a Latin-based transcription of the language once written in Hán-Nôm, superseded the Hán-Nôm logographic scripts and has been the main written form of Vietnamese since the 20th century.
Japan invaded in 1940, creating deep resentment that fuelled resistance to the post-World War II military and political efforts of the returning colonial power, France, and of the United States, which viewed itself as a fighter for liberty and democracy against the red wave of communism. In the Vietnam War, the United States and the Western Bloc supported South Vietnam, while the Soviet Union and the Eastern Bloc supported North Vietnam. Political upheaval and a period of intense fighting, followed by communist insurrection and victory, ended the monarchy after World War II, and the country was proclaimed a socialist republic. Vietnam suffered heavy sanctions as well as political and economic isolation following brutal wars with China and Cambodia in the successive years. Afterwards, the Đổi Mới (renovation/innovation) reforms were enacted, and the forces of market liberalisation and globalisation have shaped Vietnam's economic and political circumstances since.
Early modern
The Russian Empire began to expand into Asia from the 17th century, and would eventually take control of all of Siberia and most of Central Asia by the end of the 19th century. The Ottoman Empire controlled Anatolia, the Middle East, North Africa and the Balkans from the 16th century onwards. In the 17th century, the Manchu conquered China and established the Qing dynasty. In the 16th century, the Mughal Empire controlled much of India and initiated the second golden age for India. China was the largest economy in the world for much of the time, followed by India until the 18th century.
Ming China
By 1368, Zhu Yuanzhang had proclaimed himself the Hongwu Emperor and established the Ming dynasty of China. Immediately, the new emperor and his followers drove the Mongols and their culture out of China and beyond the Great Wall. The new emperor was somewhat suspicious of the scholars who dominated China's bureaucracy, for he had been born a peasant and was uneducated. Nevertheless, Confucian scholars were necessary to China's bureaucracy, and they were reinstated along with reforms that improved the examination system and made it more important than ever for entering the bureaucracy. The exams became more rigorous, cheating was cut down harshly, and those who excelled were more highly regarded. Finally, Hongwu also concentrated more power in the role of the emperor so as to end the corrupting influence of the bureaucrats.
Society and economy
The Hongwu Emperor, perhaps out of sympathy for the common folk, built many irrigation systems and other public projects that helped the peasant farmers. Peasants were also allowed to cultivate and claim unoccupied land without paying taxes on it, and labor demands were lowered. However, none of this could stop the rising landlord class, which gained many privileges from the government and slowly took control of the peasantry: moneylenders foreclosed on peasant debts and bought up farmland, forcing farmers to become the landlords' tenants or to wander elsewhere for work. Also during this time, Neo-Confucianism intensified even beyond the previous two dynasties (the Song and Yuan). Emphasis on the superiority of elders over youth, men over women, and teachers over students resulted in discrimination against the "inferior" classes. The fine arts grew in the Ming era, with improved techniques in brush painting depicting scenes of court, city, or country life; people such as scholars or travelers; and the beauty of mountains, lakes, and marshes. The Chinese novel fully developed in this era, with such classics as Water Margin, Journey to the West, and Jin Ping Mei.
The economy grew rapidly in the Ming dynasty as well. The introduction of American crops such as maize, sweet potatoes, and peanuts allowed cultivation of infertile land and helped prevent famine. The population boom that had begun in the Song dynasty accelerated, and China's population grew from 80 or 90 million to 150 million over three centuries, culminating around 1600. This paralleled a market economy growing both internally and externally. Silk, tea, ceramics, and lacquer-ware were produced by artisans and traded across Asia and with Europeans, who began to trade (within limits assigned by the Chinese) primarily through the port towns of Macau and Canton. Although merchants benefited greatly from this trade, land remained the primary symbol of wealth in China, and traders' riches were often spent on acquiring more land. Little of this wealth was therefore invested in the private enterprise that might have allowed China to develop the kind of market economy that accompanied the rise of the highly successful Western countries.
Foreign interests
In the interest of national glory, the Chinese began sending impressive junk ships across the South China Sea and the Indian Ocean. Between 1405 and 1433, the Yongle Emperor commissioned expeditions led by the admiral Zheng He, a Muslim eunuch from China. Chinese junks carrying hundreds of soldiers, goods, and animals for zoos traveled to Southeast Asia, Persia, southern Arabia, and east Africa to display Chinese power. Their prowess exceeded that of contemporary Europeans, and had these expeditions not ended, the world economy might look different today. In 1433, however, the Chinese government decided that the cost of a navy was an unnecessary expense; the navy was slowly dismantled, and focus shifted to interior reform and military defense. Protecting itself from nomads had long been China's priority, and it now returned to that concern. The dwindling navy would later leave China vulnerable to foreign invasion by sea.
Inevitably, Westerners arrived on the Chinese east coast, primarily Jesuit missionaries, who reached the mainland in 1582. They attempted to convert the Chinese to Christianity by first converting the top of the social hierarchy, expecting the lower classes to follow. To gain further support, many Jesuits adopted Chinese dress, customs, and language. Some Chinese scholars were interested in certain Western teachings and especially in Western technology. By the 1580s, Jesuit scholars like Matteo Ricci and Adam Schall amazed the Chinese elite with technological advances such as European clocks, improved calendars and cannons, and the accurate prediction of eclipses. Although some of the scholar-gentry converted, many were suspicious of the Westerners, whom they called "barbarians", and resented the embarrassment of being corrected by them. Nevertheless, a small group of Jesuit scholars remained at court to impress the emperor and his advisors.
Decline
Near the end of the 1500s, the extremely centralized government that gave so much power to the emperor began to fail as increasingly incompetent rulers took the mantle. Along with these weak rulers came increasingly corrupt officials who took advantage of the decline. Once more, public works fell into disrepair through bureaucratic neglect, and floods, drought, and famine rocked the peasantry. The famine became so terrible that some peasants sold their children into slavery to save them from starvation, or resorted to eating bark, the feces of geese, or other people. Many landlords abused the situation by building large estates on which desperate farmers worked and were exploited. In turn, many of these farmers resorted to flight, banditry, and open rebellion.
All of this corresponded with the usual pattern of Chinese dynastic decline, as well as growing foreign threats. In the mid-16th century, Japanese and ethnic Chinese pirates began to raid the southern coast, and neither the bureaucracy nor the military could stop them. The threat of the northern Manchu people also grew. The Manchus were already a large state north of China when, in the early 17th century, a local leader named Nurhaci united them under the Eight Banners, armies into which the rival clans were organized. The Manchus adopted many Chinese customs, notably modeling their bureaucracy on China's, yet still remained nominally a Chinese vassal. By 1644 the Chinese administration had become so weak that the sixteenth and last Ming emperor, the Chongzhen Emperor, did not respond to the severity of an ensuing rebellion until the rebels had invaded the Forbidden City, his personal palace. He soon hanged himself in the imperial gardens. For a brief time the Shun dynasty was proclaimed, until a loyalist Ming official called on the Manchus for support to put it down. The Shun dynasty ended within a year, and the Manchus were now within the Great Wall. Taking advantage of the situation, they marched on the Chinese capital of Beijing, and within two decades all of China belonged to the Manchus and the Qing dynasty was established.
Korea: Joseon dynasty (1392–1897)
In early-modern Korea, the nearly 500-year-old kingdom of Goryeo fell, and the new Joseon dynasty rose on 5 August 1392. Taejo of Joseon changed the country's name from Goryeo to Joseon. Sejong the Great created Hangul, the modern Korean alphabet, in 1443; the Joseon dynasty likewise saw several advances in science and technology, including sundials, water clocks, rain gauges, star maps, and detailed records of small Korean villages. The ninth king, Seongjong, completed the first comprehensive Korean law code in 1485, and culture and the people's lives improved accordingly.
In 1592, Japan under Toyotomi Hideyoshi invaded Korea in what is known as the Imjin War. Joseon, having enjoyed a long peace, was unprepared, lost battle after battle, and saw the Japanese army capture Seoul; the whole peninsula was in danger. But Yi Sun-sin, Korea's most renowned admiral, repeatedly defeated the Japanese fleet off the southern coast; in the most famous engagement, the Battle of Myeongnyang, he prevailed with 13 ships against 133. With aid from the Ming dynasty, Joseon repelled the invasion; Hideyoshi's Korean campaign failed, and the Tokugawa shogunate later arose in Japan. The Imjin War left Korea badly scarred. Not long afterwards, the Manchus invaded Joseon in the Qing invasions of Joseon. The first invasion was to secure the Qing rear: Qing was at war with Ming, and Ming's alliance with Joseon was a threat. The second invasion was to force Joseon's submission. After losing the second war, Joseon had to submit to Qing, which went on to defeat Ming and take all of China.
After the Qing invasions, princes of the Joseon dynasty spent their childhoods in China. The son of King Injo met Adam Schall in Beijing and hoped to introduce Western technology to the Korean people once he became king, but he died before he could take the throne. Another prince instead became Hyojong, the 17th king of the Joseon dynasty, and sought revenge on Qing for his kingdom and for the fallen Ming dynasty. Later kings such as Yeongjo and Jeongjo tried to improve their people's lives and to curb unreasonable competition among officials. From the 17th to the 18th century, Joseon sent missions of diplomats and artists to Japan more than ten times. These missions, called "Tongshinsa", were sent to transmit advanced Korean culture to Japan, and Japanese elites prized poems from Korean nobles; at that time Korea was more powerful than Japan. The relationship reversed after the 19th century, however, when Japan became more powerful than both Korea and China, and Joseon began sending missions called "Sooshinsa" to Japan to learn its modern technologies. After King Jeongjo's death, a few noble families controlled the whole kingdom in the early 19th century. Toward the end of that period, Western powers made armed incursions into Joseon. In 1876, Joseon was declared free of its obligations to Qing; this pleased the Japanese Empire, since a formally independent Joseon was easier for Japan to intervene in. Joseon afterwards opened trade with the United States and sent missions abroad: "Sooshinsa" to Japan, "Youngshinsa" to Qing, and "Bobingsa" to the US and Europe, which brought many modern innovations back to the Korean peninsula.
Japan: Tokugawa or Edo period (1603–1867)
In early-modern Japan following the Sengoku period of "warring states", central government had been largely reestablished by Oda Nobunaga and Toyotomi Hideyoshi during the Azuchi–Momoyama period. After the Battle of Sekigahara in 1600, central authority fell to Tokugawa Ieyasu who completed this process and received the title of shōgun in 1603.
Society in the Japanese "Tokugawa period" (see Edo society), unlike in the shogunates before it, was based on the strict class hierarchy originally established by Toyotomi Hideyoshi. The daimyōs (feudal lords) were at the top, followed by the warrior caste of samurai, with the farmers, artisans, and merchants ranking below. Under the sakoku policy, the country was strictly closed to foreigners, with few exceptions. Literacy rose during the two centuries of isolation.
In some parts of the country, particularly smaller regions, daimyōs and samurai were more or less identical, since daimyōs might be trained as samurai, and samurai might act as local lords. Otherwise, the largely inflexible nature of this social stratification system unleashed disruptive forces over time. Taxes on the peasantry were set at fixed amounts which did not account for inflation or other changes in monetary value. As a result, the tax revenues collected by the samurai landowners were worth less and less over time. This often led to numerous confrontations between noble but impoverished samurai and well-to-do peasants. None, however, proved compelling enough to seriously challenge the established order until the arrival of foreign powers.
India
In the Indian subcontinent, the Mughal Empire ruled most of India in the early 18th century. During the reigns of Emperor Shah Jahan and his son Aurangzeb, who ruled under Islamic sharia, the empire reached its architectural and economic zenith and became the world's largest economy, worth over 25% of world GDP. By the mid-18th century it was a major proto-industrializing region.
Following major events such as Nader Shah's invasion of the Mughal Empire, the Battle of Plassey, the Battle of Buxar, and the long Anglo-Mysore Wars, most of South Asia was colonised and governed by the British Empire, which established the British Raj. The Mughal "classic period" ended with the death of Emperor Aurangzeb, although the dynasty continued for another 150 years. During this period, the empire was marked by a highly centralized administration connecting its different regions. All the significant monuments of the Mughals, their most visible legacy, date to this period, which was characterised by the expansion of Persian cultural influence in the Indian subcontinent, with brilliant literary, artistic, and architectural results. The Maratha Empire, located in the southwest of present-day India, expanded greatly under the rule of the Peshwas, the empire's prime ministers. In 1761, the Maratha army lost the Third Battle of Panipat to Ahmad Shah Durrani, king of Afghanistan, which halted imperial expansion; the empire was then divided into a confederacy of Maratha states.
British and Dutch colonization
The European economic and naval powers pushed into Asia, first to trade and then to take over major colonies. The Dutch led the way, followed by the British. Portugal had arrived first but was too weak to maintain its small holdings and was largely pushed out, retaining only Goa and Macau. The British set up a private organization, the East India Company, which handled both trade and imperial control of much of India.
The commercial colonization of India commenced in 1757, after the Battle of Plassey, when the Nawab of Bengal surrendered his dominions to the British East India Company, in 1765, when the company was granted the diwani, or the right to collect revenue, in Bengal and Bihar, or in 1772, when the company established a capital in Calcutta, appointed its first Governor-General, Warren Hastings, and became directly involved in governance.
The Maratha states, following the Anglo-Maratha wars, finally lost to the British East India Company in 1818 with the Third Anglo-Maratha War. Company rule lasted until 1858 when, after the Indian Rebellion of 1857 and under the Government of India Act 1858, the British government assumed the task of directly administering India in the new British Raj. In 1819 Stamford Raffles established Singapore as a key trading post for Britain in its rivalry with the Dutch. The rivalry cooled in 1824, however, when an Anglo-Dutch treaty demarcated their respective interests in Southeast Asia. From the 1850s onwards, the pace of colonization shifted to a significantly higher gear.
The Dutch East India Company (1800) and the British East India Company (1858) were dissolved by their respective governments, which took over the direct administration of the colonies. Only Thailand was spared the experience of foreign rule, although Thailand itself was greatly affected by the power politics of the Western powers. Colonial rule had a profound effect on Southeast Asia: while the colonial powers profited much from the region's vast resources and large market, colonial rule did develop the region to a varying extent.
Late modern
Central Asia: The Great Game, Russia vs Great Britain
The Great Game was a political and diplomatic confrontation between Great Britain and Russia over Afghanistan and neighbouring territories in Central and South Asia. It lasted from 1828 to 1907. There was no war, but there were many threats. Russia was fearful of British commercial and military inroads into Central Asia, and Britain was fearful of Russia threatening its largest and most important possession, India. This resulted in an atmosphere of distrust and the constant threat of war between the two empires. Britain made it a high priority to protect all the approaches to India, and the "great game" is primarily how the British did this in terms of a possible Russian threat. Historians with access to the archives have concluded that Russia had no plans involving India, as the Russians repeatedly stated.
The Great Game began in 1838 when Britain decided to gain control over the Emirate of Afghanistan and make it a protectorate, and to use the Ottoman Empire, the Persian Empire, the Khanate of Khiva, and the Emirate of Bukhara as buffer states between the two empires. This would protect India and key British sea trade routes by stopping Russia from gaining a port on the Persian Gulf or the Indian Ocean. Russia proposed Afghanistan as the neutral zone, and the final result was the division of Afghanistan, with a neutral zone in the middle between Russian areas in the north and British areas in the south. Important episodes included the failed First Anglo-Afghan War of 1838, the First Anglo-Sikh War of 1845, the Second Anglo-Sikh War of 1848, the Second Anglo-Afghan War of 1878, and the annexation of Kokand by Russia. The 1901 novel Kim by Rudyard Kipling made the term popular and introduced the new implication of great-power rivalry. The term became even more popular after the 1979 advent of the Soviet–Afghan War.
Qing China
By 1644, the northern Manchu people had conquered the Ming dynasty and established a foreign dynasty, the Qing, once more. The Manchu Qing emperors, especially the Kangxi Emperor, himself a Confucian scholar, remained largely conservative, retaining the bureaucracy and the scholars within it, as well as the Confucian ideals present in Chinese society. However, changes in the economy and new attempts at resolving certain issues occurred too. These included increased trade with Western countries, which brought large amounts of silver into the Chinese economy in exchange for tea, porcelain, and silk textiles, allowing a new merchant class, the compradors, to develop. In addition, repairs were made to existing dikes, canals, roadways, and irrigation works. This, combined with lower taxes and reduced government-assigned labor, was supposed to calm peasant unrest. However, the Qing failed to control the growing landlord class, which had begun to exploit the peasantry and abuse its position.
By the late 18th century, both internal and external issues began to arise in Qing China's politics, society, and economy. The exam system with which scholars were assigned into the bureaucracy became increasingly corrupt; bribes and other forms of cheating allowed for inexperienced and inept scholars to enter the bureaucracy and this eventually caused rampant neglect of the peasantry, military, and the previously mentioned infrastructure projects. Poverty and banditry steadily rose, especially in rural areas, and mass migrations looking for work throughout China occurred. The perpetually conservative government refused to make reforms that could resolve these issues.
Opium War
China saw its status reduced by what it perceived as parasitic trade with Westerners. Originally, European traders were at a disadvantage because the Chinese cared little for their goods, while European demand for Chinese commodities such as tea and porcelain only grew. In order to tip the trade imbalance in their favor, British merchants began to sell Indian opium to the Chinese. Not only did this sap Chinese bullion reserves, it also led to widespread drug addiction amongst the bureaucracy and society in general. A ban was placed on opium as early as 1729 by the Yongzheng Emperor, but little was done to enforce it. By the early 19th century, under the new Daoguang Emperor, the government began serious efforts to eradicate opium from Chinese society. Leading this endeavour were respected scholar-officials including Imperial Commissioner Lin Zexu.
After Lin destroyed more than 20,000 chests of opium in the summer of 1839, Europeans demanded compensation for what they saw as unwarranted Chinese interference in their affairs. When it was not paid, the British declared war later the same year, starting what became known as the First Opium War. The outdated Chinese junks were no match for the advanced British gunboats, and soon the Yangzi River region came under threat of British bombardment and invasion. The emperor had no choice but to sue for peace, resulting in the exile of Lin and the Treaty of Nanking, which ceded control of Hong Kong to the British and opened trade and diplomacy with other European countries, including Germany, France, and the USA.
Manchuria
Manchuria/Northeast China came under Russian influence with the building of the Chinese Eastern Railway through Harbin to Vladivostok. The Empire of Japan replaced Russian influence in the region as a result of the Russo-Japanese War of 1904–1905, and Japan laid the South Manchurian Railway to Port Arthur in 1906. During the Warlord Era in China, Zhang Zuolin established himself in Northeast China but was murdered by the Japanese for being too independent. The former Chinese emperor, Puyi, was then placed on the throne to lead the Japanese puppet state of Manchukuo. In August 1945, the Soviet Union invaded the region. From 1945 to 1948, Northeast China was a base area for Mao Zedong's People's Liberation Army in the Chinese Civil War. With the encouragement of the Kremlin, the area served as a staging ground for the Chinese Communists, who were victorious in 1949 and have controlled the region ever since.
Joseon
By the 19th century, the king of Joseon had become powerless: the noble family of the king's wife held power and ruled the country in its own interest. Heungseon Daewongun, father of the 26th king, Gojong, wanted to make the king powerful again, even though he himself was not king. As regent for his young son, he broke the noble families and abolished corrupt institutions, restoring power to the royal family. But his rebuilding of Gyeongbokgung palace, undertaken to display royal power to the people, drew criticism for its enormous cost and the inflation it caused. He thereby lost influence, and his son Gojong came to rule in his own right.
Korean Empire
Under Article 1 of the Treaty of Shimonoseki, which ended the First Sino-Japanese War, Korea gained full independence from China. Gojong, the 26th king of Joseon, changed the nation's name to Daehan Jeguk (Korean Empire) and elevated himself to emperor. The new empire adopted more Western technology, strengthened its military power, and sought to become a neutral state. But in the Russo-Japanese War Japan ignored Korea's neutrality, and after defeating the Russian Empire it began to take over Korea, first stripping the Korean Empire of its diplomatic rights illegally. The Western countries ignored this encroachment, knowing that Japan had become a strong power by defeating Russia. Emperor Gojong sent diplomats to The Hague to publicize Japan's illegal seizure of the empire's rights, but the mission failed because the diplomats were not admitted to the conference. Japan used the affair as a pretext to force Gojong from the throne. Three years later, in 1910, the Korean Empire was annexed into the Empire of Japan, the first such loss of sovereignty since the invasion by the Han dynasty in 108 BC.
Contemporary
The European powers had control of other parts of Asia by the early 20th century, such as British India, French Indochina, Spanish East Indies, and Portuguese Macau and Goa. The Great Game between Russia and Britain was the struggle for power in the Central Asian region in the nineteenth century. The Trans-Siberian Railway, crossing Asia by train, was complete by 1916. Parts of Asia remained free from European control, although not influence, such as Persia, Thailand and most of China. In the twentieth century, Imperial Japan expanded into China and Southeast Asia during World War II. After the war, many Asian countries became independent from European powers. During the Cold War, the northern parts of Asia were communist controlled with the Soviet Union and People's Republic of China, while western allies formed pacts such as CENTO and SEATO. Conflicts such as the Korean War, Vietnam War and Soviet invasion of Afghanistan were fought between communists and anti-communists. In the decades after the Second World War, a massive restructuring plan drove Japan to become the world's second-largest economy, a phenomenon known as the Japanese post-war economic miracle. The Arab–Israeli conflict has dominated much of the recent history of the Middle East. After the Soviet Union's collapse in 1991, there were many new independent nations in Central Asia.
China
Prior to World War II, China faced a civil war between Mao Zedong's Communist Party and Chiang Kai-shek's Nationalist Party, with the Nationalists initially appearing to be in the lead. However, once the Japanese invaded in 1937, the two parties were forced into a temporary truce in order to defend China. The Nationalists suffered many military failures that cost them territory and, subsequently, the respect of the Chinese masses. In contrast, the Communists' use of guerrilla warfare (led by Lin Biao) proved effective against the Japanese conventional methods and put the Communist Party on top by 1945. The Communists also gained popularity through the reforms they were already applying in areas under their control, including land redistribution, education reforms, and widespread health care. Within four years, the Nationalists were forced to retreat to Taiwan (formerly known as Formosa), the island east of Fujian province, where they remain today. In mainland China, the People's Republic of China was established by the Communist Party, with Mao Zedong as its state chairman.
The communist government in China was defined by the party cadres. These hard-line officers controlled the People's Liberation Army, which itself controlled large parts of the bureaucracy. This system was further controlled by the Central Committee, which additionally supported the state chairman, considered the head of government. The People's Republic's foreign policies included suppressing secession attempts in Mongolia and Tibet and supporting North Korea and North Vietnam in the Korean War and Vietnam War, respectively. By 1960, China and the USSR had become adversaries, battling worldwide for control of local communist movements.
Today China plays an important role in world economics and politics: it is the world's second-largest economy and the second-fastest-growing.
Indian Subcontinent
From the mid-18th century to the mid-19th century, large regions of India were gradually annexed by the East India Company, a chartered company acting as a sovereign power on behalf of the British government. Dissatisfaction with company rule in India led to the Indian Rebellion of 1857, which rocked parts of north and central India, and led to the dissolution of the company. India was afterwards ruled directly by the British Crown, in the British Raj. After World War I, a nationwide struggle for independence was launched by the Indian National Congress, led by Mahatma Gandhi, and noted for nonviolence. Later, the All-India Muslim League would advocate for a separate Muslim-majority nation state.
In August 1947, the British Indian Empire was partitioned into the Union of India and the Dominion of Pakistan. In particular, the partition of Punjab and Bengal led to rioting between Hindus, Muslims, and Sikhs that spread to other nearby regions, leaving some 500,000 dead. The police and army units were largely ineffective: the British officers were gone, and the units were beginning to tolerate, if not actually indulge in, violence against their religious enemies. This period also saw one of the largest mass migrations in modern history, with a total of 12 million Hindus, Sikhs, and Muslims moving between the newly created nations of India and Pakistan (which gained independence on 15 and 14 August 1947, respectively). In 1971, Bangladesh, formerly East Pakistan and East Bengal, seceded from Pakistan through an armed conflict sparked by the rise of the Bengali nationalist and self-determination movement.
Korea
Around the time of the Korean War, Korea was divided into North and South. Syngman Rhee became the first president of South Korea, and Kim Il Sung became the supreme leader of North Korea. After the war, Syngman Rhee tried to make himself a dictator, prompting the April Revolution; Rhee was eventually forced into exile.
Park Chung Hee seized power in a military coup d'état in 1961 and assumed the presidency in 1963. He dispatched Republic of Korea Army forces to the Vietnam War, and during this era the South Korean economy overtook that of North Korea.
Although Park Chung Hee improved the nation's economy, he ruled as a dictator and grew unpopular; he was eventually assassinated by Kim Jae-gyu. In 1979, Chun Doo-hwan took power through another military coup d'état. He violently suppressed resistance in the city of Gwangju, an event known as the Gwangju Uprising. Despite the uprising, Chun Doo-hwan became president. The people resisted again in 1987 in a movement called the June Struggle, and as a result of the Gwangju Uprising and the June Struggle, South Korea finally became a democratic republic in 1987.
Roh Tae-woo (1988–93), Kim Young-sam (1993–98), Kim Dae-jung (1998–2003), Roh Moo-hyun (2003–2008), Lee Myung-bak (2008–2013), Park Geun-hye (2013–2017), and Moon Jae-in (2017–) were elected president in succession after 1987. In 1960, North Korea was far wealthier than South Korea, but by the 1970s South Korea had begun to outpace the North Korean economy. As of 2018, South Korea ranked tenth in the world by GDP.
See also
Ancient Asian history
History of Southeast Asia
Prehistoric Asia
Nativism (politics) | Nativism is the political policy of promoting or protecting the interests of native-born or indigenous people over those of immigrants, including the support of anti-immigration and immigration-restriction measures. In the United States, nativism does not refer to a movement led by Native Americans, also referred to as American Indians.
Definition
According to Cas Mudde, a University of Georgia professor, nativism is a largely American notion that is rarely debated in Western Europe or Canada; the word originated with mid-nineteenth-century political parties in the United States, most notably the Know Nothing party, which saw Catholic immigration from nations such as Germany and Ireland as a serious threat to native-born Protestant Americans.
Causes
According to Joel S. Fetzer, opposition to immigration commonly arises in many countries because of issues of national, cultural, and religious identity. The phenomenon has especially been studied in Australia, Canada, New Zealand, the United Kingdom, and the United States, as well as in continental Europe. Thus, nativism has become a general term for opposition to immigration which is based on fears that immigrants will "distort or spoil" existing cultural values. In situations where immigrants greatly outnumber the original inhabitants, nativists seek to prevent cultural change.
Beliefs that contribute to anti-immigration sentiment include:
Economic
Employment: The belief that immigrants acquire jobs that would have otherwise been available to native citizens, limiting native employment, and the belief that immigrants also create a surplus of labor that results in lowered wages.
Government expense: The belief that immigrants do not pay enough taxes to cover the cost of the services they require.
Welfare: The belief that immigrants make heavy use of the social welfare systems.
Housing: The belief that immigrants reduce vacancies, causing rent increases.
Cultural
Language: The belief that immigrants isolate themselves in their own communities and refuse to learn the local language.
Culture: The belief that immigrants will outnumber the native population and replace its culture with theirs.
Crime: The belief that immigrants are more prone to crime than the native population.
Patriotism: The belief that immigrants damage a nation's sense of community based on ethnicity and nationality.
Environmental
Environment: The belief that immigrants increase the consumption of limited resources.
Overpopulation: The belief that immigration contributes to overpopulation.
Decolonization: The belief that immigrants are colonizing the native or indigenous people.
Hans-Georg Betz examines three facets of nativism: economic, welfare, and symbolic. Economic nativism preaches that good jobs ought to be reserved for native citizens. Welfare nativism insists that native citizens should have absolute priority in access to governmental benefits. Symbolic nativism calls on the society and government to defend and promote the nation's cultural heritage. Betz argues that economic and welfare themes were historically dominant, but that since the 1990s symbolic nativism has become the focus of radical right-wing populist mobilization.
By country and region
Asia-Pacific
Australia
Many Australians opposed the influx of Chinese immigrants at time of the nineteenth-century gold rushes. When the separate Australian colonies formed the Commonwealth of Australia in 1901, the new nation adopted "White Australia" as one of its founding principles. Under the White Australia policy, entry of Chinese and other Asians remained controversial until well after World War II, although the country remained home to many long-established Chinese families dating from before the adoption of White Australia. By contrast, most Pacific Islanders were deported soon after the policy was adopted, while the remainder were forced out of the canefields where they had worked for decades.
Antipathy of native-born white Australians toward British and Irish immigrants in the late 19th century was manifested in a new party, the Australian Natives' Association.
Since the early 2000s, opposition has mounted to asylum seekers arriving in boats from Indonesia.
South Korea
The Democratic Party of Korea has been described as nativist by scholars due to its support for Korean nationalism and opposition to immigration.
Pakistan
The Pakistani province of Sindh has seen nativist movements, promoting control for the Sindhi people over their homeland. After the 1947 Partition of India, large numbers of Muhajir people migrating from India entered the province, becoming a majority in the provincial capital city of Karachi, which formerly had an ethnically Sindhi majority. Sindhis have also voiced opposition to the promotion of Urdu, as opposed to their native tongue, Sindhi.
These nativist movements are expressed through Sindhi nationalism and the Sindhudesh separatist movement. Nativist and nationalist sentiments increased greatly after the independence of Bangladesh from Pakistan in 1971.
Taiwan
After the Chinese Civil War, Taiwan became a sanctuary for Chinese Nationalists who followed a Western ideology, fleeing from the Communists. The new arrivals governed through the Chinese Nationalist Party (Kuomintang) until the 1970s. Taiwanese identity constructed through literature in the post-civil war period led to the gradual acceptance of Taiwan's unique political destiny, which in turn led to a peaceful transition of power from the Kuomintang to the Democratic Progressive Party in the 2000s. A-chin Hsiau, author of Politics and Cultural Nativism, traces the origins of Taiwanese national identity to the 1970s, when youth activism transformed society, politics, and culture in ways that are still present today.
Americas
Brazil
The Brazilian elite desired the racial whitening of the country, similarly to Argentina and Uruguay. The country encouraged European immigration, but non-white immigration always faced considerable backlash. On July 28, 1921, representatives Andrade Bezerra and Cincinato Braga proposed a law whose Article 1 provided: "The immigration of individuals from the black race to Brazil is prohibited." On October 22, 1923, representative Fidélis Reis produced another bill on the entry of immigrants, whose fifth article was as follows: "The entry of settlers from the black race into Brazil is prohibited. For Asian [immigrants] there will be allowed each year a number equal to 5% of those residing in the country.(...)".
In the 19th and 20th centuries, there were negative feelings toward the communities of German, Italian, Japanese, and Jewish immigrants, who preserved their languages and cultures instead of adopting Portuguese and Brazilian habits (so that, nowadays, Brazil has the largest communities of Venetian speakers in the Americas, and the second-largest of German speakers). They were also seen as particularly likely to form ghettos and to have high rates of endogamy (in Brazil, it is regarded as usual for people of different backgrounds to marry), among other concerns.
It affected the Japanese more harshly, because they were Asian, and thus seen as an obstacle to the whitening of Brazil. Oliveira Viana, a Brazilian jurist, historian and sociologist, described the Japanese immigrants as follows: "They (Japanese) are like sulfur: insoluble". The Brazilian magazine O Malho, in its edition of December 5, 1908, criticised the Japanese immigrants in the following quote: "The government of São Paulo is stubborn. After the failure of the first Japanese immigration, it contracted 3,000 yellow people. It insists on giving Brazil a race diametrically opposite to ours". In 1941 the Brazilian minister of justice, Francisco Campos, defended the ban on the admission of 400 Japanese immigrants into São Paulo, writing: "their despicable standard of living is a brutal competition with the country's worker; their selfishness, their bad faith, their refractory character, make them a huge ethnic and cultural cyst located in the richest regions of Brazil".
Years before World War II, the government of President Getúlio Vargas initiated a process of forced assimilation of people of immigrant origin in Brazil. In 1933, a constitutional amendment was approved by a large majority and established immigration quotas without mentioning race or nationality and prohibited the population concentration of immigrants. According to the text, Brazil could not receive more than 2% of the total number of entrants of each nationality that had been received in the last 50 years. Only the Portuguese were excluded. The measures did not affect the immigration of Europeans such as Italians and Spaniards, who had already entered in large numbers and whose migratory flow was downward. However, immigration quotas, which remained in force until the 1980s, restricted Japanese immigration, as well as Korean and Chinese immigration.
During World War II, Japanese Brazilians were seen as more loyal to their country of origin than to Brazil. Indeed, there were violent revolts in the Japanese communities of the states of São Paulo and Paraná when Emperor Hirohito declared the Japanese surrender and stated that he was not really a deity, news which was seen as a conspiracy perpetrated in order to hurt Japanese honour and strength. These revolts followed years of hostility from the government.

The Japanese Brazilian community was strongly marked by restrictive measures when Brazil declared war against Japan in August 1942. Japanese Brazilians could not travel the country without safe conduct issued by the police; over 200 Japanese schools were closed and radio equipment was seized to prevent transmissions on short wave from Japan. The goods of Japanese companies were confiscated and several companies of Japanese origin suffered restrictions, including the newly founded Banco América do Sul. Japanese Brazilians were prohibited from driving motor vehicles (even if they were taxi drivers), or from driving buses or trucks on their own property; drivers employed by Japanese had to have permission from the police. Thousands of Japanese immigrants were arrested or expelled from Brazil on suspicion of espionage. There were many anonymous denunciations of "activities against national security" arising from disagreements between neighbours, recovery of debts and even fights between children. Japanese Brazilians were arrested for "suspicious activity" when they were at artistic meetings or picnics.

On July 10, 1943, approximately 10,000 Japanese and German immigrants who lived in Santos were given 24 hours to close their homes and businesses and move away from the Brazilian coast. The police acted without any notice. About 90% of the people displaced were Japanese, and to reside in the Baixada Santista, the Japanese had to carry safe conduct. In 1942, the Japanese community that had introduced the cultivation of pepper in Tomé-Açu, in Pará, was virtually turned into a "concentration camp" (an expression of the time) from which no Japanese could leave. The Brazilian ambassador in Washington, D.C., Carlos Martins Pereira e Sousa, encouraged the government of Brazil to transfer all Japanese Brazilians to "internment camps" without the need for legal support, in the same manner as was done with Japanese residents in the United States. Not a single suspicion of Japanese activities against "national security" was ever confirmed.
Nowadays, nativism in Brazil primarily affects migrants from elsewhere in the Third World, such as the new wave of Levantine Arabs (this time mostly Muslims from Palestine, rather than the overwhelmingly Christian earlier arrivals from Syria and Lebanon), South and East Asians (primarily Mainland Chinese), Spanish speakers and Amerindians from neighbouring South American countries and, especially, West Africans and Haitians. Following the 2010 Haiti earthquake and considerable illegal immigration to northern Brazil and São Paulo, public debate turned to the reasons why Brazil has such lax laws and enforcement concerning illegal immigration.
Under Brazil's 1988 Constitution, it is an unbailable crime to address someone in an offensively racist way, and it is illegal to discriminate against someone on the basis of race, skin colour, national or regional origin, or nationality; thus nativism and opposition to multiculturalism are too polemical and delicate a topic to be openly discussed as a basic ideology for even the most right-leaning modern political parties.
Canada
Throughout the 19th century and well into the 20th, the Orange Order in Canada attacked and tried to politically defeat the Irish Catholics. In the British Empire, traditions of anti-Catholicism in Britain led to fears that Catholics were a threat to national (British) values. In Canada, the Orange Order campaigned vigorously against the Catholics throughout the 19th century, often with violent confrontations. Both sides were immigrants from Ireland and neither side claimed loyalty to Canada.
The Ku Klux Klan spread in the mid-1920s from the U.S. to parts of Canada, especially Saskatchewan, where it helped topple the Liberal government. The Klan creed was, historian Martin Robin argues, in the mainstream of Protestant Canadian sentiment, for it was based on "Protestantism, separation of Church and State, pure patriotism, restrictive and selective immigration, one national public school, one flag and one language—English" (Martin Robin, Shades of Right: Nativist and Fascist Politics in Canada, 1920–1940 (1991), quote on pp. 23–24). Robin notes (p. 86) that the Klan in Canada was not violent.
In World War I, Canadian naturalized citizens of German or Austrian origins were stripped of their right to vote, and tens of thousands of Ukrainians (who were born in the Austro-Hungarian Empire) were rounded up and put in internment camps.
Hostility to the Chinese and other Asians was intense, and involved provincial laws that hindered immigration of Chinese and Japanese and blocked their economic mobility. In 1942 Japanese Canadians were forced into detention camps in response to Japanese aggression in World War II.
Hostility of native-born Canadians to competition from English immigrants in the early 20th century was expressed in signs that read, "No English Need Apply!" The resentment came because the immigrants identified more with England than with Canada.
United States
According to the American historian John Higham, nativism is: an intense opposition to an internal minority on the grounds of its foreign (i.e., "un-American") connections. Specific nativist antagonisms may, and do, vary widely in response to the changing character of minority irritants and the shifting conditions of the day; but through each separate hostility runs the connecting, energizing force of modern nationalism. While drawing on much broader cultural antipathies and ethnocentric judgments, nativism translates them into zeal to destroy the enemies of a distinctively American way of life.
Colonial era
There was nativism in the colonial era, shown by English colonists against the Palatine German immigrants in the Pennsylvania Colony. Benjamin Franklin questioned the wisdom of allowing Palatine refugees to settle in Pennsylvania. He was concerned about the potential consequences of their arrival, particularly regarding the preservation of Pennsylvania's English identity and heritage, and questioned whether it was prudent for a colony established by English settlers to be overwhelmed by newcomers who might not integrate into English culture and language.
Early republic
Nativism was a political factor in the 1790s and in the 1830s–1850s. It became a major issue in the late 1790s, when the Federalist Party expressed its strong opposition to the French Revolution by trying to strictly limit immigration and by extending the residency requirement for citizenship to 14 years. At the time of the Quasi-War with the French First Republic in 1798, the Federalists and Congress passed the Alien and Sedition Acts, including the Alien Act, the Naturalization Act and the Sedition Act. Thomas Jefferson and James Madison fought against the new laws by drafting the Virginia and Kentucky Resolutions. In 1800, Jefferson was elected president, and most of the anti-immigrant legislation was repealed.
1830–1860
The term "nativism" was first used by 1844: "Thousands were Naturalized expressly to oppose Nativism, and voted the Polk ticket mainly to that end." Nativism gained its name from the "Native American" parties of the 1840s and 1850s. In this context "Native" does not mean Indigenous Americans or American Indians but rather immigrant descendants of those descended from the inhabitants of the original Thirteen Colonies. It impacted politics in the mid-19th century because of the large inflows of immigrants after 1845 from cultures that were different from the existing American culture. Nativists objected primarily to Irish Roman Catholics because of their loyalty to the Pope and also because of their supposed rejection of republicanism as an American ideal.
Nativist movements included the Know Nothing or "American Party" of the 1850s, the Immigration Restriction League of the 1890s, the anti-Asian movements in the Western states, resulting in the Chinese Exclusion Act of 1882 and the "Gentlemen's Agreement of 1907", by which the government of Imperial Japan stopped emigration to the United States. Labor unions were strong supporters of Chinese exclusion and limits on immigration, because of fears that they would lower wages and make it harder for workers to organize unions.
Nativist outbursts occurred in the Northeast from the 1830s to the 1850s, primarily in response to a surge of Irish Catholic immigration. In 1836, Samuel Morse ran unsuccessfully for Mayor of New York City on a nativist ticket, receiving 1,496 votes. In December 1844, following the Philadelphia Nativist Riots of the preceding spring and summer, the Order of United Americans was founded in New York City as a nativist fraternity. The American historian Eric Kaufmann has suggested that American nativism has been explained primarily in psychological and economic terms, to the neglect of a crucial cultural and ethnic dimension. Furthermore, Kaufmann claims that American nativism cannot be understood without reference to an American ethnic group which took shape prior to the large-scale immigration of the mid-19th century.
The nativists went public in 1854 when they formed the "American Party", which was especially hostile to the immigration of Irish Catholics, and campaigned for laws to require a longer wait time between immigration and naturalization; these laws never passed. It was at this time that the term "nativist" first appeared, as opponents denounced them as "bigoted nativists". Former President Millard Fillmore ran on the American Party ticket for the presidency in 1856. Henry Winter Davis, an active Know-Nothing, was elected on the American Party ticket to Congress from Maryland. He told Congress the un-American Irish Catholic immigrants were to blame for the recent election of Democrat James Buchanan as president, stating: "The recent election has developed in an aggravated form every evil against which the American party protested. Foreign allies have decided the government of the country -- men naturalized in thousands on the eve of the election. Again in the fierce struggle for supremacy, men have forgotten the ban which the Republic puts on the intrusion of religious influence on the political arena. These influences have brought vast multitudes of foreign-born citizens to the polls, ignorant of American interests, without American feelings, influenced by foreign sympathies, to vote on American affairs; and those votes have, in point of fact, accomplished the present result."
The American Party also included many former Whigs who ignored nativism, and included (in the South) a few Roman Catholics whose families had long lived in North America. Conversely, much of the opposition to Roman Catholics came from Protestant Irish immigrants and German Lutheran immigrants, who were not native at all and can hardly be called "nativists."
This form of American nationalism is often identified with xenophobia and anti-Catholic sentiment. In Charlestown, Massachusetts, a nativist mob attacked and burned down a Roman Catholic convent in 1834 (no one was injured). In the 1840s, small-scale riots between Roman Catholics and nativists took place in several American cities. In Philadelphia, Pennsylvania in 1844, for example, a series of nativist assaults on Roman Catholic churches and community centers resulted in the loss of lives and the professionalization of the police force. In Louisville, Kentucky, election-day rioters killed at least 22 people in attacks on German and Irish Catholics on 6 August 1855, in what became known as "Bloody Monday."
The new Republican Party kept its nativist element quiet during the 1860s, since immigrants were urgently needed for the Union Army. European immigrants from England, Scotland, and Scandinavia favored the Republicans during the Third Party System (1854–1896), while others, especially Irish Catholics and Germans, were usually Democratic. Hostility toward Asians was very strong in the Western region from the 1860s to the 1940s. Anti-Catholicism experienced a revival in the 1890s in the American Protectiveation Association, led by Protestant Irish immigrants hostile to Irish Catholics.
Anti-German nativism
From the 1840s to the 1920s, German Americans were often distrusted because of their separatist social structure, their German-language schools, their attachment to their native tongue over English, and their neutrality during World War I.
The Bennett Law caused a political uproar in Wisconsin in 1890, as the state government passed a law that threatened to close down hundreds of German-language elementary schools. Catholic and Lutheran Germans rallied to defeat Governor William D. Hoard. Hoard attacked German American culture and religion:
"We must fight alienism and selfish ecclesiasticism.... The parents, the pastors and the church have entered into a conspiracy to darken the understanding of the children, who are denied by cupidity and bigotry the privilege of even the free schools of the state."
Hoard, a Republican, was defeated by the Democrats. A similar campaign in Illinois regarding the "Edwards Law" led to a Republican defeat there in 1890.
In 1917–1918, a wave of nativist sentiment due to American entry into World War I led to the suppression of German cultural activities in the United States, Canada, and Australia. There was little violence, but many places and streets had their names changed (The city of "Berlin" in Ontario was renamed "Kitchener" after a British hero), churches switched to English for their services, and German Americans were forced to buy war bonds to show their patriotism. In Australia thousands of Germans were put into internment camps.
Anti-Chinese nativism
In the 1870s and 1880s in the Western states, ethnic White immigrants, especially Irish Americans and German Americans, targeted violence against Chinese workers, driving them out of smaller towns. Denis Kearney, an immigrant from Ireland, led a mass movement in San Francisco in the 1870s that incited attacks on the Chinese there and threatened public officials and railroad owners. The Chinese Exclusion Act of 1882 was the first of many nativist acts of Congress which attempted to limit the flow of immigrants into the U.S. The Chinese responded to it by filing false claims of American birth, enabling thousands of them to immigrate to California. The exclusion of the Chinese caused the western railroads to begin importing Mexican railroad workers in greater numbers ("traqueros").
20th century
In the 1890s–1920s era, nativists and labor unions campaigned for immigration restriction following the waves of workers and families from Southern and Eastern Europe, including the Kingdom of Italy, the Balkans, Congress Poland, Austria-Hungary, and the Russian Empire. A favorite plan was the literacy test, to exclude workers who could not read or write their own foreign language. Congress passed literacy tests, but presidents—responding to business needs for workers—vetoed them. Senator Henry Cabot Lodge argued the need for literacy tests and described their implications for the new immigrants.
Responding to these demands, opponents of the literacy test called for the establishment of an immigration commission to focus on immigration as a whole. The United States Immigration Commission, also known as the Dillingham Commission, was created and tasked with studying immigration and its effect on the United States. The findings of the commission further influenced immigration policy and upheld the concerns of the nativist movement. Following World War I, nativists in the 1920s focused their attention on Southern and Eastern Europeans due to their Roman Catholic and Jewish faith, and realigned their beliefs behind racial and religious nativism.
Between the 1920s and the 1930s, the Ku Klux Klan developed an explicitly nativist, pro-Anglo-Saxon Protestant, anti-Catholic, anti-Irish, anti-Italian, and anti-Jewish stance in relation to the growing political, economic, and social uncertainty related to the arrival of European immigrants on American soil, predominantly Irish people, Italians, and Eastern European Jews. The racial concerns of the anti-immigration movement were linked closely to the eugenics movement that was sweeping the United States during the same period. Led by Madison Grant's book, The Passing of the Great Race, nativists grew more concerned with the racial purity of the United States. In his book, Grant argued that the American racial stock was being diluted by the influx of new immigrants from the Mediterranean, Ireland, the Balkans, and the ghettos. The Passing of the Great Race reached wide popularity among Americans and influenced immigration policy in the 1920s. In that decade, a wide national consensus sharply restricted the overall inflow of immigrants from southern and eastern Europe. The Second Ku Klux Klan, which flourished in the United States during the 1920s, used strong nativist, anti-Catholic, and anti-Jewish rhetoric, but the Catholics led a counterattack, such as in Chicago in 1921, where ethnic Irish residents hanged a Klan member in front of 3,000 people.
After intense lobbying from the nativist movement, the United States Congress passed the Emergency Quota Act in 1921. This bill was the first to place numerical quotas on immigration, capping the annual inflow of immigrants from outside the Western Hemisphere at 357,803. However, this bill was only temporary, as Congress began debating a more permanent bill. The Emergency Quota Act was followed by the Immigration Act of 1924, a more permanent resolution. This law reduced the cap from 357,803, the number established in the Emergency Quota Act, to 164,687. Though this bill did not fully restrict immigration, it considerably curbed the flow of immigration into the United States, especially from Southern and Eastern Europe. During the late 1920s, an average of 270,000 immigrants were allowed to arrive annually, mainly because of the exemption of Canada and Latin American countries. Fear of low-skilled Southern and Eastern European immigrants flooding the labor market was an issue in the 1920s, the 1930s, and the first decade of the 21st century (when the focus was on immigrants from Mexico and Central America).
An immigration reductionism movement formed in the 1970s and continues to the present day. Prominent members often press for massive, sometimes total, reductions in immigration levels. American nativist sentiment experienced a resurgence in the late 20th century, this time directed at undocumented workers, largely Mexican, resulting in the passage of new penalties against illegal immigration in 1996. Most immigration reductionists see illegal immigration, principally from across the United States–Mexico border, as the more pressing concern. Authors such as Samuel Huntington have also seen recent Hispanic immigration as creating a national identity crisis and presenting insurmountable problems for US social institutions.
Despite the fact that Mexican people descend from peoples native to the region, when noting Mexican immigration in the Southwest, the European-American Cold-War diplomat George F. Kennan wrote in 2002 that he saw "unmistakable evidences of a growing differentiation between the cultures, respectively, of large southern and southwestern regions of this country, on the one hand", and those of "some northern regions", and warned of the consequences for the former.
David Mayers argues that Kennan represented the "tradition of militant nativism" that resembled or even exceeded the Know Nothings of the 1850s.
21st century
By late 2014, the "Tea Party movement" had turned its focus away from economic issues, spending, and Obamacare, and towards President Barack Obama's immigration policies, which it saw as threatening to transform American society. It planned to defeat leading Republicans who supported immigration programs, such as Senator John McCain. A typical slogan appeared in the Tea Party Tribune: "Amnesty for Millions, Tyranny for All." The New York Times reported:
What started five years ago as a groundswell of conservatives committed to curtailing the reach of the federal government, cutting the deficit and countering the Wall Street wing of the Republican Party has become a movement largely against immigration overhaul. The politicians, intellectual leaders and activists who consider themselves part of the Tea Party have redirected their energy from fiscal austerity and small government to stopping any changes that would legitimize people who are here illegally, either through granting them citizenship or legal status.
Political scientist and pollster Darrell Bricker, CEO of Ipsos Public Affairs, argues nativism is the root cause of the early 21st century wave of populism.
[T]he jet fuel that’s really feeding the populist firestorm is nativism, the strong belief among an electorally important segment of the population that governments and other institutions should honour and protect the interests of their native-born citizens against the cultural changes being brought about by immigration. This, according to the populists, is about protecting the “Real America” (or “Real Britain” or “Real Poland” or “Real France” or “Real Hungary”) from imported influences that are destroying the values and cultures that have made their countries great.
Importantly, it’s not just the nativists who are saying this is a battle over values and culture. Their strongest opponents believe this too, and they are not prepared to concede the high ground on what constitutes a “real citizen” to the populists. For them, this is a battle about the rule of law, inclusiveness, open borders, and global participation.
In his 2016 bid for the presidency, Republican presidential candidate Donald Trump was accused of introducing nativist themes via his controversial stances on temporarily banning foreign Muslims from six specific countries from entering the United States, and on erecting a substantial wall along the US-Mexico border to halt illegal immigration. Journalist John Cassidy wrote in The New Yorker that Trump was transforming the GOP into a populist, nativist party:
Trump has been drawing on a base of alienated white working-class and middle-class voters, seeking to remake the G.O.P. into a more populist, nativist, avowedly protectionist, and semi-isolationist party that is skeptical of immigration, free trade, and military interventionism.
Donald Brand, a professor of political science, argues:
Donald Trump's nativism is a fundamental corruption of the founding principles of the Republican Party. Nativists champion the purported interests of American citizens over those of immigrants, justifying their hostility to immigrants by the use of derogatory stereotypes: Mexicans are rapists; Muslims are terrorists.
Language
American nativists have promoted English and deprecated the use of German and Spanish. English Only proponents in the late 20th century proposed an English Language Amendment (ELA), a Constitutional Amendment making English the official language of the United States, but it received limited political support.
Europe
In recent decades, distrust of immigrant populations and rising populism have become major themes in analyses of political tensions in Europe. Many observers see the post-1950s wave of immigration in Europe as fundamentally different from the pre-1914 patterns. They debate the role that cultural differences, ghettos, race, Muslim fundamentalism, poor education and poverty play in creating nativism among the host populations and a caste-type underclass, more similar to white-black tensions in the US. Sociologists Josip Kešić and Jan Willem Duyvendak define nativism as an intense opposition to an internal minority that is portrayed as a threat to the nation because of its different values and priorities. There are three subtypes: secularist nativism; racial nativism; and populist nativism, which seeks to restore the historic power and prestige of indigenous elites.
France
Once Italian workers in France had understood the benefit of unionism, and French unions were willing to overcome their fear of Italians as strikebreakers, the path to integration was open for most Italian immigrants. The French state, which was always more of an immigration state than other Western European nations, fostered and supported family-based immigration, and thus helped Italians on their immigration trajectory, with minimal nativism.
Algerian migration to France has generated nativism, characterized by the prominence of Jean-Marie Le Pen and his National Front.
Since the 1990s, France has experienced rising levels of Islamic antisemitism and antisemitic acts. By 2006, rising levels of antisemitism were recorded in French schools; reports related these to tensions between the children of North African Muslim immigrants and North African Jewish children. In the first half of 2009, an estimated 631 recorded acts of antisemitism took place in France, more than in the whole of 2008. Speaking to the World Jewish Congress in December 2009, the French Interior Minister Brice Hortefeux described the acts of antisemitism as "a poison to our republic". He also announced that he would appoint a special coordinator for fighting racism and antisemitism.
Germany
For the Poles in the mining districts of western Germany before 1914, nationalism (on both the German and the Polish sides) kept Polish workers, who had established an associational structure approaching institutional completeness (churches, voluntary associations, press, even unions), separate from the host German society. Lucassen found that religiosity and nationalism were more fundamental in generating nativism and inter-group hostility than the labor antagonism.
Nativism grew rapidly in the 1990s and since. (Alexander W. Schmidt-Catran and Dennis C. Spies, "Immigration and welfare support in Germany." American Sociological Review 81.2 (2016): 242–261, online.)
United Kingdom
The city of London became notorious for the prevalence of nativist viewpoints in the 16th century, and conditions worsened in the 1580s. Many European immigrants became disillusioned by routine threats of assault, numerous attempts at passing legislation calling for the expulsion of foreigners, and the great difficulty in acquiring English citizenship. Cities in the Dutch Republic often proved more hospitable, and many immigrants left London permanently. Nativism emerged in opposition to Irish and Jewish arrivals in the early 20th century. Irish immigrants in Great Britain during the 20th century became estranged from British society, something which Lucassen (2005) attributes to the deep religious divide between Irish Protestants and Catholics.
1930s
Between 1933 and 1939, many people from Nazi Germany, particularly members of minorities which were persecuted under Nazi rule, especially the Jews, sought to emigrate to the United Kingdom, and as many as 50,000 may have been successful. There were caps on the number of immigrants who could enter and, consequently, some applicants were turned away. When the UK declared war on Germany in 1939, however, migration between the countries ceased.
See also
Criticism of multiculturalism
Ethnocentrism
History of immigration to the United States
Identity politics
Nationalism
Opposition to immigration
Racism
Religious discrimination
Supremacism
White supremacy
Xenophobia
Antisemitism
References
Bibliography
Betz, Hans-Georg. " Facets of nativism: a heuristic exploration" Patterns of Prejudice (2019) 53#2 pp 111–135.
Groenfeldt, D. "The future of indigenous values: cultural relativism in the face of economic development", Futures, 35#9 (2003), pp. 917–29
Jensen, Richard. "Comparative Nativism: The United States, Canada and Australia, 1880s–1910s," Canadian Journal for Social Research (2010) vol 3#1 pp. 45–55
McNally, Mark. Proving the way: conflict and practice in the history of Japanese nativism (2005)
Mamdani, M. When Victims Become Killers: Colonialism, Nativism and the Genocide in Rwanda (2001)
Minkenberg, Michael. "The Radical Right and Anti-Immigrant Politics in Liberal Democracies since World War II: Evolution of a Political and Research Field." Polity 53.3 (2021): 394–417. doi.org/10.1086/714167
Mudde, Cas. The relationship between immigration and nativism in Europe and North America (Washington press, 2012) online.
Yakushko, Oksana. Modern-Day Xenophobia: Critical Historical and Theoretical Perspectives on the Roots of Anti-Immigrant Prejudice (Palgrave Macmillan, 2018)
United States
Alexseev, Mikhail A. Immigration Phobia and the Security Dilemma: Russia, Europe, and the United States (Cambridge University Press, 2005). 294 pp.
Allerfeldt, Kristofer. Race, Radicalism, Religion, and Restriction: Immigration in the Pacific Northwest, 1890–1924. Praeger, 2003. 235 pp.
Anbinder, Tyler. "Nativism and prejudice against immigrants," in A companion to American immigration, ed. by Reed Ueda (2006) pp. 177–201 excerpt
Barkan, Elliott R. "Return of the Nativists? California Public Opinion and Immigration in the 1980s and 1990s." Social Science History 2003 27(2): 229–83. in Project MUSE
Billington, Ray Allen. The Protestant Crusade, 1800–1860: A Study of the Origins of American Nativism (1964) online
Franchot, Jenny. Roads to Rome: The Antebellum Protestant Encounter with Catholicism (1994)
Finzsch, Norbert, and Dietmar Schirmer, eds. Identity and Intolerance: Nationalism, Racism, and Xenophobia in Germany and the United States (2002)
Higham, John, Strangers in the Land: Patterns of American Nativism, 1860–1925 (1955), the standard scholarly history
Hueston, Robert Francis. The Catholic Press and Nativism, 1840–1860 (1976)
Hughey, Matthew W. 'Show Me Your Papers! Obama's Birth and the Whiteness of Belonging.' Qualitative Sociology 35(2): 163–81 (2012)
Kaufmann, Eric. American Exceptionalism Reconsidered: Anglo-Saxon Ethnogenesis in the 'Universal' Nation, 1776–1850, Journal of American Studies, 33 (1999), 3, pp. 437–57.
Lee, Erika. "America first, immigrants last: American xenophobia then and now." Journal of the Gilded Age and Progressive Era 19.1 (2020): 3–18. online
Lee, Erika. America for Americans: A History of Xenophobia in the United States (2019). excerpt
Leonard, Ira M. and Robert D. Parmet. American Nativism 1830–1860 (1971)
Luebke, Frederick C. Bonds of Loyalty: German-Americans and World War I (1974)
Oxx, Katie. The Nativist Movement in America: Religious Conflict in the 19th Century (2013)
Schrag, Peter. Not Fit For Our Society: Immigration and Nativism in America (University of California Press; 2010) 256 pp. online
Canada
Houston, Cecil J. and Smyth, William J. The Sash Canada Wore: A Historical Geography of the Orange Order in Canada. U. of Toronto Press, 1980.
McLaughlin, Robert. "Irish Nationalism and Orange Unionism in Canada: A Reappraisal," Éire-Ireland 41.3&4 (2007) 80–109
Mclean, Lorna. "'To Become Part of Us': Ethnicity, Race, Literacy and the Canadian Immigration Act of 1919". Canadian Ethnic Studies 2004 36(2): 1–28.
Miller, J. R. Equal Rights: The Jesuits' Estates Act Controversy (1979), on late-19th-century Canada.
Palmer, Howard. Patterns of Prejudice: A History of Nativism in Alberta (1992)
Robin, Martin. Shades of Right: Nativist and Fascist Politics in Canada, 1920–1940 (University of Toronto Press, 1992).
See, S.W. Riots in New Brunswick: Orange Nativism and Social Violence in the 1840s (Univ of Toronto Press, 1993).
Ward, W. Peter. White Canada Forever: Popular Attitudes and Public Policy toward Orientals in British Columbia (1978)
Europe
Alexseev, Mikhail A. Immigration Phobia and the Security Dilemma: Russia, Europe, and the United States (Cambridge University Press, 2005). 294 pp.
Art, David. Inside the Radical Right: The Development of Anti-Immigrant Parties in Western Europe (Cambridge University Press; 2011) 288 pp. – examines anti-immigration activists and political candidates in 11 countries.
Betz, Hans-Georg. "Against the 'Green Totalitarianism': Anti-Islamic Nativism in Contemporary Radical Right-Wing Populism in Western Europe," in Christina Schori Liang, ed. Europe for the Europeans (2007)
Betz, Hans-Georg. ""Facets of nativism: a heuristic exploration" Patterns of Prejudice (2019) 53#2 pp 111–135.
Betz, Hans-Georg. Radical Right-Wing Populism in Western Europe (1994).
Ceuppens, Bambi. "Allochthons, Colonizers, and Scroungers: Exclusionary Populism in Belgium," African Studies Review, Volume 49, Number 2, September 2006, pp. 147–86 "Allochthons" means giving welfare benefits only to those groups that are considered to "truly belong"
Chapin, Wesley D. Germany for the Germans?: The Political Effects of International Migration (Greenwood, 1997).
Chinn, Jeff, and Robert Kaiser, eds. Russians as the New Minority: Ethnicity and Nationalism in the Soviet Successor States (1996)
Finzsch, Norbert, and Dietmar Schirmer, eds. Identity and Intolerance: Nationalism, Racism, and Xenophobia in Germany and the United States (2002)
Lucassen, Leo. The Immigrant Threat: The Integration of Old and New Migrants in Western Europe since 1850. University of Illinois Press, 2005. 280 pp; . Examines Irish immigrants in Britain, Polish immigrants in Germany, Italian immigrants in France (before 1940), and (since 1950), Caribbeans in Britain, Turks in Germany, and Algerians in France
Liang, Christina Schori, ed. Europe for the Europeans (2007)
Rose, Richard. "The End of Consensus in Austria and Switzerland," Journal of Democracy, Volume 11, Number 2, April 2000, pp. 26–40
Wertheimer, Jack. Unwelcome Strangers: East European Jews in Imperial Germany (1991)
External links
Henry A. Rhodes, "Nativist and Racist Movements in the U.S. and their Aftermath"
Dennis Kearney, President, and H. L. Knight, Secretary, "Appeal from California. The Chinese Invasion. Workingmen’s Address," Indianapolis Times, 28 February 1878.
"A Nation or Notion", by Patrick J. Buchanan, op-ed, 4 October 2006. A conservative defense of nativism.
PoliticosLatinos.com Videos of 2008 US Presidential Election Candidates' Positions regarding Immigration
"Anti-Immigration Groups and the Masks of False Diversity". False Diversity in Anti-Immigration organizations.
A Defense of Nativism, Conservative Heritage Times.
Anti-immigration politics
Nationalism
Political extremism
Racially motivated violence
Racism
Xenophobia
Class discrimination
Class discrimination, also known as classism, is prejudice or discrimination on the basis of social class. It includes individual attitudes, behaviors, systems of policies and practices that are set up to benefit the upper class at the expense of the lower class.
Social class refers to the grouping of individuals in a hierarchy based on wealth, income, education, occupation, and social network.
Studies show an interconnection between class discrimination and racism and sexism.
History
Class structures existed in simplified form in pre-agricultural societies, but they evolved into a more complex and established structure following the establishment of permanent agriculture-based civilizations with a food surplus.
Classism started to be practiced around the 18th century. Segregation into classes was accomplished through observable traits (such as race or profession) that were accorded varying statuses and privileges. Feudal classification systems might include merchant, serf, peasant, warrior, priestly, and noble classes. Rankings were far from invariant, with the merchant class in Europe outranking the peasantry, while merchants were explicitly inferior to peasants during the Tokugawa Shogunate in Japan. Modern classism, with less rigid class structures, is harder to identify. In a professional association posting, psychologist Thomas Fuller-Rowell states, "Experiences of [class] discrimination are often subtle rather than blatant, and the exact reason for unfair treatment is often not clear to the victim."
Intersections with other systems of oppression
Socioeconomic, racial/ethnic and gender inequalities in academic achievement have been widely reported in the United States, but how these three axes of inequality intersect to determine academic and non-academic outcomes among school-aged children is not well understood.
Institutional versus personal classism
The term classism can refer to personal prejudice against lower classes as well as to institutional classism, just as the term racism can refer either strictly to personal prejudice or to institutional racism. The latter has been defined as "the ways in which conscious or unconscious classism is manifest in the various institutions of our society".
As with social classes, the difference in social status between people determines how they behave toward each other and the prejudices they likely hold toward each other. People of higher status do not generally mix with lower-status people and often are able to control other people's activities by influencing laws and social standards.
The term "interpersonal" is sometimes used in place of "personal" as in "institutional classism (versus) interpersonal classism" and terms such as "attitude" or "attitudinal" may replace "interpersonal" as contrasting with institutional classism as in the Association of Magazine Media's definition of classism as "any attitude or institutional practice which subordinates people due to income, occupation, education and/or their economic condition".
Classism is also sometimes broken down into more than two categories, as in "personal, institutional and cultural" classism. Sociolinguists have observed that meta-social language abounds in lower registers, hence the slang terms for various classes or racial castes.
Structural positions
Schüssler Fiorenza describes interdependent "stratifications of gender, race, class, religion, heterosexualism, and age" as structural positions assigned at birth. She suggests that people inhabit several positions, and that positions with privilege become nodal points through which other positions are experienced. For example, in a context where gender is the primary privileged position (e.g. patriarchy, matriarchy), gender becomes the nodal point through which sexuality, race, and class are experienced. In a context where class is the primary privileged position (i.e. classism), gender and race are experienced through class dynamics. Fiorenza stresses that kyriarchy is not a hierarchical system as it does not focus on one point of domination. Instead it is described as a "complex pyramidal system" with those on the bottom of the pyramid experiencing the "full power of kyriarchal oppression". The kyriarchy is recognized as the status quo and therefore its oppressive structures may not be recognized.
To maintain this system, kyriarchy relies on the creation of a servant class, race, gender, or people. The position of this class is reinforced through "education, socialization, and brute violence and malestream rationalization". Tēraudkalns suggests that these structures of oppression are self-sustained by internalized oppression; those with relative power tend to remain in power, while those without tend to remain disenfranchised. In addition, structures of oppression amplify and feed into each other.
In the UAE, Western workers and local nationals are given better treatment or are preferred.
Media representation
Class discrimination can be seen in many different forms of media, such as television shows, films, and social media. Classism is also systemic, and its implications can go unnoticed in the media that society consumes. Portrayals of class discrimination in the media reveal what people feel and think about classism. When viewers see class discrimination in films and television shows, they are influenced to believe that this is how things are in real life for whatever class is being displayed. Children can be exposed to class discrimination through movies, with a large pool of high-grossing G-rated movies portraying classism in various contexts. Children may develop biases at a young age that shape their beliefs throughout their lifetime, which demonstrates the harm of class discrimination being prevalent in the media. Media is a major influence on the world today, and it can present something such as classism in many different lights. Media plays an important role in how certain groups of people are perceived, which can reinforce certain biases. Usually, lower-income people are displayed in the media as dirty, lacking education and manners, and homeless. People can use the media to learn more about different social classes, or use media such as social media to influence others toward what they believe. In some cases, people who are in a social class that is portrayed negatively by the media can be affected in school and social life, as "teenagers who grew up in poverty reported higher levels of discrimination, and the poorer the teens were, the more they experienced discrimination".
Legislation
The European Convention on Human Rights, in Article 14, contains protections against social class ("social origin") discrimination.
See also
References
Further reading
Bowker, Geoffrey C., and Susan Leigh Star. Sorting Things Out: Classification and Its Consequences. MIT Press, 1999.
(2016) 39(1) University of New South Wales Law Journal 84.
A People's History of the United States by Howard Zinn.
Hill, Marcia, and Esther Rothblum. Classism and Feminist Therapy: Counting Costs. New York: Haworth Press, 1996.
hooks, bell. Where we stand: class matters. New York & London: Routledge, 2000.
Gans, Herbert. The War Against the Poor, 1996.
Homan, Jacqueline S. Classism For Dimwits. Pennsylvania: Elf Books, 2007/2009.
Packard, Vance. The Status Seekers, 1959.
Beegle, Donna M. See Poverty - Be the Difference, 2009.
Leondar-Wright, Betsy. Class Matters: Cross-Class Alliance Building for Middle-Class Activists: New Society Publishers, 2005.
External links
"People Like Us" at PBS
Class Action
Political terminology
Social classes
Myth
Myth is a genre of folklore consisting primarily of narratives that play a fundamental role in a society. For scholars, this is very different from the vernacular usage of the term "myth" that refers to a belief that is not true. Instead, the veracity of a myth is not a defining criterion.
Myths are often endorsed by secular and religious authorities and are closely linked to religion or spirituality. Many societies group their myths, legends, and history together, considering myths and legends to be factual accounts of their remote past. In particular, creation myths take place in a primordial age when the world had not achieved its later form. Origin myths explain how a society's customs, institutions, and taboos were established and sanctified. National myths are narratives about a nation's past that symbolize the nation's values. There is a complex relationship between recital of myths and the enactment of rituals.
Etymology
The word "myth" comes from Ancient Greek , meaning 'speech, narrative, fiction, myth, plot'. In turn, Ancient Greek (, 'story', 'lore', 'legends', or 'the telling of stories') combines the word with the suffix -λογία (-logia, 'study') in order to mean 'romance, fiction, story-telling.' Accordingly, Plato used as a general term for 'fiction' or 'story-telling' of any kind. In Anglicised form, this Greek word began to be used in English (and was likewise adapted into other European languages) in the early 19th century, in a much narrower sense, as a scholarly term for "[a] traditional story, especially one concerning the early history of a people or explaining a natural or social phenomenon, and typically involving supernatural beings or events."
The Greek term was then borrowed into Late Latin, occurring in the title of Latin author Fulgentius' 5th-century Mythologiæ to denote what is now referred to as classical mythology—i.e., Greco-Roman etiological stories involving their gods. Fulgentius' Mythologiæ explicitly treated its subject matter as allegories requiring interpretation and not as true events. The Latin term was then adopted in Middle French as mythologie. Whether from French or Latin usage, English adopted the word "mythology" in the 15th century, initially meaning 'the exposition of a myth or myths', 'the interpretation of fables', or 'a book of such expositions'. The word is first attested in John Lydgate's Troy Book.
From Lydgate until the 17th or 18th century, "mythology" meant a moral, fable, allegory or a parable, or collection of traditional stories, understood to be false. It came eventually to be applied to similar bodies of traditional stories among other polytheistic cultures around the world. Thus "mythology" entered the English language before "myth". Johnson's Dictionary, for example, has an entry for mythology, but not for myth. Indeed, the Greek loanword mythos (pl. mythoi) and Latinate mythus (pl. mythi) both appeared in English before the first example of "myth" in 1830.
Protagonists and structure
The main characters in myths are usually non-humans, such as gods, demigods, and other supernatural figures; other classifications of myth include humans, animals, or combinations thereof. Stories of everyday humans, although often of leaders of some type, are usually contained in legends, as opposed to myths. Myths are sometimes distinguished from legends in that myths deal with gods, usually have no historical basis, and are set in a world of the remote past, very different from that of the present.
Definitions
Myth
Definitions of "myth" vary to some extent among scholars, though Finnish folklorist Lauri Honko offers a widely-cited definition:
Another definition of myth comes from myth criticism theorist and professor José Manuel Losada. According to Cultural Myth Criticism, the studies of myth must explain and understand "myth from inside", that is, only "as a myth". Losada defines myth as "a functional, symbolic and thematic narrative of one or several extraordinary events with a transcendent, sacred and supernatural referent; that lacks, in principle, historical testimony; and that refers to an individual or collective, but always absolute, cosmogony or eschatology". According to the hylistic myth research by assyriologist Annette Zgoll and classic philologist Christian Zgoll, "A myth can be defined as an Erzählstoff [narrative material] which is polymorphic through its variants and – depending on the variant – polystratic; an Erzählstoff in which transcending interpretations of what can be experienced are combined into a hyleme sequence with an implicit claim to relevance for the interpretation and mastering of the human condition."
Scholars in other fields use the term "myth" in varied ways. In a broad sense, the word can refer to any traditional story, popular misconception or imaginary entity.
Though myth and other folklore genres may overlap, myth is often thought to differ from genres such as legend and folktale in that neither legends nor folktales are considered to be sacred narratives. Some kinds of folktales, such as fairy stories, are not considered true by anyone, and may be seen as distinct from myths for this reason. Main characters in myths are usually gods, demigods or supernatural humans, while legends generally feature humans as their main characters. Many exceptions and combinations exist, as in the Iliad, Odyssey and Aeneid. Moreover, as stories spread between cultures or as faiths change, myths can come to be considered folktales, their divine characters recast either as humans or as demihumans such as giants, elves and faeries. Conversely, historical and literary material may acquire mythological qualities over time. For example, the Matter of Britain (the legendary history of Great Britain, especially those stories focused on King Arthur and the knights of the Round Table) and the Matter of France seem distantly to originate in historical events of the 5th and 8th centuries, respectively, and became mythologised over the following centuries.
In colloquial use, "myth" can also be used of a collectively held belief that has no basis in fact, or any false story. This usage, which is often pejorative, arose from labelling the religious myths and beliefs of other cultures as incorrect, but it has spread to cover non-religious beliefs as well.
As commonly used by folklorists and academics in other relevant fields, such as anthropology, "myth" carries no implication as to whether the narrative may be understood as true or otherwise. Among biblical scholars of both the Old and New Testament, the word "myth" has a technical meaning, in that it usually refers to narratives that "describe the actions of the other-worldly in terms of this world", such as the Creation and the Fall.
Since "myth" is popularly used to describe stories that are not objectively true, the identification of a narrative as a myth can be highly controversial. Many religious adherents believe that the narratives told in their respective religious traditions are historical without question, and so object to their identification as myths while labelling traditional narratives from other religions as such. Hence, some scholars may label all religious narratives as "myths" for practical reasons, such as to avoid depreciating any one tradition because cultures interpret each other differently relative to one another. Other scholars may abstain from using the term "myth" altogether for purposes of avoiding placing pejorative overtones on sacred narratives.
Related terms
Mythology
In present use, "mythology" usually refers to the collection of myths of a group of people. For example, Greek mythology, Roman mythology, Celtic mythology and Hittite mythology all describe the body of myths retold among those cultures.
"Mythology" can also refer to the study of myths and mythologies.
Mythography
The compilation or description of myths is sometimes known as "mythography", a term also used for a scholarly anthology of myths or for the study of myths generally.
Key mythographers in the Classical tradition include:
Ovid (43 BCE–17/18 CE), whose tellings of myths have been profoundly influential;
Fabius Planciades Fulgentius, a Latin writer of the late-5th to early-6th centuries, whose Mythologies gathered and gave moralistic interpretations of a wide range of myths;
the anonymous medieval Vatican Mythographers, who developed anthologies of Classical myths that remained influential to the end of the Middle Ages; and
Renaissance scholar Natalis Comes, whose ten-book Mythologiae became a standard source for classical mythology in later Renaissance Europe.
Other prominent mythographies include the thirteenth-century Prose Edda attributed to the Icelander Snorri Sturluson, which is the main surviving survey of Norse mythology from the Middle Ages.
Jeffrey G. Snodgrass (professor of anthropology at Colorado State University) has described India's Bhats as mythographers.
Myth criticism
Myth criticism is a system of anthropological interpretation of culture created by French philosopher Gilbert Durand. Scholars have used myth criticism to explain the mythical roots of contemporary fiction, a task that requires modern myth criticism to be interdisciplinary.
Professor Losada offers his own methodological, hermeneutic and epistemological approach to myth. While assuming mythopoetical perspectives, Losada's Cultural Myth Criticism takes a step further, incorporating the study of the transcendent dimension (its function, its disappearance) to evaluate the role of myth as a mirror of contemporary culture.
Cultural myth criticism
Cultural myth criticism, without abandoning the analysis of the symbolic, extends to all cultural manifestations and delves into the difficulties in understanding myth today. This cultural myth criticism studies mythical manifestations in fields as wide as literature, film and television, theater, sculpture, painting, video games, music, dancing, the Internet and other artistic fields.
Mythos
Because "myth" is sometimes used in a pejorative sense, some scholars have opted for "mythos" instead. "Mythos" now more commonly refers to its Aristotelian sense as a "plot point" or to a body of interconnected myths or stories, especially those belonging to a particular religious or cultural tradition. It is sometimes used specifically for modern, fictional mythologies, such as the world building of H. P. Lovecraft.
Mythopoeia
Mythopoeia (from the Greek for 'myth-making') was a term used by J. R. R. Tolkien, amongst others, to refer to the "conscious generation" of mythology. It was notoriously also suggested, separately, by Nazi ideologist Alfred Rosenberg.
Interpretations
Comparative mythology
Comparative mythology is a systematic comparison of myths from different cultures. It seeks to discover underlying themes that are common to the myths of multiple cultures. In some cases, comparative mythologists use the similarities between separate mythologies to argue that those mythologies have a common source. This source may inspire myths or provide a common "protomythology" that diverged into the mythologies of each culture.
Functionalism
A number of commentators have argued that myths function to form and shape society and social behaviour. Eliade argued that one of the foremost functions of myth is to establish models for behavior and that myths may provide a religious experience. By telling or reenacting myths, members of traditional societies detach themselves from the present, returning to the mythical age, thereby coming closer to the divine.
Honko asserted that, in some cases, a society reenacts a myth in an attempt to reproduce the conditions of the mythical age. For example, it might reenact the healing performed by a god at the beginning of time in order to heal someone in the present. Similarly, Barthes argued that modern culture explores religious experience. Since it is not the job of science to define human morality, a religious experience is an attempt to connect with a perceived moral past, which is in contrast with the technological present.
Pattanaik defines mythology as "the subjective truth of people communicated through stories, symbols and rituals." He says, "Facts are everybody's truth. Fiction is nobody's truth. Myths are somebody's truth."
Euhemerism
One theory claims that myths are distorted accounts of historical events. According to this theory, storytellers repeatedly elaborate upon historical accounts until the figures in those accounts gain the status of gods. For example, the myth of the wind-god Aeolus may have evolved from a historical account of a king who taught his people to use sails and interpret the winds. Herodotus (fifth century BCE) and Prodicus made claims of this kind. This theory is named euhemerism after mythologist Euhemerus, who suggested that Greek gods developed from legends about humans.
Allegory
Some theories propose that myths began as allegories for natural phenomena: Apollo represents the sun, Poseidon represents water, and so on. According to another theory, myths began as allegories for philosophical or spiritual concepts: Athena represents wise judgment, Aphrodite romantic desire, and so on. Müller supported an allegorical theory of myth. He believed myths began as allegorical descriptions of nature and gradually came to be interpreted literally. For example, a poetic description of the sea as "raging" was eventually taken literally and the sea was then thought of as a raging god.
Personification
Some thinkers claimed that myths result from the personification of objects and forces. According to these thinkers, the ancients worshiped natural phenomena, such as fire and air, gradually deifying them. For example, according to this theory, the ancients tended to view things as persons, not as mere objects. Thus, they described natural events as acts of personal gods, giving rise to myths.
Ritualism
According to the myth-ritual theory, myth is tied to ritual. In its most extreme form, this theory claims myths arose to explain rituals. This claim was first put forward by Smith, who argued that people begin performing rituals for reasons not related to myth. Forgetting the original reason for a ritual, they account for it by inventing a myth and claiming the ritual commemorates the events described in that myth. James George Frazer—author of The Golden Bough, a book on the comparative study of mythology and religion—argued that humans started out with a belief in magical rituals; later, they began to lose faith in magic and invented myths about gods, reinterpreting their rituals as religious rituals intended to appease the gods.
Academic discipline history
Historically, important approaches to the study of mythology have included those of Vico, Schelling, Schiller, Jung, Freud, Lévy-Bruhl, Lévi-Strauss, Frye, the Soviet school, and the Myth and Ritual School.
Ancient Greece
The critical interpretation of myth began with the Presocratics. Euhemerus was one of the most important pre-modern mythologists. He interpreted myths as accounts of actual historical events, though distorted over many retellings.
Sallustius divided myths into five categories:
theological;
physical (or concerning natural law);
animistic (or concerning soul);
material; and
mixed, which concerns myths that show the interaction between two or more of the previous categories and are particularly used in initiations.
Plato condemned poetic myth when discussing education in the Republic. His critique was primarily on the grounds that the uneducated might take the stories of gods and heroes literally. Nevertheless, he constantly referred to myths throughout his writings. As Platonism developed in the phases commonly called Middle Platonism and neoplatonism, writers such as Plutarch, Porphyry, Proclus, Olympiodorus, and Damascius wrote explicitly about the symbolic interpretation of traditional and Orphic myths.
Mythological themes were consciously employed in literature, beginning with Homer. The resulting work may expressly refer to a mythological background without itself becoming part of a body of myths (Cupid and Psyche). Medieval romance in particular plays with this process of turning myth into literature. Euhemerism, as stated earlier, refers to the rationalization of myths, putting themes formerly imbued with mythological qualities into pragmatic contexts. This often occurs following a cultural or religious paradigm shift (notably the re-interpretation of pagan mythology following Christianization).
European Renaissance
Interest in polytheistic mythology revived during the Renaissance, with early works of mythography appearing in the sixteenth century, among them the Theologia Mythologica (1532).
19th century
The first modern, Western scholarly theories of myth appeared during the second half of the 19th century—at the same time as "myth" was adopted as a scholarly term in European languages. They were driven partly by a new interest in Europe's ancient past and vernacular culture, associated with Romantic Nationalism and epitomised by the research of Jacob Grimm (1785–1863). This movement drew European scholars' attention not only to Classical myths, but also material now associated with Norse mythology, Finnish mythology, and so forth. Western theories were also partly driven by Europeans' efforts to comprehend and control the cultures, stories and religions they were encountering through colonialism. These encounters included both extremely old texts such as the Sanskrit Rigveda and the Sumerian Epic of Gilgamesh, and current oral narratives such as mythologies of the indigenous peoples of the Americas or stories told in traditional African religions.
The intellectual context for nineteenth-century scholars was profoundly shaped by emerging ideas about evolution. These ideas included the recognition that many Eurasian languages—and therefore, conceivably, stories—were all descended from a lost common ancestor (the Indo-European language) which could rationally be reconstructed through the comparison of its descendant languages. They also included the idea that cultures might evolve in ways comparable to species. In general, 19th-century theories framed myth as a failed or obsolete mode of thought, often by interpreting myth as the primitive counterpart of modern science within a unilineal framework that imagined that human cultures are travelling, at different speeds, along a linear path of cultural development.
Nature
One of the dominant mythological theories of the latter 19th century was nature mythology, the foremost exponents of which included Max Müller and Edward Burnett Tylor. This theory posited that "primitive man" was primarily concerned with the natural world. It tended to interpret myths that seemed distasteful to European Victorians—such as tales about sex, incest, or cannibalism—as metaphors for natural phenomena like agricultural fertility. Unable to conceive impersonal natural laws, early humans tried to explain natural phenomena by attributing souls to inanimate objects, thus giving rise to animism.
According to Tylor, human thought evolved through stages, starting with mythological ideas and gradually progressing to scientific ideas. Müller also saw myth as originating from language, even calling myth a "disease of language". He speculated that myths arose due to the lack of abstract nouns and neuter gender in ancient languages. Anthropomorphic figures of speech, necessary in such languages, were eventually taken literally, leading to the idea that natural phenomena were in actuality conscious or divine. Not all scholars, not even all 19th-century scholars, accepted this view. Lucien Lévy-Bruhl claimed that "the primitive mentality is a condition of the human mind and not a stage in its historical development." Recent scholarship, noting the fundamental lack of evidence for "nature mythology" interpretations among people who actually circulated myths, has likewise abandoned the key ideas of "nature mythology".
Ritual
Frazer saw myths as a misinterpretation of magical rituals, which were themselves based on a mistaken idea of natural law. This idea was central to the "myth and ritual" school of thought. According to Frazer, humans begin with an unfounded belief in impersonal magical laws. When they realize applications of these laws do not work, they give up their belief in natural law in favor of a belief in personal gods controlling nature, thus giving rise to religious myths. Meanwhile, humans continue practicing formerly magical rituals through force of habit, reinterpreting them as reenactments of mythical events. Finally, humans come to realize nature follows natural laws, and they discover their true nature through science. Here again, science makes myth obsolete as humans progress "from magic through religion to science." Segal asserted that by pitting mythical thought against modern scientific thought, such theories imply modern humans must abandon myth.
20th century
The earlier 20th century saw major work developing psychoanalytical approaches to interpreting myth, led by Sigmund Freud, who, drawing inspiration from Classical myth, began developing the concept of the Oedipus complex in his 1899 The Interpretation of Dreams. Jung likewise tried to understand the psychology behind world myths. Jung asserted that all humans share certain innate unconscious psychological forces, which he called archetypes. He believed that similarities between the myths of different cultures reveal the existence of these universal archetypes.
The mid-20th century saw the influential development of a structuralist theory of mythology, led by Claude Lévi-Strauss. Lévi-Strauss argued that myths reflect patterns in the mind and interpreted those patterns more as fixed mental structures, specifically pairs of opposites (good/evil, compassionate/callous), rather than unconscious feelings or urges. Meanwhile, Bronislaw Malinowski developed analyses of myths focusing on their social functions in the real world. He is associated with the idea that myths such as origin stories might provide a "mythic charter"—a legitimisation—for cultural norms and social institutions. Thus, following the Structuralist Era (c. 1960s–1980s), the predominant anthropological and sociological approaches to myth increasingly treated myth as a form of narrative that can be studied, interpreted, and analyzed like ideology, history, and culture. In other words, myth is a form of understanding and telling stories that are connected to power, political structures, and political and economic interests.
These approaches contrast with approaches such as those of Joseph Campbell and Eliade, which hold that myth has some type of essential connection to ultimate sacred meanings that transcend cultural specifics. In particular, myth was studied in relation to history from diverse social sciences. Most of these studies share the assumption that history and myth are not distinct in the sense that history is factual, real, accurate, and truthful, while myth is the opposite.
In the 1950s, Barthes published a series of essays examining modern myths and the process of their creation in his book Mythologies, which stood as an early work in the emerging post-structuralist approach to mythology, which recognised myths' existence in the modern world and in popular culture.
The 20th century saw rapid secularization in Western culture. This made Western scholars more willing to analyse narratives in the Abrahamic religions as myths; theologians such as Rudolf Bultmann argued that a modern Christianity needed to demythologize; and other religious scholars embraced the idea that the mythical status of Abrahamic narratives was a legitimate feature of their importance. Thus, in his appendix to Myths, Dreams and Mysteries, and in The Myth of the Eternal Return, Eliade attributed modern humans' anxieties to their rejection of myths and the sense of the sacred.
21st century
Both in 19th-century research, which tended to see existing records of stories and folklore as imperfect fragments of partially lost myths, and in 20th-century structuralist work, which sought to identify underlying patterns and structures in often diverse versions of a given myth, there had been a tendency to synthesise sources to attempt to reconstruct what scholars supposed to be more perfect or underlying forms of myths. From the late 20th century, researchers influenced by postmodernism tended instead to argue that each account of a given myth has its own cultural significance and meaning, and argued that rather than representing degradation from a once more perfect form, myths are inherently plastic and variable. There is, consequently, no such thing as the 'original version' or 'original form' of a myth. One prominent example of this movement was A. K. Ramanujan's essay "Three Hundred Ramayanas".
Correspondingly, scholars challenged the precedence that had once been given to texts as a medium for mythology, arguing that other media, such as the visual arts or even landscape and place-naming, could be as or more important.

Myths, on this view, are not texts, but narrative materials (Erzählstoffe) that can be adapted in various media (such as epics, hymns, handbooks, movies, dances, etc.). In contrast to other academic approaches, which primarily focus on the (social) function of myths, hylistic myth research aims to understand myths and their nature out of themselves. As part of the Göttingen myth research, Annette and Christian Zgoll developed the method of hylistics (narrative material research) to extract mythical materials from their media and make transmedial comparison possible. The content of the medium is broken down into the smallest possible plot components (hylemes), which are listed in standardized form (so-called hyleme analysis). Inconsistencies in content can indicate stratification, i.e. the overlapping of several materials, narrative variants and edition layers within the same medial concretion. To a certain extent, this can also be used to reconstruct earlier and alternative variants of the same material that were in competition and/or were combined with each other. The juxtaposition of hyleme sequences enables the systematic comparison of different variants of the same material, or of several materials that are related or structurally similar to each other. In his overall presentation of the hundred-year history of myth research, the classical philologist and myth researcher Udo Reinhardt mentions Christian Zgoll's basic work Tractatus mythologicus as "the latest handbook on myth theory" with "outstanding significance" for modern myth research.
Modernity
Scholars in the field of cultural studies research how myth has worked itself into modern discourses. Mythological discourse can reach greater audiences than ever before via digital media. Various mythic elements appear in television, cinema and video games, as well as in other forms of popular culture.
Although myth was traditionally transmitted through the oral tradition on a small scale, the film industry has enabled filmmakers to transmit myths to large audiences via film. In Jungian psychology, myths are the expression of a culture or society's goals, fears, ambitions and dreams.
The basis of modern visual storytelling is rooted in the mythological tradition. Many contemporary films rely on ancient myths to construct narratives. The Walt Disney Company is well-known among cultural study scholars for "reinventing" traditional childhood myths. While few films are as obvious as Disney fairy tales, the plots of many films are based on the rough structure of myths. Mythological archetypes, such as the cautionary tale regarding the abuse of technology, battles between gods and creation stories, are often the subject of major film productions. These films are often created under the guise of cyberpunk action films, fantasy, dramas and apocalyptic tales.
21st-century films such as Clash of the Titans, Immortals and Thor continue the trend of using traditional mythology to frame modern plots. Authors use mythology as a basis for their books, such as Rick Riordan, whose Percy Jackson and the Olympians series is situated in a modern-day world where the Greek deities are manifest.
Scholars, particularly those within the field of fan studies, and fans of popular culture have also noted a connection between fan fiction and myth. Ika Willis identified three models of this: fan fiction as a reclaiming of popular stories from corporations, myth as a means of critiquing or dismantling hegemonic power, and myth as "a commons of story and a universal story world". Willis supports the third model, a universal story world, and argues that fanfiction can be seen as mythic due to its hyperseriality—a term invented by Sarah Iles Johnston to describe a hyperconnected universe in which characters and stories are interwoven. In an interview for the New York Times, Henry Jenkins stated that fanfiction 'is a way of the culture repairing the damage done in a system where contemporary myths are owned by corporations instead of owned by the folk.'
See also
List of mythologies
List of mythological objects
List of mythology books and sources
Magic and mythology
Mythopoeia, artificially constructed mythology, mainly for the purpose of storytelling
Population geography

Population geography relates to variations in the distribution, composition, migration, and growth of populations. Population geography involves demography in a geographical perspective. It focuses on the characteristics of population distributions that change in a spatial context. This often involves factors such as where populations are found and how the size and composition of these populations are regulated by the demographic processes of fertility, mortality, and migration.
Contributions to population geography are cross-disciplinary because geographical epistemologies related to environment, place and space have been developed at various times. Related disciplines include geography, demography, sociology, and economics.
History
Since its inception, population geography has taken at least three distinct but related forms, the most recent of which appears increasingly integrated with human geography in general. The earliest and most enduring form of population geography emerged in the 1950s, as part of spatial science. Pioneered by Glenn Trewartha, Wilbur Zelinsky, William A. V. Clark, and others in the United States, as well as Jacqueline Beaujeu-Garnier and Pierre George in France, it focused on the systematic study of the distribution of population as a whole and the spatial variation in population characteristics such as fertility and mortality.
Population geography defined itself as the systematic study of:
the simple description of the location of population numbers and characteristics
the explanation of the spatial configuration of these numbers and characteristics
the geographic analysis of population phenomena (the inter-relations among areal differences in population with those in all or certain other elements within the geographic study area).
Accordingly, it categorized populations as groups synonymous with political jurisdictions representing gender, religion, age, disability, generation, sexuality, and race, variables which go beyond the vital statistics of births, deaths, and marriages. Given the rapidly growing global population as well as the baby boom in affluent countries such as the United States, these geographers studied the relation between demographic growth, displacement, and access to resources at an international scale.
Topics in population geography
Demographic phenomena (natality, mortality, growth rates, etc.) through both space and time
Increases or decreases in population numbers
The movements and mobility of populations
Occupational structure
The way in which places in turn react to population phenomena, e.g. immigration
Research topics of other geographic sub-disciplines, such as settlement geography, also have a population geography dimension:
The grouping of people within settlements
The relationship between the geographical character of places and settlement patterns
All of the above are looked at over space and time. Population geography also studies human-environment interactions, including problems from those relationships, such as overpopulation, pollution, and others.
A few types of maps that show the spatial layout of population are choropleth, isoline, and dot maps.
See also
Geodemography
Geodemographic segmentation
Intelligentsia

The intelligentsia is a status class composed of the university-educated people of a society who engage in the complex mental labours by which they critique, shape, and lead in the politics, policies, and culture of their society; as such, the intelligentsia consists of scholars, academics, teachers, journalists, and literary writers.
Conceptually, the intelligentsia status class arose in the late 18th century, during the Partitions of Poland (1772–1795). Etymologically, the 19th-century Polish intellectual Bronisław Trentowski coined the term inteligencja (intellectuals) to identify and describe the university-educated and professionally active social stratum of the patriotic bourgeoisie; men and women whose intellectualism would provide moral and political leadership to Poland in opposing the cultural hegemony of the Russian Empire.
Before the Russian Revolution, the term identified and described the status class of university-educated people whose cultural capital (schooling, education, and intellectual enlightenment) allowed them to assume the moral initiative and the practical leadership required in Russian national, regional, and local politics.
In practice, the status and social function of the intelligentsia varied by society. In Eastern Europe, the intellectuals were at the periphery of their societies and thus were deprived of political influence and access to the effective levers of political power and of economic development. In Western Europe, the intellectuals were in the mainstream of their societies and thus exercised cultural and political influence that granted access to the power of government office, such as the Bildungsbürgertum, the cultured bourgeoisie of Germany, as well as the professionals of Great Britain.
Background
In a society, the intelligentsia is a status class of intellectuals whose social functions, politics, and national interests are (ostensibly) distinct from the functions of government, commerce, and the military. In Economy and Society: An Outline of Interpretive Sociology (1921), the political economist Max Weber applied the term intelligentsia in chronological and geographical frames of reference, such as "this Christian preoccupation with the formulation of dogmas was, in Antiquity, particularly influenced by the distinctive character of ‘intelligentsia’, which was the product of Greek education", thus the intelligentsia originated as a social class of educated people created for the greater benefit of society.
In the 19th and 20th centuries, the Polish word inteligencja and the sociological concept it named passed into European usage to describe the social class of men and women who are the intellectuals of the countries of central and eastern Europe: in Poland, the critical thinkers educated at university; in Russia, the nihilists who opposed traditional values in the name of reason and progress. In the late 20th century, the sociologist Pierre Bourdieu said that the intelligentsia has two types of workers: (i) intellectual workers who create knowledge (practical and theoretic) and (ii) intellectual workers who create cultural capital. Sociologically, the Polish inteligencja has equivalent terms in French and German usage.
European history
The intelligentsia existed as a social stratum in European societies before the term was coined in 19th-century Poland to identify the intellectual people whose professions placed them outside the traditional workplaces and labours of the town-and-country social classes (royalty, aristocracy, bourgeoisie) of a monarchy; thus the inteligencja are a social class native to the city. In their functions as a status class, the intellectuals realised the cultural development of cities, the dissemination of printed knowledge (literature, textbooks, newspapers), and the economic development of housing for rent (the tenement house) for the teacher, the journalist, and the civil servant.
In On Love of the Fatherland (1844), the Polish philosopher Karol Libelt used the term inteligencja for the status class composed of scholars, teachers, lawyers, engineers, et al., the educated people of society who provide the moral leadership required to resolve the problems of society; hence, the social function of the intelligentsia is to "guide for the reason of their higher enlightenment."
In the 1860s, the journalist Pyotr Boborykin popularised the term intelligentsia to identify and describe the Russian social stratum of people educated at university who engage in the intellectual occupations (law, medicine, engineering, the arts) and who produce the culture and the dominant ideology by which society functions. According to the theory of Dr. Vitaly Tepikin, the sociological traits usual to the intelligentsia of a society are:
advanced-for-their-time moral ideals, moral sensitivity to the neighbour, tact and gentleness in expression;
productive mental work and continual self-education;
patriotism based on faith in the people, and inexhaustible, selfless love for the small and the big motherlands;
inherent creativity in every stratum of the intelligentsia, and a tendency to asceticism;
an independent personality who speaks freely;
a critical attitude towards the government, and public condemnation of injustice;
loyalty to principle by conscience, grace under pressure, and tendency to self-denial;
an ambiguous perception of reality, which leads to political fickleness that sometimes becomes conservatism;
a sense of resentment, because politics and policies went unrealised; and withdrawal from the public sphere to the in-group;
quarrels about art, ideas, and ideology, which divide the subgroups who compose the intelligentsia.
In The Rise of the Intelligentsia, 1750–1831 (2008) Maciej Janowski said that the Polish intelligentsia were the think tank of the State, intellectual servants whose progressive social and economic policies decreased the social backwardness (illiteracy) of the Polish people, and also decreased Russian political repression in partitioned Poland.
Poland
19th century
In 1844 Poland, the term inteligencja, identifying the intellectuals of a society, was first used by the philosopher Karol Libelt, who described it as a status class of people characterised by intellect and Polish nationalism; qualities of mind, character, and spirit that made them natural leaders of the modern Polish nation. The intelligentsia were aware of their social status and of their duties to society: educating the youth with the nationalist objective to restore the Republic of Poland; preserving the Polish language; and love of the Fatherland.
Nonetheless, the writers Stanisław Brzozowski and Tadeusz Boy-Żeleński criticised Libelt's ideological and messianic representation of a Polish republic, because it originated from the social traditionalism and the reactionary conservatism that pervaded Polish culture and impeded socio-economic progress. Consequent to the Imperial Prussian, Austrian, and Russian Partitions of Poland, the imposition of Tsarist cultural hegemony caused many of the political and cultural élites to participate in the Great Emigration (1831–70).
Second World War
After the invasion of Poland on 1 September 1939, the Nazis launched the extermination of the Polish intelligentsia, by way of the military operations of the Special Prosecution Book-Poland, the German AB-Aktion in Poland, the Intelligenzaktion, and the Intelligenzaktion Pommern. In eastern Poland, the Soviet Union proceeded with the extermination of the Polish intelligentsia with operations such as the Katyn massacre (April–May 1940), during which university professors, physicians, lawyers, engineers, teachers, military officers, policemen, writers and journalists were murdered.
Russia
Imperial era
The Russian intelligentsia also was a mixture of messianism and intellectual élitism, which the philosopher Isaiah Berlin described as follows: "The phenomenon, itself, with its historical and literally revolutionary consequences, is, I suppose, the largest, single Russian contribution to social change in the world. The concept of intelligentsia must not be confused with the notion of intellectuals. Its members thought of themselves as united, by something more than mere interest in ideas; they conceived themselves as being a dedicated order, almost a secular priesthood, devoted to the spreading of a specific attitude to life."
The Idea of Progress, which originated in Western Europe during the Age of Enlightenment in the 18th century, became the principal concern of the intelligentsia by the mid-19th century; thus, progressive social movements, such as the Narodniks, mostly consisted of intellectuals. The Russian philosopher Sergei Bulgakov said that the Russian intelligentsia was the creation of Peter the Great, that they were the "window to Europe through which the Western air comes to us, vivifying and toxic at the same time." Moreover, Bulgakov also said that the literary critic of Westernization, Vissarion Belinsky, was the spiritual father of the Russian intelligentsia.
In 1860, there were 20,000 professionals in Russia and 85,000 by 1900.
Originally composed of educated nobles, the intelligentsia became dominated by raznochintsy (classless people) after 1861. In 1833, 78.9 per cent of secondary-school students were children of nobles and bureaucrats; by 1885, they were 49.1 per cent of such students. The proportion of commoners increased from 19.0 to 43.8 per cent, and the remaining percentage were the children of priests. In fear of an educated proletariat, Tsar Nicholas I limited the number of university students to 3,000 per year, yet there were 25,000 students by 1894. Similarly, the number of periodicals increased from 15 in 1855 to 140 in 1885. The "third element" were professionals hired by zemstva. By 1900, there were 47,000 of them, most of them liberal radicals.
Although Tsar Peter the Great introduced the Idea of Progress to Russia, by the 19th century, the Tsars did not recognize "progress" as a legitimate aim of the state, to the degree that Nicholas II said "How repulsive I find that word" and wished it removed from the Russian language.
Bolshevik perspective
In Russia, the Bolsheviks did not consider the status class of the intelligentsia to be a true social class, as defined in Marxist philosophy. In that time, the Bolsheviks used the Russian word prosloyka (stratum) to identify and define the intelligentsia as a separate layer without an inherent class character.
In the creation of post-monarchic Russia, Lenin was firmly critical of the class character of the intelligentsia, asserting that the growth of "the intellectual forces of the workers and the peasants" would depose the "bourgeoisie and their accomplices, intelligents, lackeys of capital who think that they are brain of the nation. In fact it is not brain, but dung". (На деле это не мозг, а говно)
The Russian Revolution of 1917 divided the intelligentsia and the social classes of Tsarist Russia. Some Russians emigrated, the political reactionaries joined the right-wing White movement for counter-revolution, some became Bolsheviks, and some remained in Russia and participated in the political system of the Soviet Union. In reorganizing Russian society, the Bolsheviks deemed the non-Bolshevik intelligentsia class enemies and expelled them from society, by way of deportation on Philosophers' ships, forced labor in the gulag, and summary execution. The members of the Tsarist-era intelligentsia who remained in Bolshevik Russia (the USSR) were proletarianized. Although the Bolsheviks recognized the managerial importance of the intelligentsia to the future of Soviet Russia, the bourgeois origin of this stratum gave reason for distrust of their ideological commitment to Marxist philosophy and Bolshevik societal control.
Soviet Union
In the late Soviet Union the term "intelligentsia" acquired a formal definition of mental and cultural workers. There were subcategories of "scientific-technical intelligentsia" (научно-техническая интеллигенция) and "creative intelligentsia" (творческая интеллигенция).
Between 1917 and 1941, there was a massive increase in the number of engineering graduates: from 15,000 to over 250,000.
Post-Soviet period
In the post-Soviet period, the members of the former Soviet intelligentsia have displayed diverging attitudes towards the communist government. While the older generation of intelligentsia has attempted to frame themselves as victims, the younger generation, who were in their 30s when the Soviet Union collapsed, has allocated less space to the repressive experience in their self-narratives. Since the collapse of the Soviet Union, the popularity and influence of the intelligentsia has significantly declined. Therefore, it is typical for the post-Soviet intelligentsia to feel nostalgic for the last years of the Soviet Union (perestroika), which they often regard as the golden age of the intelligentsia.
Vladimir Putin has expressed his view on the social duty of intelligentsia in modern Russia.
We should all be aware of the fact that when revolutionary—not evolutionary—changes come, things can get even worse. The intelligentsia should be aware of this. And it is the intelligentsia specifically that should keep this in mind and prevent society from radical steps and revolutions of all kinds. We've had enough of it. We've seen so many revolutions and wars. We need decades of calm and harmonious development.
Mass intelligentsia
In the 20th century, from the status class term intelligentsia, sociologists derived the term mass intelligentsia to describe the populations of educated adults, with discretionary income, who pursue intellectual interests by way of book clubs and cultural associations, etc. The sociological term was popularised by the writer Melvyn Bragg, who said that mass intelligentsia conceptually explains the popularity of book clubs and literary festivals that otherwise would have been of limited intellectual interest to most people from the middle class and from the working class.
In the book Campus Power Struggle (1970), the sociologist Richard Flacks addressed the concept of mass intelligentsia.
Related concepts
The concept of free-floating intelligentsia, coined by Alfred Weber and elaborated by Karl Mannheim, closely relates to the intelligentsia. It refers to an intellectual class that operates independently of social class constraints, allowing for a critical and unbiased perspective. This intellectual autonomy is a defining characteristic of the intelligentsia.
See also
Academia
Anti-intellectualism
Creative class
Ilustrado (Philippines)
Obrazovanshchina
Organic intellectual
Further reading
Boborykin, P.D. Russian Intelligentsia In: Russian Thought, 1904, # 12 (In Russian; Боборыкин П.Д. Русская интеллигенция// Русская мысль. 1904. No.12;)
Zhukovsky V. A. From the Diaries of Years 1827–1840, In: Our Heritage, Moscow, #32, 1994. (In Russian; Жуковский В.А. Из дневников 1827–1840 гг. // Наше наследие. М., 1994. No.32.)
The record dated 2 February 1836 says: "Через три часа после этого общего бедствия ... осветился великолепный Энгельгардтов дом, и к нему потянулись кареты, все наполненные лучшим петербургским дворянством, тем, которые у нас представляют всю русскую европейскую интеллигенцию" ("Three hours after this common disaster ... the magnificent Engelhardt's house was lit up, and coaches began arriving, filled with the best Petersburg dvoryanstvo, the ones who represent here all the Russian European intelligentsia.") The casual, i.e., non-philosophical and non-literary, context suggests that the word was in common circulation.
Polymath

A polymath (from the Greek polymathēs, 'having learned much') or polyhistor is an individual whose knowledge spans many different subjects, known to draw on complex bodies of knowledge to solve specific problems.
Embodying a basic tenet of Renaissance humanism that humans are limitless in their capacity for development, the concept led to the notion that people should embrace all knowledge and develop their capacities as fully as possible. This is expressed in the term Renaissance man, often applied to the gifted people of that age who sought to develop their abilities in all areas of accomplishment: intellectual, artistic, social, physical, and spiritual.
Etymology
In Western Europe, the first work to use the term polymathy in its title was published in 1603 by Johann von Wowern, a Hamburg philosopher. Von Wowern defined polymathy as "knowledge of various matters, drawn from all kinds of studies ... ranging freely through all the fields of the disciplines, as far as the human mind, with unwearied industry, is able to pursue them". Von Wowern lists erudition, literature, philology, philomathy, and polyhistory as synonyms.
The earliest recorded use of the term in the English language is from 1624, in the second edition of The Anatomy of Melancholy by Robert Burton; the form polymathist is slightly older, first appearing in the Diatribae upon the first part of the late History of Tithes of Richard Montagu in 1621. Use in English of the similar term polyhistor dates from the late 16th century.
Renaissance man
The term "Renaissance man" was first recorded in written English in the early 20th century. It is used to refer to great thinkers living before, during, or after the Renaissance. Leonardo da Vinci has often been described as the archetype of the Renaissance man, a man of "unquenchable curiosity" and "feverishly inventive imagination". Many notable polymaths lived during the Renaissance period, a cultural movement that spanned roughly the 14th through to the 17th century that began in Italy in the Late Middle Ages and later spread to the rest of Europe. These polymaths had a rounded approach to education that reflected the ideals of the humanists of the time. A gentleman or courtier of that era was expected to speak several languages, play a musical instrument, write poetry, and so on, thus fulfilling the Renaissance ideal.
The idea of a universal education was essential to achieving polymath ability, hence the word university was used to describe a seat of learning. However, the original Latin word universitas refers in general to "a number of persons associated into one body, a society, company, community, guild, corporation, etc". At this time, universities did not specialize in specific areas, but rather trained students in a broad array of science, philosophy, and theology. This universal education gave them a grounding from which they could continue into apprenticeship toward becoming a master of a specific field.
When someone is called a "Renaissance man" today, it is meant that rather than simply having broad interests or superficial knowledge in several fields, the individual possesses a more profound knowledge and a proficiency, or even an expertise, in at least some of those fields. Some dictionaries use the term "Renaissance man" to describe someone with many interests or talents, while others give a meaning restricted to the Renaissance and more closely related to Renaissance ideals.
In academia
Robert Root-Bernstein and colleagues
Robert Root-Bernstein is considered the person principally responsible for rekindling interest in polymathy in the scientific community. His works emphasize the contrast between the polymath and two other types: the specialist and the dilettante. The specialist demonstrates depth but lacks breadth of knowledge. The dilettante demonstrates superficial breadth but tends to acquire skills merely "for their own sake without regard to understanding the broader applications or implications and without integrating it". Conversely, the polymath is a person with a level of expertise who is able to "put a significant amount of time and effort into their avocations and find ways to use their multiple interests to inform their vocations".
A key point in the work of Root-Bernstein and colleagues is the argument in favor of the universality of the creative process. That is, although creative products, such as a painting, a mathematical model or a poem, can be domain-specific, at the level of the creative process, the mental tools that lead to the generation of creative ideas are the same, be it in the arts or science. These mental tools are sometimes called intuitive tools of thinking. It is therefore not surprising that many of the most innovative scientists have serious hobbies or interests in artistic activities, and that some of the most innovative artists have an interest or hobbies in the sciences.
Root-Bernstein and colleagues' research is an important counterpoint to the claim by some psychologists that creativity is a domain-specific phenomenon. Through their research, Root-Bernstein and colleagues conclude that there are certain comprehensive thinking skills and tools that cross the barrier of different domains and can foster creative thinking: "[creativity researchers] who discuss integrating ideas from diverse fields as the basis of creative giftedness ask not 'who is creative?' but 'what is the basis of creative thinking?' From the polymathy perspective, giftedness is the ability to combine disparate (or even apparently contradictory) ideas, sets of problems, skills, talents, and knowledge in novel and useful ways. Polymathy is therefore the main source of any individual's creative potential". In "Life Stages of Creativity", Robert and Michèle Root-Bernstein suggest six typologies of creative life stages. These typologies are based on real creative production records first published by Root-Bernstein, Bernstein, and Garnier (1993).
Type 1 represents people who specialize in developing one major talent early in life (e.g., prodigies) and successfully exploit that talent exclusively for the rest of their lives.
Type 2 individuals explore a range of different creative activities (e.g., through worldplay or a variety of hobbies) and then settle on exploiting one of these for the rest of their lives.
Type 3 people are polymathic from the outset and manage to juggle multiple careers simultaneously so that their creativity pattern is constantly varied.
Type 4 creators are recognized early for one major talent (e.g., math or music) but go on to explore additional creative outlets, diversifying their productivity with age.
Type 5 creators devote themselves serially to one creative field after another.
Type 6 people develop diversified creative skills early and then, like Type 5 individuals, explore these serially, one at a time.
Finally, his studies suggest that understanding polymathy and learning from polymathic exemplars can help structure a new model of education that better promotes creativity and innovation: "we must focus education on principles, methods, and skills that will serve them [students] in learning and creating across many disciplines, multiple careers, and succeeding life stages".
Peter Burke
Peter Burke, Professor Emeritus of Cultural History and Fellow of Emmanuel College at Cambridge, discussed the theme of polymathy in some of his works. He has presented a comprehensive historical overview of the ascension and decline of the polymath as, what he calls, an "intellectual species".
He observes that in ancient and medieval times, scholars did not have to specialize. However, from the 17th century on, the rapid rise of new knowledge in the Western world—both from the systematic investigation of the natural world and from the flow of information coming from other parts of the world—was making it increasingly difficult for individual scholars to master as many disciplines as before. Thus, an intellectual retreat of the polymath species occurred: "from knowledge in every [academic] field to knowledge in several fields, and from making original contributions in many fields to a more passive consumption of what has been contributed by others".
Given this change in the intellectual climate, it has since then been more common to find "passive polymaths", who consume knowledge in various domains but make their reputation in one single discipline, than "proper polymaths", who—through a feat of "intellectual heroism"—manage to make serious contributions to several disciplines. However, Burke warns that in the age of specialization, polymathic people are more necessary than ever, both for synthesis—to paint the big picture—and for analysis. He says: "It takes a polymath to 'mind the gap' and draw attention to the knowledges that may otherwise disappear into the spaces between disciplines, as they are currently defined and organized".
Bharath Sriraman
Bharath Sriraman, of the University of Montana, also investigated the role of polymathy in education. He posits that an ideal education should nurture talent in the classroom and enable individuals to pursue multiple fields of research and appreciate both the aesthetic and structural/scientific connections between mathematics, arts and the sciences.
In 2009, Sriraman published a paper reporting a 3-year study with 120 pre-service mathematics teachers and derived several implications for mathematics pre-service education as well as interdisciplinary education. He utilized a hermeneutic-phenomenological approach to recreate the emotions, voices and struggles of students as they tried to unravel Russell's paradox presented in its linguistic form. He found that those more engaged in solving the paradox also displayed more polymathic thinking traits. He concludes by suggesting that fostering polymathy in the classroom may help students change beliefs, discover structures and open new avenues for interdisciplinary pedagogy.
Michael Araki
Michael Araki is a professor at the UNSW Business School at the University of New South Wales, Australia. He sought to formalize in a general model how the development of polymathy takes place. His Developmental Model of Polymathy (DMP) is presented in a 2018 article with two main objectives:
organize the elements involved in the process of polymathy development into a structure of relationships consistent with the approach to polymathy as a life project; and
provide an articulation with other well-developed constructs, theories, and models, especially from the fields of giftedness and education.
The model, designed as a structural model, has five major components:
polymathic antecedents
polymathic mediators
polymathic achievements
intrapersonal moderators
environmental moderators
Regarding the definition of the term polymathy, the researcher, through an analysis of the extant literature, concluded that although there are a multitude of perspectives on polymathy, most of them ascertain that polymathy entails three core elements: breadth, depth and integration.
Breadth refers to comprehensiveness, extension and diversity of knowledge. It is contrasted with the idea of narrowness, specialization, and the restriction of one's expertise to a limited domain. The possession of comprehensive knowledge in very disparate areas is a hallmark of the greatest polymaths. Depth refers to the vertical accumulation of knowledge and the degree of elaboration or sophistication of one's conceptual networks. Like Robert Root-Bernstein, Araki uses the concept of dilettantism as a contrast to the idea of the profound learning that polymathy entails.
Integration, although not explicit in most definitions of polymathy, is also a core component of polymathy according to the author. Integration involves the capacity of connecting, articulating, concatenating or synthesizing different conceptual networks, which in non-polymathic persons might be segregated. In addition, integration can happen at the personality level, when the person is able to integrate their diverse activities in a synergic whole, which can also mean a psychic (motivational, emotional and cognitive) integration.
Finally, the author also suggests that, via a psychoeconomic approach, polymathy can be seen as a "life project". That is, depending on a person's temperament, endowments, personality, social situation and opportunities (or lack thereof), the project of a polymathic self-formation may present itself to the person as more or less alluring and more or less feasible to be pursued.
Kaufman, Beghetto and colleagues
James C. Kaufman, from the Neag School of Education at the University of Connecticut, and Ronald A. Beghetto, from the same university, investigated the possibility that everyone could have the potential for polymathy as well as the issue of the domain-generality or domain-specificity of creativity.
Based on their earlier four-c model of creativity, Beghetto and Kaufman proposed a typology of polymathy, ranging from the ubiquitous mini-c polymathy to the eminent but rare Big-C polymathy, as well as a model with some requirements for a person (polymath or not) to be able to reach the highest levels of creative accomplishment. They account for three general requirements—intelligence, motivation to be creative, and an environment that allows creative expression—that are needed for any attempt at creativity to succeed. Then, depending on the domain of choice, more specific abilities will be required. The more that one's abilities and interests match the requirements of a domain, the better. While some will develop their specific skills and motivations for specific domains, polymathic people will display intrinsic motivation (and the ability) to pursue a variety of subject matters across different domains.
Regarding the interplay of polymathy and education, they suggest that rather than asking whether every student has multicreative potential, educators might more actively nurture the multicreative potential of their students. As an example, the authors cite that teachers should encourage students to make connections across disciplines, use different forms of media to express their reasoning/understanding (e.g., drawings, movies, and other forms of visual media).
Waqas Ahmed
In his 2018 book The Polymath, British author Waqas Ahmed defines polymaths as those who have made significant contributions to at least three different fields. Rather than seeing polymaths as exceptionally gifted, he argues that every human being has the potential to become one: that people naturally have multiple interests and talents. He contrasts this polymathic nature against what he calls "the cult of specialisation". For example, education systems stifle this nature by forcing learners to specialise in narrow topics. The book argues that specialisation encouraged by the production lines of the Industrial Revolution is counter-productive both to the individual and wider society. It suggests that the complex problems of the 21st century need the versatility, creativity, and broad perspectives characteristic of polymaths.
For individuals, Ahmed says, specialisation is dehumanising and stifles their full range of expression whereas polymathy "is a powerful means to social and intellectual emancipation" which enables a more fulfilling life. In terms of social progress, he argues that answers to specific problems often come from combining knowledge and skills from multiple areas, and that many important problems are multi-dimensional in nature and cannot be fully understood through one specialism. Rather than interpreting polymathy as a mix of occupations or of intellectual interests, Ahmed urges a breaking of the "thinker"/"doer" dichotomy and the art/science dichotomy. He argues that an orientation towards action and towards thinking support each other, and that human beings flourish by pursuing a diversity of experiences as well as a diversity of knowledge. He observes that successful people in many fields have cited hobbies and other "peripheral" activities as supplying skills or insights that helped them succeed.
Ahmed examines evidence suggesting that developing multiple talents and perspectives is helpful for success in a highly specialised field. He cites a study of Nobel Prize-winning scientists which found them 25 times more likely to sing, dance, or act than average scientists. Another study found that children scored higher in IQ tests after having drum lessons, and he uses such research to argue that diversity of domains can enhance a person's general intelligence.
Ahmed cites many historical claims for the advantages of polymathy. Some of these are about general intellectual abilities that polymaths apply across multiple domains. For example, Aristotle wrote that full understanding of a topic requires, in addition to subject knowledge, a general critical thinking ability that can assess how that knowledge was arrived at. Another advantage of a polymathic mindset is in the application of multiple approaches to understanding a single issue. Ahmed cites biologist E. O. Wilson's view that reality is approached not by a single academic discipline but via a consilience between them. One argument for studying multiple approaches is that it leads to open-mindedness. Within any one perspective, a question may seem to have a straightforward, settled answer. Someone aware of different, contrasting answers will be more open-minded and aware of the limitations of their own knowledge. The importance of recognising these limitations is a theme that Ahmed finds in many thinkers, including Confucius, Ali ibn Abi Talib, and Nicholas of Cusa. He calls it "the essential mark of the polymath." A further argument for multiple approaches is that a polymath does not see diverse approaches as diverse, because they see connections where other people see differences. For example, Leonardo da Vinci advanced multiple fields by applying mathematical principles to each.
Related terms
Aside from Renaissance man, similar terms in use are homo universalis (Latin) and uomo universale (Italian), both of which translate to 'universal man'. The related term generalist—contrasted with a specialist—is used to describe a person with a general approach to knowledge.
The term universal genius or versatile genius is also used, with Leonardo da Vinci as the prime example again. The term is used especially for people who made lasting contributions in at least one of the fields in which they were actively involved, and who took a universal approach.
When a person is described as having encyclopedic knowledge, they exhibit a vast scope of knowledge. However, this designation may be anachronistic in the case of persons such as Eratosthenes, whose reputation for having encyclopedic knowledge predates the existence of any encyclopedic object.
See also
Amateur
Competent man
Creative class
Genius
Interdisciplinarity
Jack of all trades, master of none
Multipotentiality
Opsimath
Philomath
Polyglotism
Polygraph (author)
Polymatheia – a muse of knowledge in Greek mythology
References and notes
Further reading
Edmonds, David (August 2017). "Does the world need polymaths?", BBC.
Frost, Martin, "Polymath: A Renaissance Man".
Grafton, A., "The World of the Polyhistors: Humanism and Encyclopedism", Central European History, 18: 31–47 (1985).
Jaumann, Herbert, "Was ist ein Polyhistor? Gehversuche auf einem verlassenen Terrain", Studia Leibnitiana, 22: 76–89 (1990).
Mirchandani, Vinnie, The New Polymath: Profiles in Compound-Technology Innovations, John Wiley & Sons (2010).
Twigger, Robert, "Anyone can be a Polymath", Aeon Essays.
Waquet, F. (ed.), Mapping the World of Learning: The 'Polyhistor' of Daniel Georg Morhof (2000). ISBN 978-3447043991.
Brown, Vincent, Polymath-Info Portal.
Age of Enlightenment
Giftedness
Renaissance
Comparative history
Comparative history is the comparison of different societies which existed during the same time period or shared similar cultural conditions.
The comparative history of societies emerged as an important specialty among intellectuals in the Enlightenment in the 18th century, as typified by Montesquieu, Voltaire, Adam Smith, and others. Sociologists and economists in the 19th century often explored comparative history, as exemplified by Alexis de Tocqueville, Karl Marx, and Max Weber.
In the first half of the 20th century, a large reading public followed the comparative histories of (German) Oswald Spengler, (Russian-American) Pitirim Sorokin, and (British) Arnold J. Toynbee. Since the 1950s, however, comparative history has faded from the public view, and is now the domain of specialized scholars working independently.
Besides the people mentioned above, recent exemplars of comparative history include American historians Herbert E. Bolton and Carroll Quigley, and British historian Geoffrey Barraclough. Several sociologists are also prominent in this field, including Barrington Moore, S. N. Eisenstadt, Seymour Martin Lipset, Charles Tilly, Stephen O. Murray, and Michael Mann.
Historians generally accept the comparison of particular institutions (banking, women's rights, ethnic identities) in different societies, but since the hostile reaction to Toynbee in the 1950s, generally do not pay much attention to sweeping comparative studies that cover wide swaths of the world over many centuries.
Notable topics
Comparative studies of the Roman and Han empires
The ancient Chinese and Roman Empires are often compared due to their synchronous and analogous developments from warring states into universal empires.
Atlantic history
Atlantic history studies the Atlantic World in the early modern period. It is premised on the idea that, following the rise of sustained European contact with the New World in the 16th century, the continents that bordered the Atlantic Ocean—the Americas, Europe, and Africa—constituted a regional system or common sphere of economic and cultural exchange that can be studied as a totality.
Its theme is the complex interaction between Europe (especially Britain and France) and the New World colonies. It encompasses a wide range of demographic, social, economic, political, legal, military, intellectual and religious topics treated in comparative fashion by looking at both sides of the Atlantic. Religious revivals characterized Britain and Germany, as did the First Great Awakening in the American colonies. Migration and race/slavery have been important topics.
Although a relatively new field, it has stimulated numerous studies of comparative history especially regarding ideas, colonialism, slavery, economic history, and political revolutions in the 18th century in North and South America, Europe and Africa.
Modernization models
Beginning with German and French sociologists of the late 19th century, modernization models have been developed to show the sequence of transitions from traditional to modern societies, and indeed to postmodern societies. This research flourished especially in the 1960s, with Princeton University setting up seminars that compared the modernization process in China, Japan, Russia and other nations.
Modernization theory and history have been explicitly used as guides for countries eager to develop rapidly, such as China. Indeed, modernization has been proposed as the most useful framework for world history in China, because as one of the developing countries that started late, "China's modernization has to be based on the experiences and lessons of other countries."
Comparative politics
The field of comparative history often overlaps with the subdivision of political science known as comparative politics. This includes "transnational" history and sometimes also international history.
Comparative history of minorities
Mordechai Zaken compared two non-Muslim minorities in Kurdistan, the Jews and the Assyrian Christians, in their relationships with their Muslim rulers and tribal chieftains during the 19th and 20th centuries. His comparative study gave a much clearer picture of the status of these minorities and their relationships with the ruling elites in and around Kurdistan. His PhD dissertation and the book based upon it have been widely circulated and translated into the local languages of Kurdistan and the surrounding region.
Military history
Military historians have often compared the organization, tactical and strategic ideas, leadership, and national support of the militaries of different nations.
Historians have emphasized the need to stretch beyond battles and generals to do more comparative analysis.
Slavery
The study of slavery in comparative perspective, ranging from the ancient world to the 19th century, has attracted numerous historians since the 1960s.
Economics
Much of economic history in recent years has been done by model-building economists who show occasional interest in comparative data analysis. Considerable work has been done by historians on the "Great Divergence" debate launched by Kenneth Pomeranz in 2000. At issue is why Europe moved forward rapidly after 1700 while Asia did not. More traditional research methodologies have been combined with econometrics, for example in the comparison of merchant guilds in Europe.
Quantitative methods
Since the work of Sorokin, scholars in comparative history, especially sociologists and political scientists, have often used quantitative and statistical data to compare multiple societies on multiple dimensions. There have been some efforts to build mathematical dynamic models, but these have not entered mainstream comparative history.
See also
Annales School
Comparative historical research
Comparative Studies in Society and History, a scholarly journal
Universal history
World history
Footnotes
Bibliography
Historiography
Barraclough, Geoffrey. Main Trends in History. Holmes & Meier, 1979.
Cohen, Deborah, and Maura O'Connor. Comparison and History: Europe in Cross-National Perspective. Routledge, 2004.
Cooper, Frederick. "Race, Ideology, and the Perils of Comparative History," American Historical Review, 101:4 (October 1996), 1122–1138.
Detienne, Marcel. Comparing the Incomparable. Stanford University Press, 2008.
Frederickson, George M. "From Exceptionalism to Variability: Recent Developments in Cross-National Comparative History." Journal of American History, 82:2 (September 1995), 587–604.
Guarneri, Carl. "Reconsidering C. Vann Woodward's The Comparative Approach to American History," Reviews in American History, 23:3 (September 1995), 552–563.
Halperin, Charles J., et al. "AHR Forum: Comparative History in Theory and Practice: A Discussion." American Historical Review, 87:1 (February 1982), 123–143.
Haupt, Heinz-Gerhard. "Comparative History," in Neil J. Smelser et al. eds. International Encyclopedia of Social and Behavioral Sciences (2001) 4:2397–2403.
Hill, Alette Olin, and Boyd H. Hill. "AHR Forum: Marc Bloch and Comparative History." American Historical Review, 85:4 (October 1980), 828–846.
Hroch, Miroslav. Comparative Studies in Modern European History Ashgate Variorum 2007
Iriye, Akira. "The Internationalization of History," American Historical Review, 94:1 (February 1989), 1–10.
Mazlish, Bruce. Conceptualizing Global History. Westview Press, 1993.
McGerr, Michael. "The Price of the 'New Transnational History.'" American Historical Review, 96:4 (October 1991), 1056–1067.
Magnaghi, Russell M. Herbert E. Bolton and the Historiography of the Americas. Greenwood Press, 1998.
Meritt, Richard L. and Stein Rokkan, editors. Comparing Nations: The Use of Quantitative Data in Cross-National Research. Yale University Press, 1966.
Rusen, Jorn. "Some Theoretical Approaches to Intercultural Comparative Historiography." History and Theory 35:4 (December 1996), 5–22.
Skocpol, Theda, and Margaret Somers. "The uses of comparative history in macrosocial inquiry." Comparative studies in society and history (1980) 22#2 pp: 174–197.
Stoler, Ann L. "Tense and Tender Ties: The Politics of Comparison in North American History and (Post) Colonial Studies." Journal of American History (December 2001), 831–864.
Tipps, Dean. "Modernization Theory and the Comparative Study of Societies: A Critical Perspective." Comparative Studies in Society and History 15:2 (1973), 199–226.
Welskopp, Thomas: Comparative History, European History Online, Mainz: Institute of European History, 2010, retrieved: June 14, 2012.
Comparative and world histories
Bayly, C. A. The Birth of the Modern World, 1780-1914 (2003)
Black, Cyril Edwin. The dynamics of modernization: a study in comparative history (Harper & Row, 1966)
Doyle, Michael W. Empires. Cornell University Press, 1986.
Eisenstadt, S. N. The Political Systems of Empires (1968).
Gombrich, Ernst. "A Little History of the World" (1936 & 1995)
Kennedy, Paul. The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000 (Random House, 1987)
Klooster, Wim. Revolutions in the Atlantic World: A Comparative History (2009)
Lieberman, Victor. Strange Parallels: Volume 2, Mainland Mirrors: Europe, Japan, China, South Asia, and the Islands: Southeast Asia in Global Context, c.800-1830 (2009)
Mann, Michael. The sources of social power (1993)
McNeill, William H. The Rise of the West: A History of the Human Community (1963).
Osterhammel, Jürgen. The Transformation of the World: A Global History of the Nineteenth Century (2014)
Palmer, Robert R. Age of the Democratic Revolution: A Political History of Europe and America, 1760-1800 (2 vol 1966)
Rosenberg, Emily, et al. eds. A World Connecting: 1870-1945 (2012)
Smith, S.A. Revolution and the People in Russia and China: A Comparative History (2009)
Sorokin, Pitirim A. Social Philosophies of an Age of Crisis (1950).
Sorokin, Pitirim A. Social and Cultural Dynamics (4 vol 1932; one-vol. edn., 1959).
Spengler, Oswald. The Decline of the West, 2 vol (1918).
Tilly, Charles. Big Structures, Large Processes, Huge Comparisons. Russell Sage Foundation, 1984.
Toynbee, Arnold J. A Study of History, 12 vol (1934–61); 2-vol abridgment (1957).
Voegelin, Eric. Order and History, 5 vol (1956–75)
Woodward, C. Vann, ed. The Comparative Approach to American History (1968)
Comparative historical research
Fields of history
Historiography
Foucauldian discourse analysis
Foucauldian discourse analysis is a form of discourse analysis, focusing on power relationships in society as expressed through language and practices, and based on the theories of Michel Foucault.
Overview
Subject of analysis
Besides attending to the meaning of a given discourse, this approach is distinguished by its stress on power relationships, which are expressed through language and behaviour, and on the relationship between language and power. This form of analysis developed out of Foucault's genealogical work, where power was linked to the formation of discourse within specific historical periods. Some versions of this method stress the genealogical application of discourse analysis to illustrate how discourse is produced to govern social groups. The method analyses how the social world, expressed through language, is affected by various sources of power. As such, this approach is close to social constructivism, as the researcher tries to understand how our society is being shaped (or constructed) by language, which in turn reflects existing power relationships. The analysis attempts to understand how individuals view the world, and studies categorizations, personal and institutional relationships, ideology, and politics.
The approach was inspired by the work of both Michel Foucault and Jacques Derrida, and by critical theory.
Foucauldian discourse analysis, like much of critical theory, is often used in politically oriented studies. It is preferred by scholars who criticize more traditional forms of discourse analysis for failing to account for the political implications of discourse. On this view, political power is held by those who are regarded as more knowledgeable, and who are therefore seen as more legitimate in exercising control over others, in ways both blatant and invisible.
Process
Kendall and Wickham outline five steps in using "Foucauldian discourse analysis". The first step is a simple recognition that discourse is a body of statements that are organized in a regular and systematic way. The subsequent four steps are based on the identification of rules on:
how those statements are created;
what can be said (written) and what cannot;
how spaces in which new statements can be made are created;
making practices material and discursive at the same time.
Areas of study
Studies employing Foucauldian discourse analysis might look at how figures in authority use language to express their dominance, and request obedience and respect from those subordinate to them. The disciplinary interaction between authorities and their followers emphasizes the power dynamic found within the relationship. In a specific example, a study may look at the language used by teachers towards students, or by military officers towards conscripts. This approach could also be used to study how language is used as a form of resistance to those in power. Foucauldian discourse analysis has also been deployed to illustrate how scholars and activists at times unwittingly reproduce the very discourses that they aim to challenge and overcome.
L'Ordre du discours
L'Ordre du discours (The Order of Discourse) is Michel Foucault's inaugural lecture at the Collège de France, delivered on December 2, 1970. Foucault presents the hypothesis that in any society the production of discourse is controlled in order to ward off its powers and dangers and to contain the random events in its production.
Speech control and exclusion procedures
Foucault presents the hypothesis that, in every society, the production of discourses is controlled with the aim of: 1. exorcising its powers and dangers; 2. reducing the force of uncontrollable events; 3. hiding the real forces that constitute society. To this end, he theorizes that external or internal procedures are used.
External procedures
These procedures are exercised from the outside and function as systems of exclusion, insofar as they concern the part of the discourse that puts power and desire into play. The three great systems of this type are: the prohibited word, the division of madness and the will to truth.
Prohibition: definition of what can be said in each circumstance. It is divided into three: taboo of the object, ritual of the circumstance and privileged or exclusive right of the speaker.
Division of madness: the madman's speech, according to Foucault, "cannot be transmitted like that of others": either it is considered null, or it is credited with special powers, such as predicting the future.
Will to truth: the will to truth and the institutions that surround it exert pressure on discursive production. He cites as an example the subordination of Western literature to the standards of credibility and naturalness imposed by science.
Internal procedures
These procedures arise from within discourse itself, serving to classify, order, and control its distribution; discourses thus exercise their own control, dominating the dimension of discourse bound up with events and chance. Under this heading fall the commentary, the author, and the organization into disciplines.
Commentary: there is a gap between the primary discourses that are constantly revisited ("speak themselves") and the commonplace ones that "are spoken" and fade away. Discourses that take up and restate the primary texts are called commentaries. This gap opens the possibility of creating different discourses, yet a commentary, regardless of its apparent novelty, will always be a repetition of the first text.
Author: the author should be understood not as the individual who produces the speech, but as a "discourse grouping principle", a facet of that individual. It is through the role of the author that the individual distinguishes what to write and what not to, selecting from everything said every day what will go into the work.
Disciplines: a principle that results from the delimitation of a "field of truth" into which discourse must be inserted. This field concerns the rules imposed for the construction of a discourse in a given area of knowledge (such as botany or medicine), as well as "a domain of objects, a set of methods, a body of propositions considered true, a set of rules and definitions, techniques and instruments" necessary for its acceptance within the "true" of a given discipline.
See also
References
Further reading
Johannes Angermuller. Poststructuralist Discourse Analysis. Subjectivity in Enunciative Pragmatics. Postdisciplinary Studies in Discourse. Basingstoke, Houndmills: Palgrave Macmillan, 2014.
Johannes Angermuller. Why There Is No Poststructuralism in France. The Making of An Intellectual Generation. London: Bloomsbury, 2015.
Niall Lucy. A Dictionary of Postmodernism. Wiley-Blackwell, 2016.
Sara Mills. Discourse: The New Critical Idiom. Series Editor: John Drakakis, Routledge, 1997.
Discourse analysis
Michel Foucault
European witchcraft
European belief in witchcraft can be traced back to classical antiquity, when magic and religion were closely entwined. During the pagan era of ancient Rome, there were laws against harmful magic. After Christianization, the medieval Catholic Church began to see witchcraft (maleficium) as a blend of black magic and apostasy involving a pact with the Devil. During the early modern period, witch hunts became widespread in Europe, partly fueled by religious tensions, societal anxieties, and economic upheaval. European belief in witchcraft gradually dwindled during and after the Age of Enlightenment.
One text that shaped the witch-hunts was the Malleus Maleficarum, a 1486 treatise that provided a framework for identifying, prosecuting, and punishing witches. During the 16th and 17th centuries, there was a wave of witch trials across Europe, resulting in tens of thousands of executions and many more prosecutions. Usually, accusations of witchcraft were made by neighbours and followed from social tensions. Accusations were most often made against women, the elderly, and marginalized individuals. Women made accusations as often as men. The common people believed that magical healers (called 'cunning folk' or 'wise people') could undo bewitchment. These magical healers were sometimes denounced as harmful witches themselves, but seem to have made up a minority of the accused. This dark period of history reflects the confluence of superstition, fear, and authority, as well as the societal tendency of scapegoating. A feminist interpretation of the witch trials is that misogyny led to the association of women and malevolent witchcraft.
Russia also had witchcraft trials during the 17th century. Witches were often accused of sorcery and engaging in supernatural activities, leading to their excommunication and execution. The blending of ecclesiastical and secular jurisdictions in Russian witchcraft trials highlights the intertwined nature of religious and political power during that time. Witchcraft fears and accusations came to be used as a political weapon against individuals who posed threats to the ruling elite.
Since the 1940s, diverse neopagan witchcraft movements have emerged in Europe, seeking to revive and reinterpret historical pagan and mystical practices. Wicca, pioneered by Gerald Gardner, is the biggest and most influential. Inspired by the now-discredited witch-cult theory and ceremonial magic, Wicca emphasizes a connection to nature, the divine, and personal growth. Stregheria is a distinctly Italian form of neopagan witchcraft. Many of these neopagans self-identify as "witches".
Concept
The concept of malevolent magic has been found among cultures worldwide, and it is prominent in some cultures today. Most societies have believed in, and feared, an ability by some individuals to cause supernatural harm and misfortune to others. This may come from mankind's tendency "to want to assign occurrences of remarkable good or bad luck to agency, either human or superhuman".
Historians and anthropologists see the concept of "witchcraft" as one of the ways humans have tried to explain strange misfortune. Some cultures have feared witchcraft much less than others, because they tend to have other explanations for strange misfortune; for example that it was caused by gods, spirits, demons or fairies, or by other humans who have unwittingly cast the evil eye. For example, the Gaels of Ireland and the Scottish Highlands historically held a strong belief in fairy folk, who could cause supernatural harm, and witch-hunting was very rare in these regions compared to other regions of the British Isles.
Ronald Hutton outlined five key characteristics ascribed to witches and witchcraft by most cultures that believe in the concept. Traditionally, witchcraft was believed to be the use of magic to cause harm or misfortune to others; it was used by the witch against their own community; it was seen as immoral and often thought to involve communion with evil beings; powers of witchcraft were believed to have been acquired through inheritance or initiation; and witchcraft could be thwarted by defensive magic, persuasion, intimidation or physical punishment of the alleged witch.
The Christian concept of witchcraft derives from Old Testament laws against it. European Christianity viewed witchcraft as a blend of sorcery and apostasy (or heresy). Witches were believed to renounce Christ, the sacraments and salvation, instead performing Black Masses and making a pact with the Devil, through which they gained powers of sorcery. In medieval and early modern Europe, many common folk who were Christians believed in both good and bad magic. As opposed to the helpful magic of folk healers, witchcraft was seen as evil.
Pre-modern beliefs about witchcraft
In medieval and early modern Europe, witches were usually believed to be women who used black magic (maleficium) against their community, and often to have communed with demons or the Devil. Witches were commonly believed to cast curses; a spell or set of magical words and gestures intended to inflict supernatural harm. Cursing could also involve inscribing runes or sigils on an object to give that object magical powers; burning or binding a wax or clay image (a poppet) of a person to affect them magically; or using herbs, animal parts and other substances to make potions or poisons; among other means. A common belief was that witches tended to use something from their victim's body to work black magic against them; for example hair, nail clippings, clothing, or bodily waste.
Witches were believed to work in secret, sometimes alone and sometimes with other witches. They were sometimes said to hold gatherings at night where they worked black magic and transgressed social norms by engaging in cannibalism, incest and open nudity.
Another common belief was that witches had a demonic helper or "familiar", often in animal form. Witches were also often thought to be able to shapeshift into animals themselves, particularly cats and owls.
Witchcraft was blamed for many kinds of misfortune. By far the most common kind of harm attributed to witchcraft was illness or death suffered by adults, their children, or their animals. "Certain ailments, like impotence in men, infertility in women, and lack of milk in cows, were particularly associated with witchcraft". Illnesses that were poorly understood were more likely to be blamed on witchcraft. Edward Bever writes: "Witchcraft was particularly likely to be suspected when a disease came on unusually swiftly, lingered unusually long, could not be diagnosed clearly, or presented some other unusual symptoms".
It was thought witchcraft could be thwarted by protective magic or counter-magic, which could be provided by the 'cunning folk' or 'wise people'. This included charms, talismans and amulets, anti-witch marks, witch bottles, witch balls, and burying objects such as horse skulls inside the walls of buildings. People believed that bewitchment could be broken by physically punishing the alleged witch, such as by banishing, wounding, torturing or killing them. "In most societies, however, a formal and legal remedy was preferred to this sort of private action", whereby the alleged witch would be prosecuted and then formally punished if found guilty.
Witches and folk healers
Most societies that have believed in harmful witchcraft or 'black' magic have also believed in helpful or 'white' magic. In these societies, practitioners of helpful magic provided services such as breaking the effects of witchcraft, healing, divination, finding lost or stolen goods, and love magic. In Britain they were commonly known as cunning folk or wise people. Alan Macfarlane writes that they might be called 'white', 'good', or 'unbinding' witches, as well as blessers or wizards, but were more often known as cunning folk. Historian Owen Davies says the term "white witch" was rarely used before the 20th century. Ronald Hutton uses the general term "service magicians". Often these people were involved in identifying alleged witches.
Such magic-workers "were normally contrasted with the witch who practised maleficium—that is, magic used for harmful ends". In the early years of the witch hunts "the cunning folk were widely tolerated by church, state and general populace". Some of the more hostile churchmen and secular authorities tried to smear folk-healers and magic-workers by branding them 'witches' and associating them with harmful 'witchcraft', but generally the masses did not accept this and continued to make use of their services. The English MP and skeptic Reginald Scot sought to disprove magic and witchcraft, writing in The Discoverie of Witchcraft (1584), "At this day it is indifferent to say in the English tongue, 'she is a witch' or 'she is a wise woman'". Emma Wilby says folk magicians in Europe were viewed ambivalently by communities, and were considered as capable of harming as of healing, which could lead to their being accused as "witches" in the negative sense. She suggests some English "witches" convicted of consorting with demons may have been cunning folk whose supposed fairy familiars had been demonised.
Hutton says that healers and cunning folk "were sometimes denounced as witches, but seem to have made up a minority of the accused in any area studied". Likewise, Davies says "relatively few cunning-folk were prosecuted under secular statutes for witchcraft" and were dealt with more leniently than alleged witches. The Constitutio Criminalis Carolina (1532) of the Holy Roman Empire, and the Danish Witchcraft Act of 1617, stated that workers of folk magic should be dealt with differently from witches. It was suggested by Richard Horsley that cunning folk (devins-guérisseurs, 'diviner-healers') made up a significant proportion of those tried for witchcraft in France and Switzerland, but more recent surveys conclude that they made up less than 2% of the accused. However, Éva Pócs says that half the accused witches in Hungary seem to have been healers, and Kathleen Stokker says the "vast majority" of Norway's accused witches were folk healers.
Accusations of witchcraft
Witchcraft accusations were often a form of scapegoating (casting blame for misfortune), and often resulted in prosecution, as well as torture and killing, of alleged witches. In pre-modern Europe, most of those accused were women, and accusations of witchcraft usually came from their neighbors who accused them of inflicting harm or misfortune by magical means. Macfarlane found that women made accusations of witchcraft as much as men did. Deborah Willis adds, "The number of witchcraft quarrels that began between women may actually have been higher; in some cases, it appears that the husband as 'head of household' came forward to make statements on behalf of his wife". Hutton and Davies note that folk healers were sometimes accused of witchcraft, but made up a minority of the accused. It is also possible that a small proportion of accused witches may have genuinely sought to harm by magical means.
Éva Pócs writes that reasons for accusations of witchcraft fall into four general categories:
A person was caught in the act of positive or negative sorcery
A well-meaning sorcerer or healer lost their clients' or the authorities' trust
A person did nothing more than gain the enmity of their neighbors
A person was reputed to be a witch and surrounded with an aura of witch-beliefs or occultism
She identifies three kinds of witch in popular belief:
The "neighborhood witch" or "social witch": a witch who curses a neighbor following some dispute.
The "magical" or "sorcerer" witch: either a professional healer, sorcerer, seer or midwife, or a person who was thought to have used magic to increase her fortune to the perceived detriment of a neighboring household; due to neighborhood or community rivalries, and the ambiguity between positive and negative magic, such individuals can become branded as witches.
The "supernatural" or "night" witch: portrayed in court narratives as a demon appearing in visions and dreams.
"Neighborhood witches" are the product of neighborhood tensions, and are found only in village communities where the inhabitants largely rely on each other. Such accusations follow the breaking of some social norm, such as the failure to return a borrowed item, and any person part of the normal social exchange could potentially fall under suspicion. Claims of "sorcerer" witches and "supernatural" witches could arise out of social tensions, but not exclusively; the supernatural witch often had nothing to do with communal conflict, but expressed tensions between the human and supernatural worlds; and in Eastern and Southeastern Europe such supernatural witches became an ideology explaining calamities that befell whole communities.
The historian Norman Gevitz has written:
It was commonly believed that individuals with power and prestige were involved in acts of witchcraft and even cannibalism.
History
Antiquity
In ancient Greece and Rome, circa 8th century BCE – 5th century CE, individuals known as "goêtes" practiced various forms of magic, including divination, spellcasting, and invoking supernatural entities. While some forms of magic were integrated into religious practices, others were seen as superstitious and potentially harmful.
There are accounts of people being prosecuted and punished for witchcraft in the ancient Greco-Roman world, before Christianity. In ancient Greece, for example, Theoris, a woman of Lemnos, was prosecuted and executed along with her family. Records refer to her as pharmakis (potion specialist), mantis (diviner), and hiereia (priestess), but the sentence against her and her family was for asebeia (impiety).
Meanwhile, legends of Thessalian witches developed during the Classical Greek period. According to many sources, Thessaly was notorious for being a haven for witches, and "folklore about the region has persisted with tales of witches, drugs, poisons and magical spells ever since the Roman period."
During the pagan era of ancient Rome, there were laws against harmful magic. According to Pliny, the 5th century BC laws of the Twelve Tables laid down penalties for uttering harmful incantations and for stealing the fruitfulness of someone else's crops by magic. "The clause forbidding evil incantations does not forbid incantations per se, but only incantations taking the form of song intended to harm". The only recorded trial involving this law was that of Gaius Furius Chresimus in 191 BC. He was acquitted of using spells to draw the fruitfulness of other fields into his own.
The Classical Latin word veneficium meant both poisoning and causing harm by magic (such as magic potions), although ancient people would not have distinguished between the two. In 331 BC, a deadly epidemic hit Rome and at least 170 women were executed for causing it by veneficium. However, some of these women were tested by being made to drink their own potions, which killed them, indicating that the charge was straightforward poisoning. In 184–180 BC, another epidemic hit Italy, and about 5,000 were executed for veneficium. Hutton states that if even some portion were charged with killing by magical rites, "then the Republican Romans hunted witches on a scale unknown anywhere else in the ancient world, and any other time in European history". However, he acknowledges that it is impossible to tell what percentage of these charges were for poisoning and what percentage for the use of magic.
Under the Lex Cornelia de sicariis et veneficis ("Cornelian law against assassins and poisoners") of 81 BC, killing by veneficium carried the death penalty. During the early Imperial era, the Lex Cornelia began to be used more broadly against other kinds of magic, including "making of love potions, the enactment of rites to enchant, bind or restrain, the possession of books containing magical recipes, and the 'arts of magic' in general." Modestinus, a Roman jurist of the early third century AD, wrote that sacrifices made for evil purposes could be punished under the Lex Cornelia. The Pauli Sententiae, from the same century, says the Lex Cornelia imposed a penalty on those who made sacrifices at night to bewitch someone. It also outlines penalties for giving potions to induce an abortion or to induce love. The magicians were to be burnt at the stake.
Witch characters—women who work powerful evil magic—appear in ancient Roman literature from the first century BC onward. Some of these draw from more neutral models, as seen in Greece. However, there are also distinctly evil figures, typically hags who chant harmful incantations; make poisonous potions from herbs and the body parts of animals and humans; sacrifice children; raise the dead; can control the natural world; can shapeshift themselves and others into animals; and invoke underworld deities and spirits. They include Lucan's Erichtho, Horace's Canidia, Ovid's Dipsas, and Apuleius's Meroe. However, Hutton acknowledges the likelihood that this represents a level of literary license being taken with the historical memory of mass executions of women for veneficium. Another version of the malevolent witch that appeared in Rome was "a highly sexed woman in her prime, fond of young men and inclined to destroy those who reject her." Unique to Rome among its contemporaries as a description of witches, this image was closely related to several stories of demons traded between neighboring and preceding cultures.
The first Christian Roman emperor, Constantine the Great, introduced new laws against magic in the early 4th century AD. Private divination, and working magic to harm others or to induce lust, were to be punished harshly, but protective magic was not outlawed.
Early converts to Christianity looked to Christian clergy to work magic more effectively than the old methods under Roman paganism, and Christianity provided a methodology involving saints and relics, similar to the gods and amulets of the Pagan world. As Christianity became the dominant religion in Europe, its concern with magic lessened.
Middle Ages
Medieval laws
The early legal codes of most European nations contain laws directed against witchcraft. Thus, for example, the oldest document of Frankish legislation, the Salic law, which was reduced to a written form and promulgated under Clovis, who died 27 November, 511, punishes those who practice magic with various fines, especially when it could be proven that the accused launched a deadly curse, or had tied the Witch's Knot. The laws of the Visigoths, which were to some extent founded upon the Roman law, punished witches who had killed any person by their spells with death; while long-continued and obstinate witchcraft, if fully proven, was visited with such severe sentences as slavery for life. The Eastern council in Trullo (692), and certain early Irish canons, treated sorcery as a crime to be visited with excommunication until adequate penance had been performed.
The Pactus Legis Alamannorum, an early 7th-century code of laws of the Alemanni confederation of Germanic tribes, lists witchcraft as a punishable crime on equal terms with poisoning. If a free man accuses a free woman of witchcraft or poisoning, the accused may be disculpated either by twelve people swearing an oath on her innocence or by one of her relatives defending her in a trial by combat. In this case, the accuser is required to pay a fine (Pactus Legis Alamannorum 13). Charles the Great prescribed the death penalty for anyone who would burn witches.
With Christianization, belief in witchcraft came to be seen as superstition. The Council of Leptinnes in 744 drew up a "List of Superstitions", which prohibited sacrifice to saints and created a baptismal formula that required one to renounce works of demons, specifically naming Thor and Odin. Persecution of witchcraft nevertheless persisted throughout most of the Early Middle Ages, into the 10th century.
When Charlemagne imposed Christianity upon the people of Saxony in 789, he proclaimed:
Similarly, the Lombard code of 643 states:
This conforms to the thoughts of Saint Augustine of Hippo, who taught that witchcraft did not exist and that the belief in it was heretical.
In 814, Louis the Pious upon his accession to the throne began to take very active measures against all sorcerers and necromancers, and it was owing to his influence and authority that the Council of Paris in 829 appealed to the secular courts to carry out any such sentences as the Bishops might pronounce. The consequence was that from this time forward the penalty of witchcraft was death, and there is evidence that if the constituted authority, either ecclesiastical or civil, seemed to slacken in their efforts the populace took the law into their own hands with far more fearful results.
In England, the early Penitentials are greatly concerned with the repression of pagan ceremonies, which under the cover of Christian festivities were very largely practised at Christmas and on New Year's Day. These rites were closely connected with witchcraft, and especially do S. Theodore, S. Aldhelm, Ecgberht of York, and other prelates prohibit the masquerade as a horned animal, a stag, or a bull, which S. Caesarius of Arles had denounced as a "foul tradition", an "evil custom", a "most heinous abomination". The laws of King Æthelstan (924–40), corresponsive with the early French laws, punished any person casting a spell which resulted in death by exacting the extreme penalty.
Among the laws attributed to the Pictish King Cináed mac Ailpin (ruled 843 to 858), "is an important statute which enacts that all sorcerers and witches, and such as invoke spirits, 'and use to seek upon them for helpe, let them be burned to death'. Even then this was obviously no new penalty, but the statutory confirmation of a long-established punishment. So the witches of Forres who attempted the life of King Duffus in the year 968 by the old bane of slowly melting a wax image, when discovered, were according to the law burned at the stake."
The Canon Episcopi, which was written circa 900 AD (though alleged to date from 314 AD), once more following the teachings of Saint Augustine, declared that witches did not exist and that anyone who believed in them was a heretic. The crucial passage from the Canon Episcopi reads as follows:
P. G. Maxwell-Stewart in "The Emergence of the Christian Witch" wrote:
The later Middle Ages saw words for these practitioners of harmful magical acts appear in various European languages: sorcière in French, Hexe in German, strega in Italian, and bruja in Spanish. The English term for malevolent practitioners of magic, witch, derived from the earlier Old English term wicce. A person who performs sorcery is referred to as a sorcerer or a witch, conceived as someone who tries to reshape the world through the occult. The word witch is over a thousand years old: Old English formed the compound wiccecræft from wicce ('witch') and cræft ('craft'). The masculine form was wicca ('male sorcerer'). In early modern Scots, the word warlock came to be used as the male equivalent of witch (which can be male or female, but is used predominantly for females).
Developing views of the Church
It was in the Church's interest, as it expanded, to suppress all competing Pagan methodologies of magic. This could be done only by presenting a cosmology in which Christian miracles were legitimate and credible, whereas non-Christian ones were "of the devil". Hence the following law:
While the common people were aware of the difference between witches, who they considered willing to undertake evil actions, such as cursing, and cunning folk who avoided involvement in such activities, the Church attempted to blot out the distinction. In much the same way that culturally distinct non-Christian religions were all lumped together and termed merely "Pagan", so too was all magic lumped together as equally sinful and abhorrent. The earliest written reference to witches as such, from Ælfric's homilies, portrays them as malign.
A rise in the practice of necromancy in the 12th century, spurred on by an influx of texts on magic and diabolism from the Islamic world, had alerted clerical authorities to the potential dangers of malefic magic. Sorcery came to be associated with heresy and apostasy and to be viewed as evil. This elevated concern was slowly expanded to include the common witch, but clerics needed an explanation for why uneducated commoners could perform feats of diabolical sorcery that rivaled those of the most seasoned and learned necromancers, whose magic required the rigorous application of study and complex ritual.
The idea that witches gained their powers through a pact with the Devil provided a satisfactory explanation, and allowed authorities to develop a mythology through which they could project accusations of crimes formerly associated with various heretical sects (cannibalism, ritual infanticide, and the worship of demonic familiars) onto the newly emerging threat of diabolical witchcraft. This pact and the ceremony that accompanied it became widely known as the witches' sabbath. The idea of a pact became important—one could be possessed by the Devil and not responsible for one's actions; but to be a witch, one had to sign a pact with the Devil, often to worship him, which was heresy and meant damnation. The idea of an explicit and ceremonial pact with the Devil was crucial to the development of the witchcraft concept, because it provided an explanation that differentiated the figure of the witch from that of the learned necromancer or sorcerer.
Gregory of Nyssa (c. 335 – c. 395) had said that demons had children with women, called cambions, which, added to the children they had between them, contributed to increasing the number of demons. However, the first popular account of such a union and offspring does not occur in Western literature until around 1136, when Geoffrey of Monmouth wrote the story of Merlin in his pseudohistorical account of British history, Historia Regum Britanniae (History of the Kings of Britain), in which he reported that Merlin's father was an incubus.
Anne Lawrence-Mathers writes that at that time "... views on demons and spirits were still relatively flexible. There was still a possibility that the daemons of classical tradition were different from the demons of the Bible." Accounts of sexual relations with demons in literature continue with The Life of Saint Bernard by Geoffrey of Auxerre (c. 1160) and the Life and Miracles of St. William of Norwich by Thomas of Monmouth (c. 1173). The theme of sexual relations with demons became a matter of increasing interest for late 12th-century writers.
Prophetiae Merlini (The Prophecies of Merlin), a Latin work of Geoffrey of Monmouth in circulation by 1135, perhaps as a libellus or short work, was the first work about the prophet Myrddin in a language other than Welsh. The Prophetiae was widely read — and believed — much as the prophecies of Nostradamus would be centuries later; John Jay Parry and Robert Caldwell note that the Prophetiae Merlini "were taken most seriously, even by the learned and worldly wise, in many nations", and list examples of this credulity as late as 1445.
It was only beginning in the 1150s that the Church turned its attention to defining the possible roles of spirits and demons, especially with respect to their sexuality and in connection with the various forms of magic which were then believed to exist. Christian demonologists eventually came to agree that sexual relationships between demons and humans happen, but they disagreed on why and how. A common point of view is that demons induce men and women to the sin of lust, and adultery is often considered as an associated sin.
As Christian views on magic continued to evolve and intertwine with changing cultural landscapes, the perception of supernatural practices became increasingly intricate. The Church's endeavors to assert its dominance over alternative belief systems led to the suppression of various magical methodologies. Simultaneously, the conceptualization of witches and their alleged pacts with the Devil solidified during the Early Modern period, resulting in the infamous witch trials. These trials marked a significant turning point in the Church's engagement with magic, as accusations of heinous acts were projected onto the figure of the witch.
Increasing fear and early witch-hunts
The tale of Theophilus, recorded in the 13th century in writer Gautier de Coincy's Les Miracles de la Sainte Vierge, bears many similarities to the later legend of Faust. Here, a saintly figure makes a bargain with the keeper of the infernal world but is rescued from paying his debt through the mercy of the Blessed Virgin. A depiction of the scene in which he subordinates himself to the Devil appears on the north tympanum of the Cathedrale de Notre Dame de Paris. By 1300, the elements were in place for a witch hunt, and for the next century and a half, fear of witches spread gradually throughout Europe.
In the early 14th century, many accusations were brought against clergymen and other learned people who were capable of reading and writing magic; Pope Boniface VIII (d. 1303) was posthumously tried for apostasy, murder, and sodomy, in addition to allegedly entering into a pact with the Devil (while popes had been accused of crimes before, the demonolatry charge was new). The Templars were also tried as Devil-invoking heretics in 1305–14. The middle years of the 14th century were quieter, but towards the end of the century, accusations increased and were brought against ordinary people more frequently.
In 1398, the University of Paris declared that a demonic pact could be implicit; no document need be signed, as the mere act of summoning a demon constituted an implied pact. This freed prosecutors from having to prove the existence of a physical pact. Among Catholics and the secular leadership of late medieval Europe, fears about witchcraft rose to fever pitch and this led to large-scale witch-hunts. Each new conviction reinforced the beliefs in the methods (torture and pointed interrogation) being used to solicit confessions and in the list of accusations to which the accused confessed.
Early Modern witch trials
There were an estimated 110,000 witchcraft trials in Europe between 1450 and 1750, with half of the cases seeing the accused being executed. Witch hunts began to increase first in southern France and Switzerland, during the 14th and 15th centuries. Witch hunts and witchcraft trials rose markedly during the social upheavals of the 16th century, peaking between 1560 and 1660. The peak years of witch-hunts in southwest Germany were from 1561 to 1670.
As the notion spread that all magic involved a pact with the Devil, legal sanctions against witchcraft grew harsher. Tens of thousands of people were executed, and others were imprisoned, tortured, banished, and had lands and possessions confiscated. The majority of those accused were women, though in some regions the majority were men.
Accusations against witches were almost identical to those levelled by 3rd-century pagans against early Christians:
In 1486, Heinrich Kramer, a member of the Dominican Order, published the Malleus Maleficarum (the 'Hammer against the Witches'). It was used by both Catholics and Protestants for several hundred years, outlining how to identify a witch, what makes a woman more likely than a man to be a witch, how to put a witch on trial, and how to punish a witch. The book defines a witch as evil and typically female. It became the handbook for secular courts throughout Europe, but was not used by the Inquisition, which even cautioned against relying on it. For over 100 years it was the best-selling book in Europe after the Bible. Scholars are unclear on just how influential the Malleus was in its day. Less than one hundred years after it was written, the Council of the Inquisitor General in Spain discounted the credibility of the Malleus since it contained numerous errors.
The height of the witch-craze was concurrent with the rise of Renaissance magic in the great humanists of the time (this was called high magic, and the Neoplatonists and Aristotelians that practised it took pains to insist that it was wise and benevolent and nothing like witchcraft, which was considered low magic), which helped abet the rise of the craze. Witchcraft was held to be the worst of heresies, and early skepticism slowly faded from view almost entirely. The origins of the accusations against witches in the Early Modern period can be traced to earlier trials of heretics, which included claims of secret meetings, orgies, and the consumption of babies.
Persecution continued through the Protestant Reformation in the 16th century, and the Protestants and Catholics both continued witch trials with varying numbers of executions from one period to the next. The "Caroline Code", the basic law code of the Holy Roman Empire (1532) imposed heavy penalties on witchcraft. As society became more literate (due mostly to the invention of the printing press in the 1440s), increasing numbers of books and tracts fueled the witch fears.
From the sixteenth century on, there were some writers who protested against witch trials, witch hunting and the belief that witchcraft existed. Among them were Johann Weyer, Reginald Scot, and Friedrich Spee. The Jura Mountains in southern Germany provided a small respite from the insanity; there, torture was imposed only within the precise limits of the Caroline Code of 1532, little attention was paid to the accusations of or by children, and charges had to be brought openly before a suspect could be arrested. These limitations contained the mania in that area. In Frankfurt, the legend of Faust began to circulate in chapbook form around 1587, when Historia von D. Johann Fausten was published.
The craze reached its height between 1560 and 1660. After 1580, the Jesuits replaced the Dominicans as the chief Catholic witch-hunters, and the Catholic Rudolf II (1576–1612) presided over a long persecution in Austria. The nuns of Loudun (1630), novelized by Aldous Huxley and made into a film by Ken Russell, provide an example of the craze during this time. The nuns had conspired to accuse Father Urbain Grandier of witchcraft by faking symptoms of possession and torment; they feigned convulsions, rolled and gibbered on the ground, and accused Grandier of indecencies. Grandier was convicted and burned; however, after the plot succeeded, the symptoms of the nuns only grew worse, and they became more and more sexual in nature. This attests to the degree of mania and insanity present in such witch trials.
After the early 17th century, popular sentiment began to turn against the practice. In 1682, King Louis XIV prohibited further witch-trials in France. In 1687, Louis XIV issued an edict against witchcraft that was rather moderate compared to former ones; it ignored black cats and other lurid fantasies of the witch mania. After this, the number of witches accused and condemned fell rapidly. In 1736, Great Britain formally ended witch-trials with passage of the Witchcraft Act 1735 (9 Geo. 2. c. 5).
In Britain
In Wales, fear of witchcraft mounted around the year 1500. There was a growing alarm of women's magic as a weapon aimed against the state and church. The Church made greater efforts to enforce the canon law of marriage, especially in Wales where tradition allowed a wider range of sexual partnerships. There was a political dimension as well, as accusations of witchcraft were levied against the enemies of Henry VII, who was exerting more and more control over Wales. In 1542, Henry VIII's Witchcraft Act was passed, defining witchcraft as a crime punishable by death and the forfeiture of property.
William Shakespeare wrote about the infamous "Three Witches" in his tragedy Macbeth during the reign of James I, who was notorious for his ruthless prosecution of witchcraft. Having become king of Scotland in 1567 and of England in 1603, James VI and I brought continental explanations of witchcraft to both kingdoms. He set out the much stiffer Witchcraft Act 1603 (1 Jas. 1. c. 12), which made witchcraft a felony. His goal was to focus fear on female communities and large gatherings of women, which he thought threatened his political power, and so he laid the foundation for witchcraft and occultism policies, especially in Scotland. The point was that a widespread belief in a conspiracy of witches and in a witches' Sabbath with the devil deprived women of political influence. Occult power was supposedly a womanly trait because women were held to be weaker and more susceptible to the devil.
In Wales, witchcraft trials heightened in the 16th and 17th centuries, after fear of witchcraft was imported from England.
The records of the Courts of Great Sessions for Wales, 1536–1736, show that Welsh custom was more important than English law. Custom provided a framework for responding to witches and witchcraft in such a way that interpersonal and communal harmony was maintained, showing regard for the importance of honour, social place and cultural status. Even when the accused were found guilty, execution did not occur.
The last persons known to have been executed for witchcraft in England were the so-called Bideford witches in 1682. The last person executed for witchcraft in Great Britain was Janet Horne, in Scotland in 1727. The Witchcraft Act 1735 (9 Geo. 2 c. 5) abolished the penalty of execution for witchcraft, replacing it with imprisonment. This act was repealed by the Fraudulent Mediums Act 1951 (14 & 15 Geo. 6. c. 33).
Enlightenment attitudes after 1700 made a mockery of beliefs in witches. The Witchcraft Act 1735 (9 Geo. 2 c. 5) marked a complete reversal in attitudes. Penalties for the practice of witchcraft as traditionally constituted, which by that time was considered by many influential figures to be an impossible crime, were replaced by penalties for the pretence of witchcraft. A person who claimed to have the power to call up spirits, or foretell the future, or cast spells, or discover the whereabouts of stolen goods, was to be punished as a vagrant and a con artist, subject to fines and imprisonment.
Historians Keith Thomas and his student Alan Macfarlane studied witchcraft by combining historical research with concepts drawn from anthropology. They argued that English witchcraft, like African witchcraft, was endemic rather than epidemic. Old women were the favorite targets because they were marginal, dependent members of the community, and therefore more likely to arouse feelings of both hostility and guilt, and less likely to have defenders of importance inside the community. Witchcraft accusations were the village's reaction to the breakdown of its internal community, coupled with the emergence of a newer set of values that was generating psychic stress.
In Italy
A particularly rich source of information about witchcraft in Italy before the outbreak of the Great Witch Hunts of the Renaissance is the sermons of the Franciscan popular preacher Bernardino of Siena (1380–1444), who saw the issue as one of the most pressing moral and social challenges of his day and thus preached many a sermon on the subject, inspiring many local governments to take action against what he called "servants of the Devil". As in most European countries, women in Italy were more likely than men to be suspected of witchcraft. Women were considered dangerous due to their supposed sexual instability and the supposed powers of their menstrual blood.
In the 16th century, Italy had a high proportion of witchcraft trials involving love magic. The country had a large number of unmarried people because men married later in life during this period. This left many women on a desperate quest for marriage, which made them vulnerable to accusations of witchcraft whether they took part in it or not. Trial records from the Inquisition and secular courts reveal a link between prostitutes and supernatural practices. Professional prostitutes were considered experts in love and therefore knew how to make love potions and cast love-related spells. Up until 1630, the majority of women accused of witchcraft were prostitutes; one courtesan, for example, was questioned about her use of magic because of her relationships with men of power in Italy and because of her wealth. The majority of accused women were also considered "outsiders" because they were poor, had different religious practices, spoke a different language, or simply came from a different city, town or region. Cassandra, from Ferrara, was considered a foreigner because she was not native to Rome, where she resided; nor was she seen as a model citizen, because her husband was in Venice.
From the 16th to 18th centuries, the Catholic Church enforced moral discipline throughout Italy. The Church and local tribunals, such as those in Venice, together investigated a woman's religious behaviors when she was accused of witchcraft.
In Spain
Galicia in Spain is nicknamed the "Land of the Witches" due to the mythological origins surrounding its people, culture and land. The Basque Country also saw persecutions of witches, such as the case of the Witches of Zugarramurdi, six of whom were burned in Logroño in 1610, and the witch hunt in the French Basque Country the previous year, in which eighty supposed witches were burned at the stake. This is reflected in the studies of José Miguel de Barandiarán and Julio Caro Baroja. Euskal Herria retains numerous legends that attest to an ancient mythology of witchcraft. The town of Zalla is nicknamed the "Town of the Witches".
In Russia
The Russian word ved'ma literally means 'knower', and was the primary word for a malevolent witch.
In 17th century Russia, the dominant societal concern about those practicing witchcraft was not whether it was effective, but whether it could cause harm. Peasants in Russian and Ukrainian societies often shunned witchcraft, unless they needed help against supernatural forces. Impotence, stomach pains, barrenness, hernias, abscesses, epileptic seizures, and convulsions were all attributed to evil (or witchcraft). This is reflected in linguistics; there are numerous words for a variety of practitioners of paganism-based healing. Russian peasants referred to such a practitioner as a chernoknizhnik (a person who plied his trade with the aid of a black book), a sheptun or sheptun'ia (a 'whisperer', male or female), a znakhar' or znakharka (a male or female healer), or a zagovornik (an incanter).
There was universal reliance on folk healers—but clients often turned them in if something went wrong. According to Russian historian Valerie A. Kivelson, witchcraft accusations were normally leveled at lower-class peasants, townspeople and Cossacks. The ratio of male to female accusations was 75% to 25%. Males were targeted more because witchcraft was associated with societal deviation; since single people with no settled home could not be taxed, males typically had more power than women in their dissent.
The history of witchcraft evolved alongside society. Psychological interpretations have been offered for why women were more likely to be associated with the practices of witchcraft: identification with the soul, or inner self, has often been deemed "feminine" in society, and social and economic evidence has been analysed to explain the association between witchcraft and women.
In the seventeenth century, Russia experienced a period of witchcraft trials and persecution that mirrored the witch hysteria occurring across Catholic and Protestant countries. Orthodox Christian Europe joined this phenomenon, targeting individuals, both male and female, believed to be practicing sorcery, paganism, and herbal medicine. Ecclesiastical jurisdiction over witchcraft trials was established early, with references in historical documents such as the eleventh-century Church Statute of Vladimir the Great and the Primary Chronicle. The punishment for witchcraft typically included burning at the stake or the "ordeal of cold water," a method used both in Western Europe and Russia.
While Western Europe often employed harsh torture methods, Russia implemented a more civil system of fines for witchcraft during the seventeenth century, a significant difference in persecution methods from the West's cruelties. Ivan IV, or Ivan the Terrible, was deeply convinced that witchcraft had led to the death of his wife, spurring him to excommunicate and impose the death penalty on those practicing witchcraft. This fear of witchcraft persisted during Ivan IV's rule, leading to boyars being accused of witchcraft during the Oprichnina period, followed by increased witchcraft concerns during the Time of Troubles.
Following these periods of turmoil, witchcraft investigations became prevalent within Muscovite households. Between 1622 and 1700, numerous trials were conducted, though the scale of persecution and execution was not as extensive as in Western Europe. Russia's approach to witchcraft trials showcased its unique blend of religious and social factors, culminating in a distinct historical context compared to the widespread witch hysteria observed in other parts of Europe.
20th and 21st centuries
Modern witchcraft in Europe encompasses a diverse range of contemporary traditions. Some adherents practice what they believe are traditions rooted in ancient pagan and mystical practices, while others follow openly modern, syncretic traditions like Wicca. While adherents distinguish between Wicca and these other traditions, religious studies scholars class these various neopagan witchcraft traditions under the broad category of "Wicca."
These traditions emerged predominantly in the mid-20th century, inspired by a revival of interest in pre-Christian spirituality. Influenced by the now-discredited witch-cult hypothesis, which suggested persecuted witches in Europe were followers of a surviving pagan religion, these traditions seek to reconnect with ancient beliefs and rituals.
Britain
During the 20th century, interest in witchcraft rose in Britain. From the 1920s, Margaret Murray popularized the 'witch-cult hypothesis': the idea that those persecuted as 'witches' in early modern Europe were followers of a benevolent pagan religion that had survived the Christianization of Europe. This has been discredited by further historical research.
From the 1930s, occult neopagan groups began to emerge who called their religion a kind of 'witchcraft'. They were initiatory secret societies inspired by Murray's 'witch cult' theory and historical paganism. They did not use the term 'witchcraft' in the traditional way, but instead defined their practices as a kind of "positive magic". Among the most prominent of these traditions is Wicca, pioneered by Gerald Gardner in England during the mid-20th century. Gardnerian Wicca, the earliest known form, draws on elements of ceremonial magic, historical paganism, and witch cult theories.
Another notable tradition is Traditional Witchcraft, which stands apart from mainstream Wicca and emphasizes older, more "traditional" roots. This category includes Cochrane's Craft, founded by Robert Cochrane as a counterpoint to Gardnerian Wicca, and the Sabbatic Craft, as defined by Andrew Chumbley, which draws on a patchwork of ancient symbols and practices while emphasizing the imagery of the "Witches' Sabbath."
Throughout these traditions, practitioners may refer to themselves as witches and engage in rituals, magic, and spiritual practices that reflect their connection to nature, deity, and personal growth. These British-developed traditions have since been adopted and adapted outside of Britain.
Italy
Contemporary witchcraft in Italy represents a revival and reinterpretation of ancient pagan practices, often referred to as "Stregheria" or "La Vecchia Religione" (The Old Religion). Rooted in Italian cultural and mystical heritage, modern Italian witches blend elements of traditional folklore, spirituality, and magic. This resurgence draws from historical beliefs, superstitions, and the desire to reconnect with Italy's pre-Christian spiritual roots.
Streghe celebrate a diverse range of practices. They honor a pantheon of deities, often including a Moon Goddess and a Horned God, similar to some neopagan traditions. These deities are seen as sources of guidance, protection, and spiritual connection. Rituals and magic are integral to contemporary witchcraft in Italy, often involving the use of symbolic tools like the pentagram and the practice of divination. These practices aim to tap into the energies of nature and the cosmos, fostering personal growth and connection to the spiritual realm.
Contemporary Italian witchcraft is not monolithic, as individual practitioners may draw from various sources, adapt rituals to modern contexts, and blend traditional practices with modern influences. While some Streghe focus on healing, protection, and divination, others emphasize honoring ancestors and connecting with local spirits. The resurgence of Italian witchcraft reflects a broader global trend of seeking spiritual authenticity, cultural preservation, and a deeper connection to the mystical aspects of life.
Romania and the Roma
Roma witchcraft stands as a distinctive and culturally significant tradition within the Roma community, weaving together spirituality, healing practices, and fortune-telling abilities passed down through generations of Roma women. Rooted in history and mythology, this practice bears witness to the matrilineal nature of Roma culture, where women are the bearers of these ancient arts.
Unlike the severe witchcraft trials that plagued Western Europe, witchcraft historically took on a different form in Romania. The Romanian Orthodox Church's integration of pre-Christian beliefs and the reliance on village healers in the absence of modern medicine led to a less punitive approach. Instead of harsh punishments, those accused of witchcraft often faced spiritual consequences, such as fasting or temporary bans from the church.
Figures like Maria Campina, revered as the "Queen of Witches", exemplify the prominence of Roma witches in contemporary Romania. Campina's claims of inheriting her powers from her ancestors and her expertise in fortune-telling have earned her respect within both the Roma community and wider society.
Mihaela Drăgan, an influential Roma actress and writer, challenges stereotypes and empowers Roma women through her concept of "Roma Futurism". This visionary movement envisions a future where Roma witches embrace modernity while preserving their cultural heritage. Social media platforms have enabled Roma witches to amplify their reach, reshaping their image and expanding their influence.
Other countries
Hexerei is the German term for witchcraft; its practitioners engage in folk magic, spellwork, and other witchcraft practices. Sorcellerie refers to witchcraft practices in France, often rooted in traditional folk magic, spellcasting, and working with natural elements. Wróżbiarstwo is the Polish term for divination and witchcraft; it involves practices like fortune-telling, spellcasting, and working with herbs and charms. Brujería refers to witchcraft in Spain; modern practitioners engage in spellwork, ritual magic, and working with herbs and crystals. Noita refers to Finnish folk magic, which involves practices such as healing, protection, and divination, and draws from local traditions and folklore. Various forms of folk magic and witchcraft practices are also present in Eastern European countries, often involving rituals, spells, and working with charms and herbs.
In the 2022 Russian invasion of Ukraine, Russian state media claimed that Ukraine was using black magic against the Russian military, specifically accusing Oleksiy Arestovych of enlisting sorcerers and witches as well as Ukrainian soldiers of consecrating weapons "with blood magick".
Academic views
Witch-cult hypothesis
The witch-cult theory was pioneered by two German scholars, Karl Ernst Jarcke and Franz Josef Mone, in the early nineteenth century, and was adopted by French historian Jules Michelet, American feminist Matilda Joslyn Gage, and American folklorist Charles Leland later that century. The hypothesis received its most prominent exposition when it was adopted by a British Egyptologist, Margaret Murray, who presented her version of it in The Witch-Cult in Western Europe (1921), before further expounding it in books such as The God of the Witches (1931) and her contribution to the Encyclopædia Britannica. Although the "Murrayite theory" proved popular among sectors of academia and the general public in the early and mid-twentieth century, it was never accepted by specialists in the witch trials, who publicly disproved it through in-depth research during the 1960s and 1970s.
Experts in European witchcraft beliefs view the pagan witch cult theory as pseudohistorical. There is now an academic consensus that those accused and executed as witches were not followers of any witch religion, pagan or otherwise. Critics highlight several flaws with the theory. It rested on highly selective use of evidence from the trials, thereby heavily misrepresenting the events and the actions of both the accused and their accusers. It also mistakenly assumed that claims made by accused witches were truthful, and not distorted by coercion and torture. Further, despite claims the witch cult was a pre-Christian survival, there is no evidence of such a pagan witch cult throughout the Middle Ages.
The witch-cult hypothesis has influenced literature, being adapted into fiction in works by John Buchan, Robert Graves, and others. It greatly influenced Wicca, a new religious movement of modern Paganism that emerged in mid-twentieth-century Britain and claimed to be a survival of the pagan witch cult. Since the 1960s, Carlo Ginzburg and other scholars have argued that surviving elements of pre-Christian religion in European folk culture influenced Early Modern stereotypes of witchcraft, but scholars still debate how this may relate, if at all, to the Murrayite witch-cult hypothesis.
Categorization of Wicca
Scholars of religious studies classify Wicca as a new religious movement, and more specifically as a form of modern Paganism. Wicca has been cited as the largest, best known, most influential, and most academically studied form of modern Paganism. Within the movement it has been identified as sitting on the eclectic end of the eclectic to reconstructionist spectrum.
Several academics have also categorised Wicca as a form of nature religion, a term that is also embraced by many of its practitioners, and as a mystery religion. However, given that Wicca also incorporates the practice of magic, several scholars have referred to it as a "magico-religion". Wicca is also a form of Western esotericism, and more specifically a part of the esoteric current known as occultism. Academics like Wouter Hanegraaff and Tanya Luhrmann have categorised Wicca as part of the New Age, although other academics, and many Wiccans themselves, dispute this categorisation.
Use of hallucinogens
The use of hallucinogens in European witchcraft is a topic explored through modern research and historical records. Anthropologists such as Edward B. Tylor and pharmacologists such as Louis Lewin have argued for the presence in witchcraft practices of plants like belladonna and mandrake, which contain hallucinogenic alkaloids. Johannes Hartlieb (1410–1468) wrote a compendium on herbs around 1440, and in 1456 the puch aller verpoten kunst, ungelaubens und der zaubrey (book on all forbidden arts, superstition and sorcery) on the artes magicae, containing the oldest known description of witches' flying ointment. Accounts from writers including Joseph Glanvill and Johannes Nider describe the use of hallucinogenic concoctions, often referred to as ointments or brews, applied to sensitive areas of the body or to objects like brooms to induce altered states of consciousness. These substances were believed to grant witches special abilities to commune with spirits, transform into animals, and participate in supernatural gatherings, forming a complex aspect of the European witchcraft tradition.
Arguments in favor
A number of modern researchers have argued for the existence of hallucinogenic plants in the practice of European witchcraft; among them, anthropologists Edward B. Tylor, Bernard Barnett, Michael J. Harner and Julio Caro Baroja, and pharmacologists Louis Lewin and Erich Hesse. Many medieval and early modern writers also comment on the use of hallucinogenic plants in witches' ointments, including Joseph Glanvill, Jordanes de Bergamo, Sieur de Beauvoys de Chauvincourt, Martin Delrio, Raphael Holinshed, Andrés Laguna, Johannes Nider, Sieur Jean de Nynald, Henry Boguet, Giovanni Porta, Nicholas Rémy, Bartolommeo Spina, Richard Verstegan, Johann Vincent and Pedro Ciruelo.
Much of the knowledge of herbalism in European witchcraft comes from the Spanish Inquisitors and other authorities, who occasionally recognized the psychological nature of the "witches' flight", but more often considered the effects of witches' ointments to be demonic or satanic.
Use patterns
Decoctions of deliriant nightshades (such as henbane, belladonna, mandrake, or datura) were used in European witchcraft. All of these plants contain hallucinogenic alkaloids of the tropane family, including hyoscyamine, atropine and scopolamine—the last of which is unique in that it can be absorbed through the skin. These concoctions are described in the literature variously as brews, salves, ointments, philtres, oils, and unguents. Ointments were mainly applied by rubbing on the skin, especially in sensitive areas—underarms, the pubic region, the mucous membranes of the vagina and anus, or on areas rubbed raw ahead of time. They were often first applied to a "vehicle" to be "ridden" (an object such as a broom, pitchfork, basket, or animal skin that was rubbed against sensitive skin). All of these concoctions were made and used for the purpose of giving the witch special abilities to commune with spirits, gain love, harm enemies, experience euphoria and sexual pleasure, and—importantly—to "fly to the witches' Sabbath".
In art and literature
Witches have a long history of being depicted in art, although most of their earliest artistic depictions seem to date from the late Medieval and Renaissance periods. Many scholars attribute their manifestation in art to the inspiration of texts such as the Canon Episcopi, a demonology-centered work of literature, and the Malleus Maleficarum, a "witch-craze" manual published in 1487 by Heinrich Kramer and Jacob Sprenger. Witches in fiction span a wide array of characterizations. They are typically, but not always, female, and generally depicted as either villains or heroines.
Further reading
Barry, Jonathan, Marianne Hester, and Gareth Roberts, eds. Witchcraft in early modern Europe: studies in culture and belief (Cambridge UP, 1998).
Brauner, Sigrid. Fearless wives and frightened shrews: the construction of the witch in early modern Germany (Univ of Massachusetts Press, 2001).
Briggs, Robin. Witches & neighbours: the social and cultural context of European witchcraft (Viking, 1996).
Clark, Stuart. Thinking with demons: the idea of witchcraft in early modern Europe (Oxford University Press, 1999).
Even-Ezra, A., “Cursus: an early thirteenth century source for nocturnal flights and ointments in the work of Roland of Cremona,” Magic, Ritual and Witchcraft 12/2 (Winter 2017), 314–330.
Gaskill, Malcolm. "Masculinity and Witchcraft in Seventeenth-century England." In Witchcraft and Masculinities in Early Modern Europe, edited by Alison Rowlands, 171–190. New York: Palgrave-Macmillan, 2009.
Gouges, Linnea de. Witch Hunts and State Building in Early Modern Europe (Nisus Publications, 2017).
Helvin, N. (2019). Slavic Witchcraft: Old World Conjuring Spells and Folklore. Inner Traditions/Bear.
Henderson, Lizanne. "Witch-Hunting and Witch Belief in the Gàidhealtachd." In Witchcraft and Belief in Early Modern Scotland, eds. Julian Goodare, Lauren Martin and Joyce Miller. Basingstoke: Palgrave Macmillan, 2007.
Hutton, R. (2006). Witches, Druids and King Arthur. Bloomsbury Academic.
Martin, Lois. The History Of Witchcraft: Paganism, Spells, Wicca and more. (Oldcastle Books, 2015), popular history.
Monter, E. William. "The historiography of European witchcraft: progress and prospects". journal of interdisciplinary history 2#4 (1972): 435–451. in JSTOR.
Monter, E. William. Witchcraft in France and Switzerland: the Borderlands during the Reformation (Cornell University Press, 1976).
Notestein, Wallace. A history of witchcraft in England from 1558 to 1718. New York: Crowell, 1968.
Pentikainen, Juha. "Marina Takalo as an Individual." JSTOR. 26 Feb. 2007.
Pentikainen, Juha. "The Supernatural Experience." JSTOR. 26 Feb. 2007.
Scarre, Geoffrey, and John Callow. Witchcraft and magic in sixteenth-and seventeenth-century Europe (Palgrave Macmillan, 2001).
Stark, Ryan J. "Demonic Eloquence", in Rhetoric, Science, and Magic in Seventeenth-Century England (Washington, DC: The Catholic University of America Press, 2009), 115–45.
Waite, Gary K. Heresy, Magic and Witchcraft in early modern Europe (Palgrave Macmillan, 2003).
Worobec, Christine D. "Witchcraft Beliefs and Practices in Prerevolutionary Russian and Ukrainian Villages." JSTOR. 27 Feb. 2007.
External links
Wicca, Witchcraft or Paganism? at Learnreligions.com
Witchcraft and Wicca at the CUNY Academic Commons
University of Edinburgh's Scottish witchcraft database
Sanskritisation | Sanskritisation (or Sanskritization) is a term in sociology which refers to the process by which castes or tribes placed lower in the caste hierarchy seek upward mobility by emulating the rituals and practices of the dominant castes or upper castes. It is a process similar to "passing" in sociological terms. This term was made popular by Indian sociologist M. N. Srinivas in the 1950s. Sanskritisation has in particular been observed among mid-ranked members of caste-based social hierarchies.
In a broader sense, also called Brahmanisation, it is a historical process in which local Indian religious traditions become syncretised, or aligned to and absorbed within the Brahmanical religion, resulting in the pan-Indian religion of Hinduism.
Definition
Srinivas defined Sanskritisation as a process by which a "low" Hindu caste, or a tribal or other group, changes its customs, rituals, ideology, and way of life in the direction of a high, and frequently "twice-born", caste.
In a broader sense, Sanskritisation is "the process whereby local or regional forms of culture and religion – local deities, rituals, literary genres – become identified with the great tradition of Sanskrit literature and culture: namely the culture and religion of orthodox, Aryan, Brahmans, which accepts the Veda as revelation and, generally, adheres to varṇāśrama-dharma".
In this process, local traditions (little traditions) become integrated into the great tradition of Brahmanical religion, disseminating Sanskrit texts and Brahmanical ideas throughout India, and abroad. This facilitated the development of the Hindu synthesis, in which the Brahmanical tradition absorbed local popular traditions of ritual and ideology.
According to Srinivas, Sanskritisation is not just the adoption of new customs and habits, but also includes exposure to new ideas and values appearing in Sanskrit literature. He says the words karma, dharma, papa, maya, samsara and moksha are the most common Sanskrit theological ideas that become common in the talk of people who are sanskritised.
Development
Srinivas first propounded this theory in his D.Phil. thesis at Oxford. The thesis was later brought out as a book, an ethnographical study of the Kodava (Coorgs) community of Karnataka.
The book challenged the then prevalent idea that caste was a rigid and unchanging institution. The concept of Sanskritisation addressed the actual complexity and fluidity of caste relations. It brought into academic focus the dynamics of the renegotiation of status by various castes and communities in India.
According to Christophe Jaffrelot, a similar heuristic was previously described by Ambedkar (1916, 1917). Jaffrelot goes on to say, "While the term was coined by Srinivas, the process itself had been described by colonial administrators such as E. T. Atkinson in his Himalayan Gazetteer and Alfred Lyall, in whose works Ambedkar might well have encountered it."
Virginius Xaxa notes that sometimes the anthropologists also use the term Kshatriyisation and Rajputisation in place of Sanskritisation.
Examples
Sanskritisation is often aimed at claiming the varna status of Brahmin or Kshatriya, the two prestigious varnas of the Vedic-age varna system. One of the main examples is that of various non-elite pastoral communities, such as the Ahir, Gopa, Ahar and Goala, who adopted the name Yadav as part of a Sanskritisation effort to gain upward social mobility from the late 19th to the early 20th century. Similar attempts were made, from the late 19th century onwards, by communities historically classed as non-elite tillers, such as the Kurmi, Koeri and Murao, whose caste organisations claimed higher social status. The Kalwar caste was traditionally involved in the distillation and sale of liquor, but around the start of the 20th century various organisations of the caste sought to redefine the image of their community through this process.
Another example in North India is that of the Rajputs. According to historical evidence, the present-day Rajput community varies greatly in status, ranging from those with royal lineage to those whose ancestors were petty tenants or tribals who gained land and political power to justify their claim to being Kshatriya.
One clear example of Sanskritisation is the adoption, in emulation of the practice of twice-born castes, of vegetarianism by people belonging to the so-called low castes who are traditionally not averse to non-vegetarian food.
A further example is that of the Hindu Jats of rural North India, who undertook Sanskritisation with the help of the Arya Samaj as part of a social upliftment effort.
An unsuccessful example is the Vishwakarma caste's claim to Brahmin status, which is not generally accepted outside that community, despite their adoption of some Brahmin caste traits, such as wearing the sacred thread, and the Brahminisation of their rituals. Srinivas juxtaposed the success of the Lingayat caste in achieving advancement within Karnataka society by such means with the failure of the Vishwakarma to achieve the same. Their position as a left-hand caste has not aided their ambition.
Srinivas was of the view that Sanskritisation was not limited to the Hindu castes, and stated that semi-tribal groups such as the Pahadis of the Himalayas, the Gonds and Oraons of central India, and the Bhils of western India also underwent Sanskritisation. He further suggested that, after going through Sanskritisation, such tribes would claim to be castes and hence Hindus.
This phenomenon has also been observed in Nepal among Khas, Magar, Newar, and Tharu people.
Reception
The theory has been critiqued by the sociologist Yogendra Singh.
See also
De-Sanskritisation
Acculturation
Battle for Sanskrit
Islamization
Kshatriyas and would-be Kshatriyas
List of Sanskrit universities in India
Panini (grammarian)
Sanskrit cinema
Sanskrit studies
Shiksha
Gentry | Gentry (from Old French , from ) are "well-born, genteel and well-bred people" of high social class, especially in the past. Gentry, in its widest connotation, refers to people of good social position connected to landed estates (see manorialism), upper levels of the clergy, or "gentle" families of long descent who in some cases never obtained the official right to bear a coat of arms. The gentry largely consisted of landowners who could live entirely from rental income or at least had a country estate; some were gentleman farmers. In the United Kingdom, the term gentry refers to the landed gentry: the majority of the land-owning social class who typically had a coat of arms but did not have a peerage. The adjective "patrician" ("of or like a person of high social rank") describes in comparison other analogous traditional social elite strata based in cities, such as the free cities of Italy (Venice and Genoa) and the free imperial cities of Germany, Switzerland, and the Hanseatic League. The concept of gentries have often been one of aristocracy and often under autocratic hierarchies.
The term "gentry" by itself, so Peter Coss argues, is a construct that historians have applied loosely to rather different societies. Any particular model may not fit a specific society, but some scholars prefer a single, unified term.
Historical background of social stratification in the West
The Indo-Europeans who settled Europe, Central and Western Asia and the Indian subcontinent conceived their societies to be ordered (not divided) in a tripartite fashion, the three parts being castes. Castes came to be further divided, perhaps as a result of greater specialisation.
The "classic" formulation of the caste system as largely described by Georges Dumézil was that of a priestly or religiously occupied caste, a warrior caste, and a worker caste. Dumézil divided the Proto-Indo-Europeans into three categories: sovereignty, military, and productivity (see Trifunctional hypothesis). He further subdivided sovereignty into two distinct and complementary sub-parts. One part was formal, juridical, and priestly, but rooted in this world. The other was powerful, unpredictable, and also priestly, but rooted in the "other", the supernatural and spiritual world. The second main division was connected with the use of force, the military, and war. Finally, there was a third group, ruled by the other two, whose role was productivity: herding, farming, and crafts.
This system of caste roles can be seen in the castes which flourished on the Indian subcontinent and amongst the Italic peoples.
Examples of the Indo-European castes:
Indo-Iranian – Brahmin/Athravan, Kshatriyas/Rathaestar, Vaishyas
Celtic – Druids, Equites, Plebes (according to Julius Caesar)
Slavic – Volkhvs, Voin, Krestyanin/Smerd
Nordic – Earl, Churl, Thrall (according to the Lay of Rig)
Anglo-Saxon – Gebedmen (prayer-men), Fyrdmen (army-men), Weorcmen (workmen) (according to Alfred the Great)
Greece (Attica) – Eupatridae, Geomori, Demiourgoi
Greece (Sparta) – Homoioi, Perioeci, Helots
Kings were born out of the warrior or noble class, and sometimes the priesthood class, like in India.
Medieval Christendom
Emperor Constantine convoked the First Council of Nicaea in 325 whose Nicene Creed included belief in "one holy catholic and apostolic Church". Emperor Theodosius I made Nicene Christianity the state church of the Roman Empire with the Edict of Thessalonica of 380 that allowed it to happen.
After the fall of the Western Roman Empire in the 5th century, there emerged no single powerful secular government in the West, but there was a central ecclesiastical power in Rome, the Catholic Church. In this power vacuum, the Church rose to become the dominant power in the West for the start of this time period.
In essence, the earliest vision of Christendom was a vision of a Christian theocracy, a government founded upon and upholding Christian values, whose institutions are spread through and over with Christian doctrine. The Catholic Church's peak of authority over all European Christians and their common endeavours of the Christian community—for example, the Crusades, the fight against the Moors in the Iberian Peninsula and against the Ottomans in the Balkans—helped to develop a sense of communal identity against the obstacle of Europe's deep political divisions.
The classical heritage flourished throughout the Middle Ages in both the Byzantine Greek East and Latin West. In Plato's ideal state there are three major classes (producers, auxiliaries and guardians), which was representative of the idea of the "tripartite soul", which is expressive of three functions or capacities of the human soul: "appetites" (or "passions"), "the spirited element" and "reason" the part that must guide the soul to truth. Will Durant made a convincing case that certain prominent features of Plato's ideal community were discernible in the organization, dogma and effectiveness of "the" Medieval Church in Europe:
For a thousand years Europe was ruled by an order of guardians considerably like that which was visioned by our philosopher. During the Middle Ages it was customary to classify the population of Christendom into laboratores (workers), bellatores (soldiers), and oratores (clergy). The last group, though small in number, monopolized the instruments and opportunities of culture, and ruled with almost unlimited sway half of the most powerful continent on the globe. The clergy, like Plato's guardians, were placed in authority ... by their talent as shown in ecclesiastical studies and administration, by their disposition to a life of meditation and simplicity, and ... by the influence of their relatives with the powers of state and church. In the latter half of the period in which they ruled [AD 800 onwards], the clergy were as free from family cares as even Plato could desire [for such guardians] ... [Clerical] Celibacy was part of the psychological structure of the power of the clergy; for on the one hand they were unimpeded by the narrowing egoism of the family, and on the other their apparent superiority to the call of the flesh added to the awe in which lay sinners held them. ...
Beyond the fact that clerical celibacy functioned as a spiritual discipline, it was also a guarantor of the independence of the Church. Gaetano Mosca wrote on the same subject matter in his book The Ruling Class, concerning the Medieval Church and its structure, that
the Catholic Church has always aspired to a preponderant share in political power, it has never been able to monopolize it entirely, because of two traits, chiefly, that are basic in its structure. Celibacy has generally been required of the clergy and of monks. Therefore no real dynasties of abbots and bishops have ever been able to establish themselves. ... Secondly, in spite of numerous examples to the contrary supplied by the warlike Middle Ages, the ecclesiastical calling has by its very nature never been strictly compatible with the bearing of arms. The precept that exhorts the Church to abhor bloodshed has never dropped completely out of sight, and in relatively tranquil and orderly times it has always been very much to the fore.
Two principal estates of the realm
The fundamental social structure in Europe in the Middle Ages was the division between the ecclesiastical hierarchy, the nobles, i.e. the tenants in chivalry (counts, barons, knights, esquires, franklins), and the ignobles: the villeins, citizens, and burgesses. The division of society into classes of nobles and ignobles was inexact in the smaller regions of medieval Europe. After the Protestant Reformation, social intermingling between the noble class and the hereditary clerical upper class became a feature in the monarchies of the Nordic countries.
The gentility was primarily formed on the basis of the medieval societies' two higher estates of the realm, nobility and clergy, both exempted from taxation. Subsequent "gentle" families of long descent who never obtained official rights to bear a coat of arms were also admitted to the rural upper-class society: the gentry.
The three estates
The widespread three estates order was particularly characteristic of France:
The first estate included all clergy, that is, members of both the higher and the lower clergy.
The second estate was the nobility; it did not matter whether its members came from the lower or the higher nobility, or whether they were impoverished.
The third estate included all nominally free citizens and, in some places, free peasants.
At the top of the pyramid were the princes and the estates of the king or emperor, or, in the case of the clergy, the bishops and the pope.
The feudal system was, for the people of the Middle Ages and early modern period, fitted into a God-given order. The nobility and the third estate were born into their class, and change in social position was slow. Wealth had little influence on what estate one belonged to. The exception was the Medieval Church, which was the only institution where competent men (and women) of merit could reach, in one lifetime, the highest positions in society.
The first estate comprised the entire clergy, traditionally divided into "higher" and "lower" clergy. Although there was no formal demarcation between the two categories, the upper clergy were, effectively, clerical nobility, from the families of the second estate or as in the case of Cardinal Wolsey, from more humble backgrounds.
The second estate was the nobility. Being wealthy or influential did not automatically make one a noble, and not all nobles were wealthy and influential (aristocratic families have lost their fortunes in various ways, and the concept of the "poor nobleman" is almost as old as nobility itself). Countries without a feudal tradition did not have a nobility as such.
The nobility of a person might be either inherited or earned. Nobility in its most general and strict sense is an acknowledged preeminence that is hereditary: legitimate descendants (or all male descendants, in some societies) of nobles are nobles, unless explicitly stripped of the privilege. The terms aristocrat and aristocracy are a less formal means to refer to persons belonging to this social milieu.
Historically in some cultures, members of an upper class often did not have to work for a living, as they were supported by earned or inherited investments (often real estate), although members of the upper class may have had less actual money than merchants. Upper-class status commonly derived from the social position of one's family and not from one's own achievements or wealth. Much of the population that comprised the upper class consisted of aristocrats, ruling families, titled people, and religious hierarchs. These people were usually born into their status, and historically, there was not much movement across class boundaries. This is to say that it was much harder for an individual to move up in class simply because of the structure of society.
In many countries, the term upper class was intimately associated with hereditary land ownership and titles. Political power was often in the hands of the landowners in many pre-industrial societies (which was one of the causes of the French Revolution), despite there being no legal barriers to land ownership for other social classes. Power began to shift from upper-class landed families to the general population in the early modern age, leading to marital alliances between the two groups, providing the foundation for the modern upper classes in the West. Upper-class landowners in Europe were often also members of the titled nobility, though not necessarily: the prevalence of titles of nobility varied widely from country to country. Some upper classes were almost entirely untitled, for example, the Szlachta of the Polish-Lithuanian Commonwealth.
Before the Age of Absolutism, institutions, such as the church, legislatures, or social elites, restrained monarchical power. Absolutism was characterized by the ending of feudal partitioning, consolidation of power with the monarch, rise of state, rise of professional standing armies, professional bureaucracies, the codification of state laws, and the rise of ideologies that justify the absolutist monarchy. Hence, Absolutism was made possible by new innovations and characterized as a phenomenon of Early Modern Europe, rather than that of the Middle Ages, where the clergy and nobility counterbalanced as a result of mutual rivalry.
Gentries
Continental Europe
Baltic
From the middle of the 1860s the privileged position of Baltic Germans in the Russian Empire began to waver. Already during the reign of Nicholas I (1825–55), who was under pressure from Russian nationalists, some sporadic steps had been taken towards the russification of the provinces. Later, the Baltic Germans faced fierce attacks from the Russian nationalist press, which accused the Baltic aristocracy of separatism, and advocated closer linguistic and administrative integration with Russia.
Social division was based on the dominance of the Baltic Germans, who formed the upper classes, while the majority of the indigenous population, called "Undeutsch", composed the peasantry. In the Imperial census of 1897, 98,573 Germans (7.58% of the total population) lived in the Governorate of Livonia, 51,017 (7.57%) in the Governorate of Courland, and 16,037 (3.89%) in the Governorate of Estonia. The social changes brought by the emancipation, both social and national, of the Estonians and Latvians were not taken seriously by the Baltic German gentry. The provisional government of Russia after the 1917 revolution gave the Estonians and Latvians self-governance, which meant the end of the Baltic German era in the Baltics.
The Lithuanian gentry consisted mainly of Lithuanians, but due to strong ties to Poland, had been culturally Polonized. After the Union of Lublin in 1569, they became less distinguishable from Polish szlachta, although preserved Lithuanian national awareness.
Kingdom of Hungary
In Hungary during the late 19th and early 20th century gentry (sometimes spelled as dzsentri) were nobility without land who often sought employment as civil servants, army officers, or went into politics.
Polish–Lithuanian Commonwealth
In the history of the Polish–Lithuanian Commonwealth, "gentry" is often used in English to describe the Polish landed gentry (, from ziemia, "land"). They were the lesser members of the nobility (the szlachta), contrasting with the much smaller but more powerful group of "magnate" families (sing. magnat, plural magnaci in Polish), the Magnates of Poland and Lithuania. Compared to the situation in England and some other parts of Europe, these two parts of the overall "nobility" to a large extent operated as different classes, and were often in conflict. After the Partitions of Poland, at least in the stereotypes of 19th-century nationalist lore, the magnates often made themselves at home in the capitals and courts of the partitioning powers, while the gentry remained on their estates, keeping the national culture alive.
From the 15th century, only the szlachta, and a few patrician burghers from some cities, were allowed to own rural estates of any size, as part of the very extensive szlachta privileges. These restrictions were reduced or removed after the Partitions of Poland, and commoner landowners began to emerge. By the 19th century, there were at least 60,000 szlachta families, most rather poor, and many no longer owning land. By then the "gentry" included many non-noble landowners.
Spain and Portugal
For the Spanish nobility and the former Portuguese nobility, see hidalgos and infanzones.
Swedish
In Sweden, there was no outright serfdom. Hence, the gentry was a class of well-off citizens that had grown from the wealthier or more powerful members of the peasantry. The two historically legally privileged classes in Sweden were the Swedish nobility (Adeln), a rather small group numerically, and the clergy, which were part of the so-called frälse (a classification defined by tax exemptions and representation in the diet).
The Archbishop of Uppsala stood at the head of the Swedish clergy from 1164. The clergy encompassed almost all the educated men of the day and was furthermore strengthened by considerable wealth, and thus it came naturally to play a significant political role. Until the Reformation, the clergy was the first estate, but was relegated to the secular estate in Protestant Northern Europe.
In the Middle Ages, celibacy in the Catholic Church had been a natural barrier to the formation of a hereditary priestly class. After compulsory celibacy was abolished in Sweden during the Reformation, the formation of a hereditary priestly class became possible, whereby wealth and clerical positions were frequently inheritable. Hence the bishops and the vicars, who formed the clerical upper class, would frequently have manors similar to those of the other country gentry. The medieval Church's legacy of intermingling between the noble class and the clerical upper class, and of intermarriage between them, thus continued as a distinctive element in several Nordic countries after the Reformation.
Surnames in Sweden can be traced to the 15th century, when they were first used by the Gentry (Frälse), i.e., priests and nobles. The names of these were usually in Swedish, Latin, German or Greek.
The adoption of Latin names was first used by the Catholic clergy in the 15th century. The given name was preceded by Herr (Sir), such as Herr Lars, Herr Olof, Herr Hans, followed by a Latinized form of patronymic names, e.g., Lars Petersson Latinized as Laurentius Petri. Starting from the time of the Reformation, the Latinized form of their birthplace (Laurentius Petri Gothus, from Östergötland) became a common naming practice for the clergy.
In the 17th and 18th centuries, the surname was only rarely the original family name of the ennobled; usually, a more imposing new name was chosen. This was a period which produced a myriad of two-word Swedish-language family names for the nobility (very favored prefixes were Adler, "eagle"; Ehren – "ära", "honor"; Silfver, "silver"; and Gyllen, "golden"). The regular difference from Britain was that the new name became the surname of the whole house, and the old surname was dropped altogether.
Ukraine
The Western Ukrainian Clergy of the Ukrainian Greek Catholic Church were a hereditary tight-knit social caste that dominated western Ukrainian society from the late eighteenth until the mid-20th centuries, following the reforms instituted by Joseph II, Emperor of Austria. Because, like their Eastern Orthodox brethren, Ukrainian Catholic priests could marry, they were able to establish "priestly dynasties", often associated with specific regions, for many generations. Numbering approximately 2,000–2,500 by the 19th century, priestly families tended to marry within their group, constituting a tight-knit hereditary caste. In the absence of a significant native nobility and enjoying a virtual monopoly on education and wealth within western Ukrainian society, the clergy came to form that group's native aristocracy. The clergy adopted Austria's role for them as bringers of culture and education to the Ukrainian countryside. Most Ukrainian social and political movements in Austrian-controlled territory emerged or were highly influenced by the clergy themselves or by their children. This influence was so great that western Ukrainians were accused of wanting to create a theocracy in western Ukraine by their Polish rivals. The central role played by the Ukrainian clergy or their children in western Ukrainian society would weaken somewhat at the end of the 19th century but would continue until the mid-20th century.
United States
The American gentry were rich landowning members of the American upper class in the colonial South.
The Colonial American use of gentry was not common. Historians use it to refer to rich landowners in the South before 1776. Typically large scale landowners rented out farms to white tenant farmers. North of Maryland, there were few large comparable rural estates, except in the Dutch domains in the Hudson Valley of New York.
Great Britain
The British upper classes consist of two sometimes overlapping entities, the peerage and landed gentry. In the British peerage, only the senior family member (typically the eldest son) inherits a substantive title (duke, marquess, earl, viscount, baron); these are referred to as peers or lords. The rest of the nobility form part of the "landed gentry" (abbreviated "gentry"). The members of the gentry usually bear no titles but can be described as esquire or gentleman. Exceptions are the eldest sons of peers, who bear their fathers' inferior titles as "courtesy titles" (but for Parliamentary purposes count as commoners), Scottish barons (who bear the designation Baron of X after their name) and baronets (a title corresponding to a hereditary knighthood). Scottish lairds do not have a title of nobility but may have a description of their lands in the form of a territorial designation that forms part of their name.
The landed gentry is a traditional British social class consisting of gentlemen in the original sense; that is, those who owned land in the form of country estates to such an extent that they were not required to actively work, except in an administrative capacity on their own lands. The estates were often (but not always) made up of tenanted farms, in which case the gentleman could live entirely off rent income. Gentlemen, ranking below esquires and above yeomen, form the lowest rank of British nobility. It is the lowest rank to which the descendants of a Knight, Baronet or Peer can sink. Strictly speaking, anybody with officially matriculated English or Scottish arms is a gentleman and thus noble.
The term landed gentry, although originally used to mean nobility, came to be used for the lesser nobility in England around 1540. Once identical, these terms eventually became complementary. The term gentry by itself, as commonly used by historians, according to Peter Coss, is a construct applied loosely to rather different societies. Any particular model may not fit a specific society, yet a single definition nevertheless remains desirable. Titles, while often considered central to the upper class, are not strictly so. Both Captain Mark Phillips and Vice Admiral Sir Timothy Laurence, the respective first and second husbands of HRH Princess Anne, lacked any rank of peerage at the time of their marriage to Princess Anne. However, the backgrounds of both men were considered to be essentially patrician, and they were thus deemed suitable husbands for a princess.
Esquire (abbreviated Esq.) is a term derived from the Old French word "escuier" (which also gave equerry) and is in the United Kingdom the second-lowest designation for a nobleman, referring only to males, and used to denote a high but indeterminate social status. The most common occurrence of the term Esquire today is its conferral as the suffix Esq. in order to pay an informal compliment to a male recipient by way of implying gentle birth. In the post-medieval world, the title of esquire came to apply to all men of the higher landed gentry; an esquire ranked socially above a gentleman but below a knight. In the modern world, where all men are assumed to be gentlemen, the term has often been extended (albeit only in very formal writing) to all men without any higher title. It is used post-nominally, usually in abbreviated form (for example, "Thomas Smith, Esq.").
A knight could refer to either a medieval tenant who gave military service as a mounted man-at-arms to a feudal landholder, or a medieval gentleman-soldier, usually high-born, raised by a sovereign to privileged military status after training as a page and squire (for a contemporary reference, see British honours system). In formal protocol, Sir is the correct styling for a knight or for a baronet, used with (one of) the knight's given name(s) or full name, but not with the surname alone. The equivalent for a woman who holds the title in her own right is Dame; for such women, the title Dame is used like Sir for a man, never before the surname on its own. This usage was devised in 1917, derived from the practice, current up to the 17th century (and still found in legal proceedings), of so styling the wife of a knight. The wife of a knight or baronet is now styled "Lady [husband's surname]".
Historiography
The "Storm over the gentry" was a major historiographical debate among scholars that took place in the 1940s and 1950s regarding the role of the gentry in causing the English Civil War of the 17th century. R. H. Tawney had suggested in 1941 that there was a major economic crisis for the nobility in the 16th and 17th centuries, and that the rapidly rising gentry class was demanding a share of power. When the aristocracy resisted, Tawney argued, the gentry launched the civil war. After heated debate, historians generally concluded that the role of the gentry was not especially important.
Irish
East Asia
China
The 'four divisions of society' refers to the model of society in ancient China, a meritocratic social class system in China and in other societies subsequently influenced by Confucianism. The four castes—gentry, farmers, artisans and merchants—combine to form the term Shìnónggōngshāng (士農工商).
Gentry (士) means different things in different countries. In China, Korea, and Vietnam, it denoted the Confucian scholar-gentry that would – for the most part – make up most of the bureaucracy. This caste comprised both the more-or-less hereditary aristocracy and the meritocratic scholars who rose through the ranks by public service and, later, by imperial examinations. Some sources, such as Xunzi, list farmers before the gentry, based on the Confucian view that they directly contributed to the welfare of the state. In China, the farmer lifestyle is also closely linked with the ideals of Confucian gentlemen.
In Japan, this caste essentially equates to the samurai class.
Hierarchical structure of Feudal Japan
There were two leading classes constituting the gentry in the time of feudal Japan: the daimyō and the samurai. The Confucian ideals in Japanese culture emphasised the importance of productive members of society, so farmers and fishermen were considered of a higher status than merchants.
In the Edo period, with the creation of the Domains (han) under the rule of Tokugawa Ieyasu, all land was confiscated and reissued as fiefdoms to the daimyōs.
The small lords were ordered either to give up their swords and rights and remain on their lands as peasants, or to move to the castle cities to become paid retainers of the daimyōs. Only a few samurai were allowed to remain in the countryside. Some 5 per cent of the population were samurai. Only the samurai could have proper surnames, something that after the Meiji Restoration became compulsory for all inhabitants (see Japanese name).
Emperor Meiji abolished the samurai's right to be the only armed force in favor of a more modern, Western-style, conscripted army in 1873. Samurai became Shizoku, but the right to wear a katana in public was eventually abolished along with the right to execute commoners who paid them disrespect.
In defining how a modern Japan should be, members of the Meiji government decided to follow in the footsteps of the United Kingdom and Germany, basing the country on the concept of noblesse oblige. Samurai were not to be a political force under the new order. The difference between the Japanese and European feudal systems was that European feudalism was grounded in Roman legal structure, while Japanese feudalism had Chinese Confucian morality as its basis.
Korea
The Korean monarchy and the native ruling upper class existed in Korea until the end of the Japanese occupation. The system of nobility was roughly the same as that of the Chinese nobility.
As the monastic orders did during Europe's Dark Ages, Buddhist monks became the purveyors and guardians of Korea's literary traditions while documenting Korea's written history and legacies from the Silla period to the end of the Goryeo dynasty. Korean Buddhist monks also developed and used the first movable metal type printing presses in history—some 500 years before Gutenberg—to print ancient Buddhist texts. Buddhist monks also engaged in record keeping, food storage and distribution, and exercised power by influencing the Goryeo royal court.
Ottoman Middle East
In the Ottoman Middle East, the gentry consisted of notables, or a'yan. The a'yan consisted of two groups: urban and rural gentries. Urban elites were traditionally made up of city-dwelling merchants (tujjar), clerics ('ulema), ashraf, military officers, and governmental functionaries. The rural notability's ranks included rural sheikhs and village or clan mukhtars. Most notables originated in, and belonged to, the fellahin (peasantry) class, forming a lower-echelon land-owning gentry in the Empire's post-Tanzimat countryside and emergent towns. In Palestine, rural notables formed the majority of Palestinian elites, although certainly not the richest. Rural notables took advantage of changing legal, administrative and political conditions, and global economic realities, to achieve ascendancy through households, marriage alliances and networks of patronage. Overall, they played a leading role in the development of modern Palestine and other countries well into the late 20th century.
Values and traditions
Military and clerical
Historically, the nobles in Europe became soldiers; the aristocracy in Europe can trace their origins to military leaders from the migration period and the Middle Ages. For many years, the British Army, together with the Church, was seen as the ideal career for the younger sons of the aristocracy. Although now much diminished, the practice has not totally disappeared. Such practices are not unique to the British either geographically or historically. As a very practical form of displaying patriotism, it has been at times fashionable for "gentlemen" to participate in the military.
The fundamental idea of gentry had come to be that of the essential superiority of the fighting man, usually maintained in the granting of arms. At the last, the wearing of a sword on all occasions was the outward and visible sign of a "gentleman"; the custom survives in the sword worn with "court dress". A suggestion that a gentleman must have a coat of arms was vigorously advanced by certain 19th- and 20th-century heraldists, notably Arthur Charles Fox-Davies in England and Thomas Innes of Learney in Scotland. The significance of a right to a coat of arms was that it was definitive proof of the status of gentleman, but it recognised rather than conferred such a status, and the status could be and frequently was accepted without a right to a coat of arms.
Chivalry
Chivalry is a term related to the medieval institution of knighthood. It is usually associated with ideals of knightly virtues, honour and courtly love.
Christianity had a modifying influence on the virtues of chivalry, with limits placed on knights to protect and honour the weaker members of society and maintain peace. The church became more tolerant of war in the defence of faith, espousing theories of the just war. In the 11th century, the concept of a "knight of Christ" (miles Christi) gained currency in France, Spain and Italy. These concepts of "religious chivalry" were further elaborated in the era of the Crusades.
In the later Middle Ages, wealthy merchants strove to adopt chivalric attitudes. This was a democratisation of chivalry, leading to a new genre, the courtesy book: guides to the behaviour of "gentlemen".
When examining medieval literature, chivalry can be classified into three basic but overlapping areas:
Duties to countrymen and fellow Christians
Duties to God
Duties to women
These three areas obviously overlap quite frequently in chivalry and are often indistinguishable. Another classification of chivalry divides it into warrior, religious and courtly love strands. One particular similarity between all three of these categories is honour. Honour is the foundational and guiding principle of chivalry. Thus, for the knight, honour would be one of the guides of action.
Gentleman
The term gentleman (from Latin gentilis, belonging to a race or gens, and "man", cognate with the French word gentilhomme, the Spanish gentilhombre and the Italian gentil uomo or gentiluomo), in its original and strict signification, denoted a man of good family, analogous to the Latin generosus (its invariable translation in English-Latin documents). In this sense the word equates with the French gentilhomme ("nobleman"), which was in Great Britain long confined to the peerage. The term gentry (from the Old French genterise for gentelise) has much of the social-class significance of the French noblesse or of the German Adel, but without the strict technical requirements of those traditions (such as quarters of nobility). To a degree, gentleman signified a man with an income derived from landed property, a legacy or some other source and was thus independently wealthy and did not need to work.
Confucianism
The Far East also held similar ideas to the West of what a gentleman is, which are based on Confucian principles. The term Jūnzǐ (君子) is a term crucial to classical Confucianism. Literally meaning "son of a ruler", "prince" or "noble", the ideal of a "gentleman", "proper man", "exemplary person", or "perfect man" is that for which Confucianism exhorts all people to strive. A succinct description of the "perfect man" is one who "combine[s] the qualities of saint, scholar, and gentleman" (CE). A hereditary elitism was bound up with the concept, and gentlemen were expected to act as moral guides to the rest of society. They were to:
cultivate themselves morally;
participate in the correct performance of ritual;
show filial piety and loyalty where these are due; and
cultivate humaneness.
The opposite of the Jūnzǐ was the Xiǎorén (小人), literally "small person" or "petty person". Like English "small", the word in this context in Chinese can mean petty in mind and heart, narrowly self-interested, greedy, superficial, and materialistic.
Noblesse oblige
The idea of noblesse oblige, "nobility obliges", among gentry is, as the Oxford English Dictionary expresses it, that the term "suggests noble ancestry constrains to honorable behaviour; privilege entails responsibility". Being a noble meant that one had responsibilities to lead, manage and so on. One was not simply to spend one's time in idle pursuits.
Heraldry
A coat of arms is a heraldic device dating to the 12th century in Europe. It was originally a cloth tunic worn over or in place of armour to establish identity in battle. The coat of arms is drawn with heraldic rules for a person, family or organisation. Family coats of arms were originally derived from personal ones, which then became extended in time to the whole family. In Scotland, family coats of arms are still personal ones and are mainly used by the head of the family. In heraldry, a person entitled to a coat of arms is an armiger, and their family would be armigerous.
Ecclesiastical heraldry
Ecclesiastical heraldry is the tradition of heraldry developed by Christian clergy. Initially used to mark documents, ecclesiastical heraldry evolved as a system for identifying people and dioceses. It is most formalised within the Catholic Church, where most bishops, including the pope, have a personal coat of arms. Clergy in Anglican, Lutheran, Eastern Catholic, and Orthodox churches follow similar customs.
See also
American gentry
Aristocracy
Cabang Atas
Gentlewoman
Grand Burgher
Habitus
Hanseaten (class)
Landed gentry
Nobility
Patrician (ancient Rome)
Gentleman farmer
Principalía
Priyayi
Redorer son blason
Bildungsbürgertum
Social environment
Symbolic capital
Szlachta
Yeoman
Notes
References
Further reading
Great Britain
Acheson, Eric. A gentry community: Leicestershire in the fifteenth century, c. 1422–c. 1485 (Cambridge University Press, 2003).
Butler, Joan. Landed Gentry (1954)
Coss, Peter R. The origins of the English gentry (2005) online
Heal, Felicity. The gentry in England and Wales, 1500–1700 (1994) online.
Mingay, Gordon E. The Gentry: The Rise and Fall of a Ruling Class (1976) online
O'Hart, John. The Irish And Anglo-Irish Landed Gentry, When Cromwell Came to Ireland: or, a Supplement to Irish Pedigrees (2 vols) (reprinted 2007)
Sayer, M. J. English Nobility: The Gentry, the Heralds and the Continental Context (Norwich, 1979)
Wallis, Patrick, and Cliff Webb. "The education and training of gentry sons in early modern England." Social History 36.1 (2011): 36–53. online
Europe
Eatwell, Roger, ed. European political cultures (Routledge, 2002).
Jones, Michael ed. Gentry and Lesser Nobility in Late Medieval Europe (1986) online.
Lieven, Dominic C.B. The aristocracy in Europe, 1815–1914 (Macmillan, 1992).
Wallerstein, Immanuel. The modern world-system I: Capitalist agriculture and the origins of the European world-economy in the sixteenth century. Vol. 1 (Univ of California Press, 2011).
Wasson, Ellis. Aristocracy and the modern world (Macmillan International Higher Education, 2006), for 19th and 20th centuries
Historiography
Hexter, Jack H. Reappraisals in history: New views on history and society in early modern Europe (1961), emphasis on England.
MacDonald, William W. "English Historians Repeating Themselves: The Refining of the Whig Interpretation of the English Revolution and Civil War." Journal of Thought (1972): 166–175. online
Tawney, R. H. "The rise of the gentry, 1558–1640." Economic History Review 11.1 (1941): 1–38. online; launched a historiographical debate
Tawney, R. H. "The rise of the gentry: a postscript." Economic History Review 7.1 (1954): 91–97. online
China
Bastid-Bruguiere, Marianne. "Currents of social change." The Cambridge History of China 11.2 1800–1911 (1980): pp. 536–571.
Brook, Timothy. Praying for power: Buddhism and the formation of gentry society in late-Ming China (Brill, 2020).
Chang, Chung-li. The Chinese gentry: studies on their role in nineteenth-century Chinese society (1955) online
Chuzo, Ichiko. "The role of the gentry: an hypothesis." in China in Revolution: The First Phase, 1900–1913, ed. by Mary C. Wright (1968), pp. 297–317.
Miller, Harry. State versus Gentry in Late Ming Dynasty China, 1572–1644 (Springer, 2008).
External links
High society (social class)
Nobility
Identity formation | Identity formation, also called identity development or identity construction, is a complex process in which humans develop a clear and unique view of themselves and of their identity.
Self-concept, personality development, and values are all closely related to identity formation. Individuation is also a critical part of identity formation. Continuity and inner unity are hallmarks of healthy identity formation, while a disruption in either could be viewed and labeled as abnormal development; certain situations, like childhood trauma, can contribute to abnormal development. Specific factors also play a role in identity formation, such as race, ethnicity, and spirituality.
The concept of personal continuity, or personal identity, refers to an individual posing questions about themselves that challenge their original perception, like "Who am I?" The process defines individuals to others and themselves. Various factors make up a person's actual identity, including a sense of continuity, a sense of uniqueness from others, and a sense of affiliation based on their membership in various groups like family, ethnicity, and occupation. These group identities demonstrate the human need for affiliation or for people to define themselves in the eyes of others and themselves.
Identities are formed on many levels. The micro-level is self-definition, relations with people, and issues as seen from a personal or individual perspective. The meso-level pertains to how identities are viewed, formed, and questioned by immediate communities and/or families. The macro-level concerns the connections among individuals and issues from a national perspective. The global level connects individuals, issues, and groups at a worldwide level.
Identity is often described as finite and consisting of separate and distinct parts (e.g., family, cultural, personal, professional).
Theories
Many theories of development have aspects of identity formation included in them. Three theories directly address the process of identity formation: Erik Erikson's stages of psychosocial development (specifically the Identity versus Role Confusion stage), James Marcia's identity status theory, and Jeffrey Arnett's theories of identity formation in emerging adulthood.
Erikson's theory of identity vs. role confusion
Erikson's theory is that people experience different crises or conflicts throughout their lives in eight stages. Each stage occurs at a certain point in life and must be successfully resolved to progress to the next stage. The particular stage relevant to identity formation takes place during adolescence: Identity versus Role Confusion.
The Identity versus Role Confusion stage involves adolescents trying to figure out who they are in order to form a basic identity that they will build on throughout their life, especially concerning social and occupational identities. They ask themselves the existential questions: "Who am I?" and "What can I be?" They face the complexities of determining one's own identity. Erikson stated that this crisis is resolved with identity achievement, the point at which an individual has extensively considered various goals and values, accepting some and rejecting others, and understands who they are as a unique person. When an adolescent attains identity achievement, they are ready to enter the next stage of Erikson's theory, Intimacy versus Isolation, where they will form strong friendships and a sense of companionship with others.
If the Identity versus Role Confusion crisis is not positively resolved, an adolescent will face confusion about future plans, particularly their roles in adulthood. Failure to form one's own identity leads to failure to form a shared identity with others, which can lead to instability in many areas as an adult. The identity formation stage of Erik Erikson's theory of psychosocial development is a crucial stage in life.
Marcia's identity status theory
Marcia created a structural interview designed to classify adolescents into one of four statuses of identity. The statuses are used to describe and pinpoint the progression of an adolescent's identity formation process. In Marcia's theory, identity is operationally defined as whether an individual has explored various alternatives and made firm commitments to an occupation, religion, sexual orientation, and a set of political values.
The four identity statuses in James Marcia's theory are:
Identity Diffusion (also known as Role Confusion): The opposite of identity achievement. The individual has not yet resolved their identity crisis, failing to commit to any goals or values or to establish a future life direction. In adolescents, this stage is characterized by disorganized thinking, procrastination, and avoidance of issues and actions.
Identity Foreclosure: This occurs when teenagers conform to an identity without exploring what suits them best. For instance, teenagers might follow the values and roles of their parents or cultural norms. They might also foreclose on a negative identity, or the direct opposite of their parents' values or cultural norms.
Identity Moratorium: This postpones identity achievement by providing temporary shelter. This status provides opportunities for exploration, either in breadth or in-depth. Examples of moratoria common in American society include college or the military.
Identity Achievement: This status is attained when the person has solved the identity issues by making commitments to goals, beliefs, and values after an extensive exploration of different areas.
Jeffrey Arnett's Theories on Identity Formation in Emerging Adulthood
Jeffrey Arnett's theory states that identity formation is most prominent in emerging adulthood, the period from ages 18 to 25. Arnett holds that identity formation consists of exploring different life opportunities and possibilities in order to eventually make important life decisions. He believes this phase of life includes a broad range of opportunities for identity formation, specifically in three different realms.
These three realms of identity exploration are:
Love: In emerging adulthood, individuals explore love to find a profound sense of intimacy. While trying to find love, individuals often explore their identity by focusing on questions such as: "Given the kind of person I am, what kind of person do I wish to have as a partner through life?"
Work: The work opportunities people get involved in are now centered around the idea that they are preparing for careers they might have throughout adulthood. Individuals explore their identity by asking themselves questions such as: "What kind of work am I good at?", "What kind of work would I find satisfying for the long term?", or "What are my chances of getting a job in the field that seems to suit me best?"
Worldviews: It is common for those in the stage of emerging adulthood to attend college. There they may be exposed to worldviews different from those they were raised with and become open to altering their previous worldviews. Individuals who do not attend college also believe that, as adults, they should decide for themselves what their beliefs and values are.
Self-concept
Self-concept, or self-identity, is the set of beliefs and ideas an individual has about themselves. Self-concept is different from self-consciousness, which is an awareness of one's self. Components of the self-concept include physical, psychological, and social attributes, which can be influenced by the individual's attitudes, habits, beliefs, and ideas; they cannot be condensed into the general concepts of self-image or self-esteem. Multiple types of identity come together within an individual and can be broken down into the following: cultural identity, professional identity, ethnic and national identity, religious identity, gender identity, and disability identity.
Cultural identity
Cultural identity is the set of ideas an individual adopts based on the culture they belong to. Cultural identity relates to but is not synonymous with identity politics. There are modern questions of culture that are transferred into questions of identity. Historical culture also influences individual identity, and as with modern cultural identity, individuals may pick and choose aspects of cultural identity, while rejecting or disowning other associated ideas.
Professional identity
Professional identity is the identification with a profession, exhibited by an aligning of roles, responsibilities, values, and ethical standards as accepted by the profession.
In business, professional identity is the professional self-concept that is founded upon attributes, values, and experiences. A professional identity is developed when there is a philosophy that is manifested in a distinct corporate culture – the corporate personality. A business professional is a person in a profession with certain types of skills that sometimes require formal training or education.
Career development encompasses the psychological, sociological, educational, physical, economic, and chance dimensions that alter a person's career practice across the lifespan. Career development also refers to the practices of a company or organization that enhance someone's career or encourage them to make practical career choices.
Training is a form of identity setting, since it affects not only knowledge but also a team member's self-concept. Knowledge of the position, on the other hand, introduces a path of less effort to the trainee, which prolongs the effects of training and promotes a stronger self-concept. Other forms of identity setting in an organization include business cards, role-specific benefits, and task forwarding.
Ethnic and national identity
An ethnic identity is an identification with a certain ethnicity, usually on the basis of a presumed common genealogy or ancestry. Recognition by others as a distinct ethnic group is often a contributing factor to developing this identity. Ethnic groups are also often united by common cultural, behavioral, linguistic, ritualistic, or religious traits.
Processes that result in the emergence of such identification are summarized as ethnogenesis. Various cultural studies and social theory investigate the question of cultural and ethnic identities. Cultural identity adheres to location, gender, race, history, nationality, sexual orientation, religious beliefs, and ethnicity.
National identity is an ethical and philosophical concept whereby all humans are divided into groups called nations. Members of a "nation" share a common identity and usually a common origin, in the sense of ancestry, parentage, or descent.
Religious identity
A religious identity is the set of beliefs and practices generally held by an individual, involving adherence to codified beliefs and rituals and study of ancestral or cultural traditions, writings, history, mythology, and faith and mystical experience. Religious identity refers to the personal practices related to communal faith, along with rituals and communication stemming from such conviction. This identity formation begins with association with the parents' religious contacts, and individuation requires that the person choose the same or a different religious identity from that of their parents.
Gender identity
In sociology, gender identity describes the gender with which a person identifies (i.e., whether one perceives oneself to be a man, a woman, or outside of the gender binary), but can also be used to refer to the gender that other people attribute to the individual on the basis of what they know from gender role indications (social behavior, clothing, hairstyle, etc.). Gender identity may be affected by a variety of social structures, including the person's ethnic group, employment status, religion or irreligion, and family. It can also be biological in the sense of puberty.
Disability identity
Disability identity refers to the particular disabilities that an individual identifies with. This may be something as obvious as a paraplegic person identifying as such, or something less prominent such as a deaf person regarding themselves as part of a local, national, or global community of Deaf culture.
Disability identity is almost always determined by the particular disabilities that an individual is born with, though it may change later in life if an individual later becomes disabled or when an individual later discovers a previously overlooked disability (particularly applicable to mental disorders). In some rare cases, it may be influenced by exposure to disabled people as with body integrity dysphoria.
Political identity
Political identities often form the basis of public claims and mobilization of material and other resources for collective action. One theory that explores how this occurs is social movement theory. According to Charles Tilly, the interpretation of our relationships to others ("stories") creates the rationale and construct of political identity. The capacity for action is constrained by material resources and sometimes by perceptions, which can be manipulated using communication strategies that support the creation of illusory ties.
Interpersonal identity development
Interpersonal identity development comes from Marcia's Identity Status Theory, and refers to friendship, dating, gender roles, and recreation as tools to maturity in a psychosocial aspect of an individual.
Social relation can refer to a multitude of social interactions regulated by social norms between two or more people, with each having a social position and performing a social role. In a sociological hierarchy, social relation is more advanced than behavior, action, social behavior, social action, social contact, and social interaction. It forms the basis of concepts like social organization, social structure, social movement, and social system.
Interpersonal identity development is composed of three elements:
Categorization: Assigning everyone into categories.
Identification: Associating others with certain groups.
Comparison: Comparing groups.
Interpersonal identity development allows an individual to question and examine various personality elements, such as ideas, beliefs, and behaviors. The actions or thoughts of others create social influences that change an individual. Examples of social influence can be seen in socialization and peer pressure, which can affect a person's behavior, thinking about one's self, and subsequent acceptance or rejection of how other people attempt to influence the individual. Interpersonal identity development occurs during exploratory self-analysis and self-evaluation, and ends at various times to establish an easy-to-understand and consolidative sense of self or identity.
Interaction
During interpersonal identity development, an exchange of propositions and counter-propositions occurs, resulting in a qualitative transformation of the individual. The aim of interpersonal identity development is to resolve the undifferentiated facets of an individual, which are found to be indistinguishable from others. Given this, and with other admissions, the individual is led to a contradiction between the self and others, which forces the withdrawal of the undifferentiated self as truth. To resolve the incongruence, the person integrates or rejects the encountered elements, which results in a new identity. During each of these exchanges, the individual must resolve the exchange before facing future ones. The exchanges are endless since the changing world constantly presents exchanges between individuals and thus allows individuals to redefine themselves constantly.
Collective identity
Collective identity is a sense of belonging to a group (the collective). If it is strong, an individual who identifies with the group will dedicate their life to the group over their individual identity: they will defend the views of the group and take risks for the group, often with little to no incentive or coercion. Collective identity often forms through a shared sense of interest, affiliation, or adversity. The cohesiveness of the collective identity goes beyond the community, as the collective experiences grief from the loss of a member.
Social support
Individuals gain a social identity and group identity from their affiliations in various groups, which include: family, ethnicity, education and occupational status, friendship, dating, and religion.
Family
One of the most important affiliations is that of the family, whether they be biological, extended, or even adoptive families. Each has its own influence on identity through the interaction that takes place between the family members and with the individual. Researchers and theorists state that an individual's identity (more specifically an adolescent's identity) is influenced by the people around them and the environment in which they live. If a family lacks integration, it is likely to cause identity diffusion (one of James Marcia's four identity statuses, in which an individual has not made commitments and does not try to make them); this applies to both males and females.
Peer relationships
Morgan and Korobov performed a study to analyze the influence of same-sex friendships on the development of one's identity. The study used 24 same-sex college student friendship triads, 12 male and 12 female, for a total of 72 participants. Each triad was required to have known each other for a minimum of six months. A qualitative method was chosen, as it is the most appropriate for assessing the development of identity. Semi-structured group interviews took place, where the students were asked to reflect on stories and experiences concerning relationship problems. The results showed five common responses when assessing these relationship problems: joking about the relationship's problems, providing support, offering advice, relating others' experiences to their own similar experiences, and providing encouragement. The study concluded that adolescents actively construct their identities through common themes of conversation between same-sex friendships, in this case involving relationship issues. The common themes of conversation that close peers engage in appear to help further their identity formation in life.
Influences on identity
Cognitive influences
Cognitive development influences identity formation. When adolescents are able to think abstractly and reason logically, they have an easier time exploring and contemplating possible identities. Adolescents with advanced cognitive development and maturity tend to resolve identity issues more readily than age-mates who are less cognitively developed. When identity issues are resolved more quickly and thoroughly, more time and effort can be devoted to developing that identity.
Scholastic influences
Adolescents that have a post-secondary education tend to make more concrete goals and stable occupational commitments. Going to college or university can influence identity formation in a productive way. The opposite can also be true, where identity influences education and academics. Education's effect on identity can be beneficial for the individual's identity; the individual becomes educated on different approaches and paths to take in the process of identity formation.
Sociocultural influences
Sociocultural influences are those of a broader social and historical context. For example, in the past, adolescents would likely simply adopt the occupation or religious beliefs that were expected of them or that were akin to their parents'. Today, adolescents have more resources to explore identity choices and more options for commitments. This influence is becoming less significant due to the growing acceptance of identity options that were once less accepted, and many identity options from the past are less recognized and less popular today. The changing sociocultural situation is forcing individuals to develop a unique identity based on their own aspirations. Sociocultural influences play a different role in identity formation now than they have in the past.
Parenting influences
The type of relationship that adolescents have with their parents has a significant role in identity formation. For example, when there is a solid and positive relationship between parents and adolescents, they are more likely to feel freedom in exploring identity options for themselves. A study found that for boys and girls, identity formation is positively influenced by parental involvement, specifically in the areas of support, social monitoring, and school monitoring. In contrast, when the relationship is not as close and the fear of rejection or discontentment from the parent or other guardians is present, they are more likely to feel less confident in forming a separate identity from their parents.
Cyber-socializing and the Internet
The Internet is becoming an extension of the expressive dimension of adolescence. On the Internet, youth talk about their lives and concerns, design the content that they make available to others, and assess the reactions of others to it in the form of optimized and electronically mediated social approval. When connected, youth speak of their daily routines and lives. With each post, image or video they upload, they can ask themselves who they are and try out profiles that differ from the ones they practice in the "real" world.
See also
Otium
Poverty
Workism
Self-Schema
Social theory
Social defeat
Lev Vygotsky
Social stigma
Social identity
Self-discovery
Peer pressure
Cultural identity
Erving Goffman
Religious Values
Consumer culture
Moral development
Identity performance
Wishful Identification
George Herbert Mead
In-group and out-group
Symbolic interactionism
Social comparison theory
Identification (psychology)
Identity crisis (psychology)
Genealogical bewilderment
Values (Western philosophy)
Georg Wilhelm Friedrich Hegel
References
Sources
External links
A positive approach to the identity formation of biracial children. ematusov.soe.udel.edu
Identity: An International Journal of Theory and Research. "Identity" is the official journal of the Society for Research on Identity Formation.
Social philosophy
Conceptions of self
Career development
Identity (social science)
Genealogy | Genealogy is the study of families, family history, and the tracing of their lineages. Genealogists use oral interviews, historical records, genetic analysis, and other records to obtain information about a family and to demonstrate kinship and pedigrees of its members. The results are often displayed in charts or written as narratives. The field of family history is broader than genealogy, and covers not just lineage but also family and community history and biography.
The record of genealogical work may be presented as a "genealogy", a "family history", or a "family tree". In the narrow sense, a "genealogy" or a "family tree" traces the descendants of one person, whereas a "family history" traces the ancestors of one person, but the terms are often used interchangeably. A family history may include additional biographical information, family traditions, and the like.
The pursuit of family history and origins tends to be shaped by several motives, including the desire to carve out a place for one's family in the larger historical picture, a sense of responsibility to preserve the past for future generations, and self-satisfaction in accurate storytelling. Genealogy research is also performed for scholarly or forensic purposes, or to trace legal next of kin to inherit under intestacy laws.
Overview
Amateur genealogists typically pursue their own ancestry and that of their spouses. Professional genealogists may also conduct research for others, publish books on genealogical methods, teach, or produce their own databases. They may work for companies that provide software or produce materials of use to other professionals and to amateurs. Both try to understand not just where and when people lived but also their lifestyles, biographies, and motivations. This often requires—or leads to—knowledge of antiquated laws, old political boundaries, migration trends, and historical socioeconomic or religious conditions.
Genealogists sometimes specialize in a particular group, e.g., a Scottish clan; a particular surname, such as in a one-name study; a small community, e.g., a single village or parish, such as in a one-place study; or a particular, often famous, person. Bloodlines of Salem is an example of a specialized family-history group. It welcomes members who can prove descent from a participant of the Salem Witch Trials or who simply choose to support the group.
Genealogists and family historians often join family history societies, where novices can learn from more experienced researchers. Such societies generally serve a specific geographical area. Their members may also index records to make them more accessible or engage in advocacy and other efforts to preserve public records and cemeteries. Some schools engage students in such projects as a means to reinforce lessons regarding immigration and history. Other benefits include family medical histories for families with serious medical conditions that are hereditary.
The terms "genealogy" and "family history" are often used synonymously, but some entities offer a slight difference in definition. The Society of Genealogists, while also using the terms interchangeably, describes genealogy as the "establishment of a pedigree by extracting evidence, from valid sources, of how one generation is connected to the next" and family history as "a biographical study of a genealogically proven family and of the community and country in which they lived".
Motivation
Individuals conduct genealogical research for a number of reasons.
Personal or medical interest
Private individuals research genealogy out of curiosity about their heritage. This curiosity can be particularly strong among those whose family histories were lost or unknown due to, for example, adoption or separation from family through divorce, death, or other situations. In addition to simply wanting to know more about who they are and where they came from, individuals may research their genealogy to learn about any hereditary diseases in their family history.
There is a growing interest in family history in the media as a result of advertising and television shows sponsored by large genealogy companies, such as Ancestry.com. This, coupled with easier access to online records and the affordability of DNA tests, has both inspired curiosity and allowed those who are curious to easily start investigating their ancestry.
Community or religious obligation
In communitarian societies, one's identity is defined as much by one's kin network as by individual achievement, and the question "Who are you?" would be answered by a description of father, mother, and tribe. New Zealand Māori, for example, learn whakapapa (genealogies) to discover who they are.
Family history plays a part in the practice of some religious belief systems. For example, The Church of Jesus Christ of Latter-day Saints (LDS Church) has a doctrine of baptism for the dead, which necessitates that members of that faith engage in family history research.
In East Asian countries that were historically shaped by Confucianism, many people follow a practice of ancestor worship as well as genealogical record-keeping. Ancestors' names are inscribed on tablets and placed in shrines, where rituals are performed. Genealogies are also recorded in genealogy books. This practice is rooted in the belief that respect for one's family is a foundation for a healthy society.
Establishing identity
Royal families, both historically and in modern times, keep records of their genealogies in order to establish their right to rule and determine who will be the next sovereign. For centuries in various cultures, one's genealogy has been a source of political and social status.
Some countries and indigenous tribes allow individuals to obtain citizenship based on their genealogy. In Ireland and in Greece, for example, an individual can become a citizen if one of their grandparents was born in that country, regardless of their own or their parents' birthplace. In societies such as Australia or the United States, by the 20th century, there was growing pride in the pioneers and nation-builders. Establishing descent from these was, and is, important to lineage societies, such as the Daughters of the American Revolution and The General Society of Mayflower Descendants. Modern family history explores new sources of status, such as celebrating the resilience of families that survived generations of poverty or slavery, or the success of families in integrating across racial or national boundaries. Some family histories even emphasize links to celebrity criminals, such as the bushranger Ned Kelly in Australia.
Legal and forensic research
Lawyers involved in probate cases do genealogy to locate heirs of property.
Detectives may perform genealogical research using DNA evidence to identify victims of homicides or perpetrators of crimes.
Scholarly research
Historians and geneticists may carry out genealogical research to gain a greater understanding of specific topics in their respective fields, and some may employ professional genealogists in connection with specific aspects of their research. They also publish their research in peer-reviewed journals.
The introduction of postgraduate courses in genealogy in recent years has given genealogy more of an academic focus, with the emergence of peer-reviewed journals in this area. Scholarly genealogy is beginning to emerge as a discipline in its own right, with an increasing number of individuals who have obtained genealogical qualifications carrying out research on a diverse range of topics related to genealogy, both within academic institutions and independently.
Discrimination and persecution
In the US, the "one-drop rule" asserted that any person with even one ancestor of black ancestry ("one drop" of "black blood") was considered black. It was codified into the law of some States (e.g. the Racial Integrity Act of 1924) to reinforce racial segregation.
Genealogy was also used in Nazi Germany to determine whether a person was considered a "Jew" or a "Mischling" (Mischling Test), and whether a person was considered as "Aryan" (Ahnenpass).
History
Pre-modern genealogy
Hereditary emperors, kings and chiefs in several areas have long claimed descent from gods (thus establishing divine legitimacy). Court genealogists have preserved or invented appropriate genealogical pretensions – for example in Japan, Polynesia, and the Indo-European world from Scandinavia through ancient Greece to India.
Historically, in Western societies, genealogy focused on the kinship and descent of rulers and nobles, often arguing or demonstrating the legitimacy of claims to wealth and power. Genealogy often overlapped with heraldry, which reflected the ancestry of noble houses in their coats of arms. Modern scholars regard many claimed noble ancestries as fabrications, such as the Anglo-Saxon Chronicle's tracing of the ancestry of several English kings to the god Woden. With the coming of Christianity to northern Europe, Anglo-Saxon royal genealogies extended the kings' lines of ancestry from Woden back to the line of Biblical patriarchs: Noah and Adam. (This extension offered the side-benefit of connecting pretentious rulers with the prestigious genealogy of Jesus.)
Modern historians and genealogists may regard manufactured pseudo-genealogies with a degree of scepticism. However, the desire to find ancestral links with prominent figures from a legendary or distant past has persisted. In the United States, for example, it does no harm to establish one's links to ancestors who boarded the Mayflower. And the popularity of the genealogical hypothesis of The Holy Blood and the Holy Grail (1982) demonstrates popular interest in ancient bloodlines, however dubious.
Some family trees have been maintained for considerable periods. The family tree of Confucius has been maintained for over 2,500 years and is listed in the Guinness Book of World Records as the largest extant family tree. The fifth edition of the Confucius Genealogy was printed in 2009 by the Confucius Genealogy Compilation Committee (CGCC).
Modern times
In modern times, genealogy has become more widespread, with commoners as well as nobility researching and maintaining their family trees. Genealogy received a boost in the late 1970s with the television broadcast of Roots: The Saga of an American Family by Alex Haley. His account of his family's descent from the African tribesman Kunta Kinte inspired many others to study their own lines.
With the advent of the Internet, the number of resources readily accessible to genealogists has vastly increased, fostering an explosion of interest in the topic. Genealogy on the internet became increasingly popular starting in the early 2000s. The Internet has become a major source not only of data for genealogists but also of education and communication.
India
Some notable places where traditional genealogy records are kept include Hindu genealogy registers at Haridwar (Uttarakhand), Varanasi and Allahabad (Uttar Pradesh), Kurukshetra (Haryana), Trimbakeshwar (Maharashtra), and Chintpurni (Himachal Pradesh).
United States
Genealogical research in the United States was first systematized in the early 19th century, especially by John Farmer (1789–1838). Before Farmer's efforts, tracing one's genealogy was seen as an attempt by the American colonists to secure a measure of social standing, an aim that was counter to the new republic's egalitarian, future-oriented ideals (as outlined in the Constitution). As Fourth of July celebrations commemorating the Founding Fathers and the heroes of the Revolutionary War became increasingly popular, however, the pursuit of "antiquarianism", which focused on local history, became acceptable as a way to honor the achievements of early Americans. Farmer capitalized on the acceptability of antiquarianism to frame genealogy within the early republic's ideological framework of pride in one's American ancestors. He corresponded with other antiquarians in New England, where antiquarianism and genealogy were well established, and became a coordinator, booster, and contributor to the growing movement. In the 1820s, he and fellow antiquarians began to produce genealogical and antiquarian tracts in earnest, slowly gaining a devoted audience among the American people. Though Farmer died in 1838, his efforts led to the founding in 1845 of the New England Historic Genealogical Society (NEHGS), one of New England's oldest and most prominent organizations dedicated to the preservation of public records. NEHGS publishes the New England Historical and Genealogical Register.
The Genealogical Society of Utah, founded in 1894, later became the Family History Department of the Church of Jesus Christ of Latter-day Saints. The department's research facility, the Family History Library, which Utah.com claims as "the largest genealogical library in the world", was established to assist in tracing family lineages for special religious ceremonies which Latter-day Saints believe will seal family units together for eternity. Latter-day Saints believe that this fulfilled a biblical prophecy stating that the prophet Elijah would return to "turn the heart of the fathers to the children, and the heart of the children to their fathers." There is a network of church-operated Family History Centers all over the United States and around the world, where volunteers assist the public with tracing their ancestors ("Family History Centers", The Church of Jesus Christ of Latter-day Saints: Newsroom, accessed 2 Jul 2019). Brigham Young University offers bachelor's degree, minor, and concentration programs in Family History and is the only school in North America to offer this.
The American Society of Genealogists is the scholarly honorary society of the U.S. genealogical field. Founded by John Insley Coddington, Arthur Adams, and Meredith B. Colket Jr., in December 1940, its membership is limited to 50 living fellows. ASG has semi-annually published The Genealogist, a scholarly journal of genealogical research, since 1980. Fellows of the American Society of Genealogists, who bear the post-nominal acronym "FASG", have written some of the most notable genealogical materials of the last half-century.
Some of the most notable scholarly American genealogical journals include The American Genealogist, National Genealogical Society Quarterly, The New England Historical and Genealogical Register, The New York Genealogical and Biographical Record, and The Genealogist (David L. Greene, "Scholarly Genealogical Journals in America," The American Genealogist 61 (1985–86): 116–20).
Research process
Genealogical research is a complex process that uses historical records and sometimes genetic analysis to demonstrate kinship. Reliable conclusions are based on the quality of sources (ideally, original records), the information within those sources (ideally, primary or firsthand information), and the evidence that can be drawn, directly or indirectly, from that information. In many instances, genealogists must skillfully assemble indirect or circumstantial evidence to build a case for identity and kinship. All evidence and conclusions, together with the documentation that supports them, are then assembled to create a cohesive genealogy or family history.
Genealogists begin their research by collecting family documents and stories. This creates a foundation for documentary research, which involves examining and evaluating historical records for evidence about ancestors and other relatives, their kinship ties, and the events that occurred in their lives. As a rule, genealogists begin with the present and work backwards in time. Historical, social, and family context is essential to achieving correct identification of individuals and relationships. Source citation is also important when conducting genealogical research (Jeffry Peter La Marca, Simple Citations for Genealogical Sources, Orting: Family Roots Publishing Co., 2024). To keep track of collected material, family group sheets and pedigree charts are used. Formerly handwritten, these can now be generated by genealogical software.
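Pedigree charts conventionally use ahnentafel numbering, in which the starting person is numbered 1 and, for any person numbered n, the father is 2n and the mother is 2n + 1, so a number alone encodes a position in the tree. The following is a minimal sketch of that numbering in Python; the names and the parents table are purely hypothetical sample data.

```python
# Minimal sketch of ahnentafel (pedigree chart) numbering: the starting
# person is 1; for any person numbered n, the father is 2n and the
# mother is 2n + 1. The sample data below is hypothetical.

# Map each person to their (father, mother); None marks an unknown parent.
parents = {
    "Ada":  ("Bert", "Cora"),
    "Bert": ("Dan", "Eve"),
    "Cora": (None, "Fay"),
}

def ahnentafel(person, number=1, chart=None):
    """Walk the ancestry recursively, assigning ahnentafel numbers."""
    if chart is None:
        chart = {}
    if person is None:
        return chart
    chart[number] = person
    father, mother = parents.get(person, (None, None))
    ahnentafel(father, 2 * number, chart)      # father of n is 2n
    ahnentafel(mother, 2 * number + 1, chart)  # mother of n is 2n + 1
    return chart

for n, name in sorted(ahnentafel("Ada").items()):
    print(n, name)
# Prints: 1 Ada, 2 Bert, 3 Cora, 4 Dan, 5 Eve, 7 Fay
# (6 is skipped because Cora's father is unknown)
```

The gaps in the numbering are informative in themselves: a missing number marks exactly which ancestor in the chart is still unidentified.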
Genetic analysis
Because a person's DNA contains information that has been passed down relatively unchanged from early ancestors, analysis of DNA is sometimes used for genealogical research. Three DNA types are of particular interest. Mitochondrial DNA (mtDNA) is contained in the mitochondria of the egg cell and is passed down from a mother to all of her children, both male and female; however, only females pass it on to their children. Y-DNA is present only in males and is passed down from a father to his sons (direct male line) with only minor mutations occurring over time. Autosomal DNA (atDNA) is found in the 22 non-sex chromosomes (autosomes) and is inherited from both parents; thus, it can uncover relatives from any branch of the family. A genealogical DNA test allows two individuals to find the probability that they are, or are not, related within an estimated number of generations. Individual genetic test results are collected in databases to match people descended from a relatively recent common ancestor. See, for example, the Molecular Genealogy Research Project. Some tests are limited to either the patrilineal or the matrilineal line.
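To illustrate the arithmetic behind such estimates: expected autosomal sharing roughly halves with each additional meiosis separating two relatives, so an observed amount of shared DNA can be compared against expected values for candidate relationships. The sketch below uses rough point estimates in centimorgans (cM); real testing services rely on wide empirical ranges, and the tolerance used here is an arbitrary illustration, not a standard.

```python
# Rough sketch of how shared autosomal DNA narrows down a relationship.
# Assumes the common approximation that a parent and child share about
# 3,400 centimorgans (cM) and that expected sharing roughly halves with
# each additional meiosis. These point estimates are illustrative only.

EXPECTED_CM = {
    "parent/child": 3400,
    "full siblings": 2550,
    "grandparent, aunt/uncle, half sibling": 1700,
    "first cousin, great-grandparent": 850,
    "first cousin once removed": 425,
    "second cousin": 212,
}

def likely_relationships(shared_cm, tolerance=0.33):
    """Return relationship labels whose expected sharing falls within
    a fractional tolerance of the observed shared cM."""
    return [
        label
        for label, expected in EXPECTED_CM.items()
        if abs(shared_cm - expected) / expected <= tolerance
    ]

print(likely_relationships(880))   # ['first cousin, great-grandparent']
print(likely_relationships(1650))  # ['grandparent, aunt/uncle, half sibling']
```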
Collaboration
Most genealogy software programs can export information about persons and their relationships in a standardized format called a GEDCOM. In that format, it can be shared with other genealogists, added to databases, or converted into family web sites. Social networking service (SNS) websites allow genealogists to share data and build their family trees online. Members can upload their family trees and contact other family historians to fill in gaps in their research. In addition to SNS websites, there are other resources that encourage genealogists to connect and share information, such as rootsweb.ancestry.com and rsl.rootsweb.ancestry.com.
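GEDCOM itself is a plain-text, line-based format: each line carries a level number, an optional @cross-reference@ identifier, a tag, and a value, and family (FAM) records link individual (INDI) records together. The Python sketch below writes a minimal GEDCOM 5.5.1-style file for two fictional spouses; a strictly valid file would need a few more header fields (such as source and submitter records), so this is illustrative only.

```python
# Minimal sketch of emitting a GEDCOM 5.5.1-style file. GEDCOM is a
# line-based text format: each line is a level number, an optional
# @cross-reference@, a tag, and a value. The people below are fictional,
# and a strictly valid file would need additional header records.

records = [
    "0 HEAD",
    "1 GEDC",
    "2 VERS 5.5.1",
    "1 CHAR UTF-8",
    "0 @I1@ INDI",           # individual record
    "1 NAME John /Smith/",   # the surname goes between slashes
    "1 BIRT",
    "2 DATE 1 JAN 1850",
    "1 FAMS @F1@",           # family in which this person is a spouse
    "0 @I2@ INDI",
    "1 NAME Mary /Jones/",
    "1 FAMS @F1@",
    "0 @F1@ FAM",            # family record linking the spouses
    "1 HUSB @I1@",
    "1 WIFE @I2@",
    "0 TRLR",                # trailer marks the end of the file
]

with open("family.ged", "w", encoding="utf-8") as f:
    f.write("\n".join(records) + "\n")
```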
Volunteerism
Volunteer efforts figure prominently in genealogy. These range from the extremely informal to the highly organized.
On the informal side are the many popular and useful message boards such as Rootschat and mailing lists on particular surnames, regions, and other topics. These forums can be used to try to find relatives, request record lookups, obtain research advice, and much more. Many genealogists participate in loosely organized projects, both online and off. These collaborations take numerous forms. Some projects prepare name indexes for records, such as probate cases, and publish the indexes, either online or off. These indexes can be used as finding aids to locate original records. Other projects transcribe or abstract records. Offering record lookups for particular geographic areas is another common service. Volunteers do record lookups or take photos in their home areas for researchers who are unable to travel.
Those looking for a structured volunteer environment can join one of thousands of genealogical societies worldwide. Most societies have a unique area of focus, such as a particular surname, ethnicity, geographic area, or descendancy from participants in a given historical event. Genealogical societies are almost exclusively staffed by volunteers and may offer a broad range of services, including maintaining libraries for members' use, publishing newsletters, providing research assistance to the public, offering classes or seminars, and organizing record preservation or transcription projects.
Software
Genealogy software is used to collect, store, sort, and display genealogical data. At a minimum, genealogy software accommodates basic information about individuals, including births, marriages, and deaths. Many programs allow for additional biographical information, including occupation, residence, and notes, and most also offer a method for keeping track of the sources for each piece of evidence.
Most programs can generate basic kinship charts and reports, allow for the import of digital photographs, and support the export of data in the GEDCOM format (short for GEnealogical Data COMmunication) so that data can be shared with those using other genealogy software. More advanced features include the ability to restrict the information that is shared, usually by removing information about living people out of privacy concerns; the import of sound files; the generation of family history books, web pages and other publications; the ability to handle same-sex marriages and children born out of wedlock; searching the Internet for data; and the provision of research guidance. Programs may be geared toward a specific religion, with fields relevant to that religion, or to specific nationalities or ethnic groups, with source types relevant for those groups. Online resources involve complex programming and large databases, such as censuses.
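As an illustration of the privacy feature mentioned above, a program might withhold details of anyone who could still be living before data is shared. A minimal sketch follows; the 100-year cutoff is a common heuristic rather than a fixed standard, and the records are invented:

```python
from datetime import date

def is_probably_living(birth_year, death_year, cutoff_years=100):
    """Heuristic: no death date and born within the cutoff means living."""
    if death_year is not None:
        return False
    if birth_year is None:
        return True  # unknown: err on the side of privacy
    return date.today().year - birth_year < cutoff_years

people = [
    {"name": "John Smith", "birth": 1890, "death": 1955},
    {"name": "Jane Smith", "birth": 1980, "death": None},
]

# Replace probably-living people with a placeholder before export.
shareable = [
    p if not is_probably_living(p["birth"], p["death"])
    else {"name": "Living person", "birth": None, "death": None}
    for p in people
]
print(shareable)
```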
Records and documentation
Genealogists use a wide variety of records in their research. To effectively conduct genealogical research, it is important to understand how the records were created, what information is included in them, and how and where to access them (David Hey, The Oxford Companion to Family and Local History, 2nd ed. 2008).
List of record types
Records that are used in genealogy research include:
Vital records
Birth records
Death records
Marriage and divorce records
Adoption records
Biographies and biographical profiles (e.g. Who's Who)
Cemetery lists
Census records
Church and religious records
Baptism or christening
Brit milah or Baby naming certificates
Confirmation
Bar or bat mitzvah
Marriage
Funeral or death
Membership
City directories and telephone directories
Coroner's reports
Court records
Criminal records
Civil records
Diaries, personal letters and family Bibles
DNA tests
Emigration, immigration and naturalization records
Hereditary & lineage organization records, e.g. Daughters of the American Revolution records
Land and property records, deeds
Medical records
Military and conscription records
Newspaper articles
Obituaries
Occupational records
Oral histories
Passports
Photographs
Poorhouse, workhouse, almshouse, and asylum records
School and alumni association records
Ship passenger lists
Social Security (within the US) and pension records
Tax records
Tombstones, cemetery records, and funeral home records
Voter registration records
Wills and probate records
To keep track of their citizens, governments began keeping records of persons who were neither royalty nor nobility. In England and Germany, for example, such record keeping started with parish registers in the 16th century. As more of the population was recorded, there were sufficient records to follow a family. Major life events, such as births, marriages, and deaths, were often documented with a license, permit, or report. Genealogists locate these records in local, regional, or national offices or archives, extracting information about family relationships and recreating timelines of persons' lives.
In China, India and other Asian countries, genealogy books are used to record the names, occupations, and other information about family members, with some books dating back hundreds or even thousands of years. In the eastern Indian state of Bihar, there is a written tradition of genealogical records among Maithil Brahmins and Karna Kayasthas called "Panjis", dating to the 12th century CE. Even today these records are consulted prior to marriages.
In Ireland, genealogical records were recorded by professional families of senchaidh (historians) until as late as the mid-17th century. Perhaps the most outstanding example of this genre is Leabhar na nGenealach/The Great Book of Irish Genealogies, by Dubhaltach MacFhirbhisigh (d. 1671), published in 2004.
FamilySearch collections
The LDS Church has engaged in large-scale microfilming of records of genealogical value. Its Family History Library in Salt Lake City, Utah, houses over 2 million microfiche and microfilms of genealogically relevant material, which are also available for on-site research at over 4,500 Family History Centers worldwide.
FamilySearch's website includes many resources for genealogists: a FamilyTree database, historical records, digitized family history books, resources and indexing for African American genealogy such as slave and bank records, and a Family History Research Wiki containing research guidance articles.
Indexing ancestral information
Indexing is the process of transcribing parish records, city vital records, and other reports into a digital database for searching. Volunteers and professionals participate in the indexing process. Since 2006, the microfilm in the FamilySearch Granite Mountain vault has been in the process of being digitally scanned, made available online, and eventually indexed.
For example, after the 72-year legal limit for releasing personal information for the United States Census was reached in 2012, genealogical groups cooperated to index the 132 million residents registered in the 1940 United States Census.
Between 2006 and 2012, the FamilySearch indexing effort produced more than 1 billion searchable records.
In 2022, FamilySearch and Ancestry partnered to use artificial intelligence (AI) technology to speed the indexing of more records, beginning with the public release of the 1950 United States Census. The census index was first created by an AI trained on handwriting in old documents and then reviewed by thousands of volunteers using FamilySearch.
Record loss and preservation
Sometimes genealogical records are destroyed, whether accidentally or on purpose. In order to do thorough research, genealogists keep track of which records have been destroyed so they know when information they need may be missing. Of particular note for North American genealogy is the 1890 United States Census, which was destroyed in a fire in 1921. Although fragments survive, most of the 1890 census no longer exists. Those looking for genealogical information for families that lived in the United States in 1890 must rely on other information to fill that gap.
War is another cause of record destruction. During World War II, many European records were destroyed. Communists in China during the Cultural Revolution and in Korea during the Korean War destroyed genealogy books kept by families.
Often records are destroyed due to accident or neglect. Since genealogical records are often kept on paper and stacked in high-density storage, they are prone to fire, mold, insect damage, and eventual disintegration. Sometimes records of genealogical value are deliberately destroyed by governments or organizations because the records are considered to be unimportant or a privacy risk. Because of this, genealogists often organize efforts to preserve records that are at risk of destruction. FamilySearch has an ongoing program that assesses which useful genealogical records are at the greatest risk of being destroyed, and sends volunteers to digitize such records. In 2017, the government of Sierra Leone asked FamilySearch for help preserving its rapidly deteriorating vital records. FamilySearch has begun digitizing the records and making them available online. The Federation of Genealogical Societies also organized an effort to preserve and digitize United States War of 1812 pension records. In 2010, it began raising funds, which were contributed by genealogists around the United States and matched by Ancestry.com. The goal was reached and digitization began; the digitized records are available for free online.
Types of information
Genealogists who seek to reconstruct the lives of each ancestor consider all historical information to be "genealogical" information. Traditionally, the basic information needed to ensure correct identification of each person consists of place names, occupations, family names, first names, and dates. However, modern genealogists greatly expand this list, recognizing the need to place this information in its historical context in order to properly evaluate genealogical evidence and distinguish between same-name individuals. A great deal of information is available for British ancestry, with growing resources for other ethnic groups.
Family names
Family names are simultaneously one of the most important pieces of genealogical information, and a source of significant confusion for researchers.
In many cultures, the name of a person refers to the family to which they belong. This is called the family name, surname, or last name. Patronymics are names that identify an individual based on the father's name. For example, Marga Olafsdottir is Marga, daughter of Olaf, and Olaf Thorsson is Olaf, son of Thor. Many cultures used patronymics before surnames were adopted or came into use. The Dutch in New York, for example, used the patronymic system of names until 1687 when the advent of English rule mandated surname usage. In Iceland, patronymics are used by a majority of the population. In Denmark and Norway patronymics and farm names were generally in use through the 19th century and beyond, though surnames began to come into fashion toward the end of the 19th century in some parts of the country. Not until 1856 in Denmark and 1923 in Norway were there laws requiring surnames.
The transmission of names across generations, marriages and other relationships, and immigration may cause difficulty in genealogical research. For instance, women in many cultures have routinely used their spouse's surnames. When a woman remarried, she may have changed her name and the names of her children; changed only her name; or changed no names. Her birth name (maiden name) may be reflected in her children's middle names, in her own middle name, or dropped entirely. Children may sometimes assume stepparent, foster parent, or adoptive parent names. Because official records may reflect many kinds of surname change without explaining the underlying reason, the correct identification of a person recorded under more than one name is challenging. Immigrants to America often Americanized their names.
Surname data may be found in trade directories, census returns, birth, death, and marriage records.
Given names
Genealogical data regarding given names (first names) is subject to many of the same problems as are family names and place names. Additionally, the use of nicknames is very common. For example, Beth, Lizzie or Betty are all common for Elizabeth, and Jack, John and Jonathan may be interchanged.
Middle names provide additional information. Middle names may be inherited, follow naming customs, or be treated as part of the family name. For instance, in some Latin cultures, both the mother's family name and the father's family name are used by the children.
Historically, naming traditions existed in some places and cultures. Even in areas that tended to use naming conventions, however, they were by no means universal. Families may have used them some of the time, among some of their children, or not at all. A pattern might also be broken to name a newborn after a recently deceased sibling, aunt or uncle.
An example of a naming tradition from England, Scotland and Ireland is to name the first son after the father's father, the second son after the mother's father, and the third son after the father; and to name the first daughter after the mother's mother, the second daughter after the father's mother, and the third daughter after the mother.
Another example is in some areas of Germany, where siblings were given the same first name, often of a favourite saint or local nobility, but different second names by which they were known (Rufname). If a child died, the next child of the same gender that was born may have been given the same name. It is not uncommon that a list of a particular couple's children will show one or two names repeated.
Personal names have periods of popularity, so it is not uncommon to find many similarly named people in a generation, and even similarly named families; e.g., "William and Mary and their children David, Mary, and John".
Many names may be identified strongly with a particular gender; e.g., William for boys, and Mary for girls. Others may be ambiguous, e.g., Lee, or have only slightly variant spellings based on gender, e.g., Frances (usually female) and Francis (usually male).
Place names
While the locations of ancestors' residences and life events are core elements of the genealogist's quest, they can often be confusing. Place names may be subject to variant spellings by partially literate scribes. Locations may have identical or very similar names. For example, the village name Brockton occurs six times in the border area between the English counties of Shropshire and Staffordshire. Shifts in political borders must also be understood. Parish, county, and national borders have frequently been modified. Old records may contain references to farms and villages that have ceased to exist. When working with older records from Poland, where borders and place names have changed frequently in past centuries, a source with maps and sample records such as A Translation Guide to 19th-Century Polish-Language Civil-Registration Documents can be invaluable.
Available sources may include vital records (civil or church registration), censuses, and tax assessments. Oral tradition is also an important source, although it must be used with caution. When no source information is available for a location, circumstantial evidence may provide a probable answer based on a person's or a family's place of residence at the time of the event.
Maps and gazetteers are important sources for understanding the places researched. They show the relationship of an area to neighboring communities and may be of help in understanding migration patterns. Family tree mapping using online tools such as Google Earth (particularly when combined with historical map overlays such as those from the David Rumsey Historical Map Collection) assists in understanding the significance of geographical locations.
Dates
It is wise to exercise extreme caution with dates. Dates are more difficult to recall years after an event, and are more easily mistranscribed than other types of genealogical data. Therefore, one should determine whether the date was recorded at the time of the event or at a later date. Dates of birth in vital records or civil registrations and in church records at baptism are generally accurate because they were usually recorded near the time of the event. Family Bibles are often a source for dates, but can be written from memory long after the event. When the same ink and handwriting are used for all entries, the dates were probably written at the same time and will therefore be less reliable, since the earlier dates were probably recorded well after the event. The publication date of the Bible also provides a clue about when the dates were recorded, since they could not have been recorded at any earlier date.
People sometimes reduce their age on marriage, and those under "full age" may increase their age in order to marry or to join the armed forces. Census returns are notoriously unreliable for ages or for assuming an approximate death date. Ages over 15 in the 1841 census in the UK are rounded down to the next lower multiple of five years.
Although baptismal dates are often used to approximate birth dates, some families waited years before baptizing children, and adult baptisms are the norm in some religions. Both birth and marriage dates may have been adjusted to cover for pre-wedding pregnancies.
Calendar changes must also be considered. In 1752, England and her American colonies changed from the Julian to the Gregorian calendar. In the same year, the date the new year began was changed. Prior to 1752 it was 25 March; this was changed to 1 January. Many other European countries had already made the calendar changes before England had, sometimes centuries earlier. By 1751 there was an 11-day discrepancy between the date in England and the date in other European countries.
For further detail on the changes involved in moving from the Julian to the Gregorian calendar, see: Gregorian calendar.
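As a rough illustration of the 11-day adjustment, the sketch below converts an 18th-century English "Old Style" (Julian) date to its "New Style" (Gregorian) equivalent. This shortcut applies only while the discrepancy was 11 days (from 1700 until the 1752 changeover) and ignores the change of New Year's Day, which must be handled separately:

```python
from datetime import date, timedelta

# The Julian calendar lagged the Gregorian by 11 days during the 18th
# century; the gap grows by a day in most century years.
OLD_STYLE_GAP = timedelta(days=11)

def julian_to_gregorian_18th_c(year: int, month: int, day: int) -> date:
    """Convert an 18th-century Julian (Old Style) date to Gregorian."""
    return date(year, month, day) + OLD_STYLE_GAP

# George Washington was born 11 February 1731/32 (Old Style); once the
# year is adjusted for the old 25 March New Year, the Gregorian date is:
print(julian_to_gregorian_18th_c(1732, 2, 11))  # 1732-02-22
```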
The French Republican Calendar or French Revolutionary Calendar was a calendar proposed during the French Revolution and used by the French government for about 12 years, from late 1793 to 1805, and for 18 days in 1871 in Paris. Dates in official records from this period use the revolutionary calendar and need "translating" into the Gregorian calendar for calculating ages and the like; various websites can perform this conversion.
Occupations
Occupational information may be important to understanding an ancestor's life and for distinguishing two people with the same name. A person's occupation may have been related to his or her social status, political interest, and migration pattern. Since skilled trades are often passed from father to son, occupation may also be indirect evidence of a family relationship.
It is important to remember that a person may change occupations, and that titles change over time as well. Workers no longer fit for their primary trade often took less prestigious jobs later in life, while others moved upwards in prestige. Many unskilled ancestors had a variety of jobs depending on the season and local trade requirements. Census returns may contain some embellishment; e.g., from labourer to mason, or from journeyman to master craftsman. Names for old or unfamiliar local occupations may cause confusion if poorly legible. For example, an ostler (a keeper of horses) and a hostler (an innkeeper) could easily be confused for one another. Likewise, descriptions of such occupations may also be problematic. The perplexing description "ironer of rabbit burrows" may turn out to describe an ironer in the Bristol district named Rabbit Burrows. Several trades have regionally preferred terms. For example, "shoemaker" and "cordwainer" have the same meaning. Finally, many apparently obscure jobs are part of a larger trade community, such as watchmaking, framework knitting or gunmaking.
Occupational data may be reported in occupational licences, tax assessments, membership records of professional organizations, trade directories, census returns, and vital records (civil registration). Occupational dictionaries are available to explain many obscure and archaic trades.
Reliability of sources
Information found in historical or genealogical sources can be unreliable, and it is good practice to evaluate all sources with a critical eye. Factors influencing the reliability of genealogical information include the knowledge of the informant (or writer); the bias and mental state of the informant (or writer); the passage of time; and the potential for copying and compiling errors.
The quality of census data has been of special interest to historians, who have investigated reliability issues (Richard H. Steckel, "The Quality of Census Data for Historical Inquiry: A Research Agenda", Social Science History, vol. 15, no. 4, Winter 1991, pp. 579–599).
Knowledge of the informant
The informant is the individual who provided the recorded information. Genealogists must carefully consider who provided the information and what they knew. In many cases the informant is identified in the record itself. For example, a death certificate usually has two informants: a physician who provides information about the time and cause of death and a family member who provides the birth date, names of parents, etc.
When the informant is not identified, one can sometimes deduce information about the identity of the person by careful examination of the source. One should first consider who was alive (and nearby) when the record was created. When the informant is also the person recording the information, the handwriting can be compared to other handwriting samples.
When a source does not provide clues about the informant, genealogists should treat the source with caution. These sources can be useful if they can be compared with independent sources. For example, a census record by itself cannot be given much weight because the informant is unknown. However, when censuses for several years concur on a piece of information that would not likely be guessed by a neighbor, it is likely that the information in these censuses was provided by a family member or other informed person. On the other hand, information in a single census cannot be confirmed by information in an undocumented compiled genealogy since the genealogy may have used the census record as its source and might therefore be dependent on the same misinformed individual.
Motivation of the informant
Even individuals who had knowledge of the facts sometimes intentionally or unintentionally provided false or misleading information. A person may have lied in order to obtain a government benefit (such as a military pension), avoid taxation, or cover up an embarrassing situation (such as the existence of a non-marital child). A person in a distressed state of mind may not be able to accurately recall information. Many genealogical records were recorded at the time of a loved one's death, and so genealogists should consider the effect that grief may have had on the informant of these records.
The effect of time
The passage of time often affects a person's ability to recall information. Therefore, as a general rule, data recorded soon after the event are usually more reliable than data recorded many years later. However, some types of data are more difficult to recall after many years than others. One type especially prone to recollection errors is dates. The ability to recall is also affected by the significance the event had for the individual, which may itself have been shaped by cultural or individual preferences.
Copying and compiling errors
Genealogists must consider the effects that copying and compiling errors may have had on the information in a source. For this reason, sources are generally divided into two categories: original and derivative. An original source is one that is not based on another source. A derivative source is information taken from another source. This distinction is important because each time a source is copied, information about the record may be lost and errors may result from the copyist misreading, mistyping, or miswriting the information. Genealogists should consider the number of times information has been copied and the types of derivation a piece of information has undergone. The types of derivatives include photocopies, transcriptions, abstracts, translations, extractions, and compilations.
In addition to copying errors, compiled sources (such as published genealogies and online pedigree databases) are susceptible to misidentification errors and incorrect conclusions based on circumstantial evidence. Identity errors usually occur when two or more individuals are assumed to be the same person. Circumstantial or indirect evidence does not explicitly answer a genealogical question, but either may be used with other sources to answer the question, suggest a probable answer, or eliminate certain possibilities. Compilers sometimes draw hasty conclusions from circumstantial evidence without sufficiently examining all available sources, without properly understanding the evidence, and without appropriately indicating the level of uncertainty.
Primary and secondary sources
In genealogical research, information can be obtained from primary or secondary sources. Primary sources are records that were made at the time of the event; for example, a death certificate would be a primary source for a person's death date and place. Secondary sources are records made days, weeks, months, or even years after an event.
Standards and ethics
Organizations that educate and certify genealogists have established standards and ethical guidelines they instruct genealogists to follow.
Research standards
Genealogy research requires analyzing documents and drawing conclusions based on the evidence provided in the available documents. Genealogists need standards to determine whether or not their evaluation of the evidence is accurate. In the past, genealogists in the United States borrowed terms from judicial law to examine evidence found in documents and how they relate to the researcher's conclusions. However, the differences between the two disciplines created a need for genealogists to develop their own standards. In 2000, the Board for Certification of Genealogists published their first manual of standards. The Genealogical Proof Standard created by the Board for Certification of Genealogists is widely distributed in seminars, workshops, and educational materials for genealogists in the United States. Other genealogical organizations around the world have created similar standards they invite genealogists to follow. Such standards provide guidelines for genealogists to evaluate their own research as well as the research of others.
Standards for genealogical research include:
Clearly document and organize findings.
Cite all sources in a specific manner so that others can locate them and properly evaluate them.
Locate all available sources that may contain information relevant to the research question.
Analyze findings thoroughly, without ignoring conflicts in records or negative evidence.
Rely on original, rather than derivative sources, wherever possible.
Use logical reasoning based on reliable sources to reach conclusions.
Acknowledge when a specific conclusion is only "possible" or "probable" rather than "proven".
Acknowledge that other records that have not yet been discovered may overturn a conclusion.
Ethical guidelines
Genealogists often handle, share, and publish sensitive information. Because of this, there is a need for ethical standards and boundaries for when information is too sensitive to be published. Historically, some genealogists have fabricated information or have otherwise been untrustworthy. Genealogical organizations around the world have outlined ethical standards as an attempt to eliminate such problems. Ethical standards adopted by various genealogical organizations include:
Respect copyright laws.
Acknowledge where one consulted another's work and do not plagiarize the work of other researchers.
Treat original records with respect and avoid causing damage to them or removing them from repositories.
Treat archives and archive staff with respect.
Protect the privacy of living individuals by not publishing or otherwise disclosing information about them without their permission.
Disclose any conflicts of interest to clients.
When doing paid research, be clear with the client about scope of research and fees involved.
Do not fabricate information or publish false or unproven information as proven.
Be sensitive about information found through genealogical research that may make the client or family members uncomfortable.
In 2015, a committee presented standards for genetic genealogy at the Salt Lake Institute of Genealogy. The standards emphasize that genealogists and testing companies should respect the privacy of clients and recognize the limits of DNA tests. They also discuss how genealogists should thoroughly document conclusions made using DNA evidence. In 2019, the Board for the Certification of Genealogists officially updated its standards and code of ethics to include standards for genetic genealogy.
See also
References
Further reading
General
Hopwood, Nick, Rebecca Flemming, and Lauren Kassell, eds. Reproduction: Antiquity to the Present Day (Cambridge UP, 2018). Illustrations. xxxv + 730 pp. Forty-four scholarly essays by historians.
British Isles
Kriesberg, Adam. "The future of access to public records? Public–private partnerships in US state and territorial archives." Archival Science 17.1 (2017): 5–25.
Continental Europe
Weiss, Volkmar. German Genealogy in Its Social and Political Context. KDP, 2020.
External links
Family History UK
Kinship and descent
Historical present
In linguistics and rhetoric, the historical present or historic present, also called dramatic present or narrative present, is the employment of the present tense instead of past tenses when narrating past events. It is typically thought to heighten the dramatic force of the narrative by describing events as if they were still unfolding, and/or by foregrounding some events relative to others.
Uses in English
In English, it is used in:
historical chronicles (listing a series of events),
fiction,
news headlines, and
everyday conversation, when recounting events as dramatized stories. In casual conversation, it is particularly common with quotative verbs such as say and go, and especially the newer quotative like.
Examples
In David Copperfield, Charles Dickens shifts from the past tense to the historical present to give a sense of immediacy, as of a recurring vision.
Novels that are written entirely in the historical present include notably John Updike's Rabbit, Run, Hilary Mantel's Wolf Hall and Margaret Atwood's The Handmaid's Tale.
In describing fiction
Summaries of the narratives (plots) of works of fiction are conventionally presented using the present tense, rather than the past tense. At any particular point of the story, as it unfolds, there is a now, and so a past and a future, so whether some event mentioned in the story is past, present, or future changes as the story progresses. The entire plot description is presented as if the story's now were a continuous present. Thus, in summarizing the plot of A Tale of Two Cities, one may write: "Dr. Manette, released after eighteen years in the Bastille, travels to England with his daughter Lucie. Years later, Charles Darnay returns to revolutionary Paris, where he is arrested and condemned; Sydney Carton takes his place at the guillotine."
In other languages
The historical present is widely used in writing about history in Latin (where it is sometimes referred to by its Latin name, praesens historicum) and in some modern European languages.
In French, the historical present is often used in journalism and in historical texts to report events in the past.
The extinct language Shasta appears to have allowed the historical present in narratives.
The New Testament, written in Koine Greek in the 1st century AD, is notable for use of the historical present, particularly in the Gospel of Mark.
See also
Future tense
Grammatical tense
Past tense
Passé simple – the historical past in French
Preterite
Uses of English verb forms
Sources
References
Grammatical tenses
Rhetorical techniques
Narrative techniques
Identitarian movement
The Identitarian movement or Identitarianism is a pan-European nationalist, ethno-nationalist, far-right political ideology asserting the right of European ethnic groups and white peoples to Western culture and territories exclusively. Originating in France as Les Identitaires ("The Identitarians"), with its youth wing Generation Identity (GI), the movement expanded to other European countries during the early 21st century. Its ideology was formulated from the 1960s onward by essayists such as Alain de Benoist, Dominique Venner, Guillaume Faye and Renaud Camus, who are considered the main ideological sources of the movement.
Identitarians promote concepts such as pan-European nationalism, localism, ethnopluralism, remigration, or the Great Replacement, and they are generally opposed to globalisation, multiculturalism, the spread of Islam and European immigration. Influenced by New Right metapolitics, they do not seek direct electoral results, but rather to provoke long-term social transformations and eventually achieve cultural hegemony and popular adherence to their ideas.
Identitarians are opposed to cultural mixing and promote the preservation of homogeneous ethno-cultural entities, generally to the exclusion of extra-European migrants and descendants of immigrants, and may espouse ideas considered xenophobic and racialist.
In 2019, the Identitarian Movement was classified by the German Federal Office for the Protection of the Constitution as right-wing extremist.
The movement is most notable in Europe, and although rooted in Western Europe, it has spread more rapidly to the eastern part of the continent through conscious efforts of the likes of Faye. It also has adherents among white nationalists in North America, Australia, and New Zealand. The United States–based Southern Poverty Law Center considers many of these organisations to be hate groups.
Origin and development
The Identitarian ideology is generally believed by scholars to be derived from the Nouvelle Droite ("New Right"), a French far-right philosophical movement formed in the 1960s to adapt traditionalist conservative, ethnopluralist and illiberal politics to a post-WWII European context and to distance itself from earlier far-right ideologies like fascism and Nazism, mainly through a form of pan-European nationalism. The Nouvelle Droite opposes liberal democracy and capitalism, and is hostile to multiculturalism and the mixing of different cultures within a single society. Although it is not supremacist, it is racialist because it identifies Europeans as a race. Strategies and concepts promoted by Nouvelle Droite thinkers, such as ethnopluralism, localism, pan-European nationalism, and the use of metapolitics to influence public opinion, have shaped the ideological structure of the Identitarian movement.
Background
The Nouvelle Droite has widely been considered a neo-fascist attempt to legitimise far-right ideas in the political spectrum, and in some cases to recycle Nazi ideas. According to political scientist Stéphane François, the latter accusation, "though relevant in certain ways, [remains] incomplete, as it (purposely) [shuns] other references, most notably the primordial relationship to the German Conservative Revolution." The original prominence of the French nucleus gradually decreased, and a nebula of similar movements, grouped under the term "European New Right", began to emerge across the continent. Among them was the Neue Rechte of Armin Mohler, also largely inspired by the Conservative Revolution and another ideological source for the Identitarian movement. Consequently, connections have been suggested between the worldview of Martin Sellner, one of the biggest figures of the movement, and the theories of Martin Heidegger and Carl Schmitt. Leading Identitarian Daniel Friberg has likewise claimed influences from Ernst Jünger and Julius Evola.
Through their think tank GRECE, figures like Alain de Benoist and Guillaume Faye aimed to imitate Marxist metapolitics, especially the tactics of cultural hegemony, agitprop and entryism which, according to them, had allowed left-wing movements to gain cultural and academic dominance from the second part of the 20th century onward. Dominique Venner and his magazine Europe-Action, considered the "embryonic form" of the Nouvelle Droite, along with the writings of Saint-Loup, were conducive to the emergence of the Identitarian movement by redefining the idea of pan-European nationalism on the "white nation" rather than the "nation state".
Emergence
The neo-Völkisch movement Terre et Peuple, founded in 1995 by writers Pierre Vial, Jean Haudry and Jean Mabire, is generally considered a precursor of the Identitarian movement. In the early 21st century, Nouvelle Droite ideas influenced far-right youth movements in France through groups such as Jeunesses Identitaires (founded in 2002 and succeeded by Génération Identitaire in 2012) and Bloc Identitaire (2003). These French movements exported their ideas to other European nations, turning themselves into a pan-European movement of loosely connected Identitarian groups. In the 2000s and 2010s, thinkers led by Renaud Camus and Guillaume Faye, along with members of the Carrefour de l'Horloge, introduced the Great Replacement and remigration as defining concepts in the movement.
Scholar A. James McAdams has described the Identitarian movement as a "second generation" in the evolution of European far-right foundational critique of liberal democracy during the post-war era: "the first of these generations, congregated around the members of the French Nouvelle Droite (New Right), defined difference as a right ('a right to difference') to which all persons were entitled by virtue of their shared humanity. A second generation, epitomized by the pan-European Identitarian movement of the early 2000s, replaced the language of rights with the less exacting claim to respect the differences of others, especially those based on ethnicity. Finally, in response to the degeneration of Identitarian thinking into outright xenophobia and racism, a third generation of theorists emerged in the 2010s with the expressed aim of restoring the respectability of far-right thought." According to scholar Imogen Richards, "while in many respects [the movement] is characteristic of the 'European New Right' (ENR), its spokespersons' various promotion of capitalism and commodification, including through their advocacy of international trade and sale of merchandise, diverges from the anti-capitalist philosophizing of contemporary ENR thinkers."
Ideology
Definition
Identitarianism can be defined by its opposition to globalisation, multiculturalism, Islam and extra-European immigration; and by its defence of traditions, pan-European nationalism and cultural homogeneity within the nations of Europe. The concept of "identity" is central to the Identitarian movement, which sees, in the words of Guillaume Faye, "every form of [humanity’s] homogenisation [as] synonymous with death, as well as sclerosis and entropy". Scholar Stéphane François has described the essence of Identitarian ideology as "mixophobic", that is the fear of ethnic mixing.
According to philosopher Pierre-André Taguieff, the Identitarian 'party-movements' generally share the following traits: a call to an 'authentic' and 'sane' people, which a leader is claiming to embody, against illegitimate or unworthy elites; and a call for a purifying break with the supposedly 'corrupt' current system, in part achieved by 'cleaning up' the territory from elements perceived as 'non-assimilable' for cultural reasons, Muslims in particular. Following Piero Ignazi, Taguieff classifies those party-movements as a new "post-industrial" far-right, distinct from the "traditional" nostalgic far-right. Their ultimate goal is to enter mainstream politics, Taguieff argues, as "post-fascists rather than neo-fascists, [and as] post-nazis rather than neo-nazis."
Scholars have also described the essence of Identitarianism as a reaction against the permissive ideals of the '68 movement, embodied by the baby boomers and their perceived left-liberal dominance on society, which they sometimes label "Cultural Marxism".
Metapolitics
Inspired by the metapolitics of Marxist philosopher Antonio Gramsci via the Nouvelle Droite, Identitarians do not seek direct electoral results but rather to influence the wider political debate in society. Metapolitics is defined by theorist Guillaume Faye as the "social diffusion of ideas and cultural values for the sake of provoking profound, long-term, political transformation." In 2010, Daniel Friberg established the publishing house Arktos Media, which has since grown into the "uncontested global leader in the publication of English-language literature" of its kind. Some Identitarian parties have nonetheless contested elections, as in France or in Croatia, but so far with no success. Éric Zemmour, who has been described as belonging to the Identitarian movement by some scholars, won 7.1% of the votes during the 2022 French presidential election.
A key strategy of the Identitarian movement is to generate large media attention by symbolically occupying popular public spaces, often with only a handful of militants. The largest action to date, labelled "Defend Europe", occurred in 2017. After crowdsourcing more than $178,000, Identitarian militants chartered a ship in the Mediterranean Sea to ferry rescued migrants back to Africa, observe any incursions by other NGO ships into Libyan waters, and report them to the Libyan coastguard. In the event, the ship suffered an engine failure and had to be rescued by another ship from one of the NGOs rescuing migrants.
The European Identitarian movements often use a yellow lambda symbol, inspired by the shield designs of the Spartan army in the movie 300, based on the comic book by Frank Miller.
Ethnopluralism
According to ethnographer Benjamin R. Teitelbaum, Identitarians advocate "an ostensibly non-hierarchical global separatism to create a 'pluriversum', where differences among peoples are preserved and celebrated." Political scientist Jean-Yves Camus agrees and defines the movement as being centred around the concept of ethnopluralism (or 'ethno-differentialism'): "each people and culture can only flourish on its territory of origin; ethnic and cultural mixing (métissage) is seen as a factor of decadence; multiculturalism as a pathogenic project, producing crime, loss of bearings and, ultimately, the possibility of an 'ethnic war' on European lands, between 'ethnic Europeans' and non-native Maghrebi Arabs, in any case Muslims."
The pairing of Muslim immigration and Islam with the concept of ethnopluralism is indeed one of the main bases of Identitarianism, and the idea of a future ethnic war between whites and immigrants is central for some Identitarian theorists, especially Guillaume Faye, who claimed in 2016 that "the ethnic civil war, like a snake's baby that breaks the shell of its egg, [was] only in its very modest beginnings". He had earlier preached "total ethnic war" between "original" Europeans and Muslims in The Colonization of Europe in 2000, which earned him a criminal conviction for incitement to racial hatred. This emphasis on ethnicity, shared by Pierre Vial and his call to an "ethnic revolution" and a "war of liberation", is however opposed by other Identitarian thinkers and groups, with Alain de Benoist disavowing Faye's "strongly racist" ideas regarding Muslims after the publication of his 2000 book.
Identitarians generally dismiss the European Union as "corrupt" and "authoritarian", while at the same time defending a "European-level political body that can hold its own against superpowers like America and China." According to scholar Stéphane François, Identitarian geopolitics should be seen as a form of "ethnopolitics". In the Identitarian vision, the world would be structured into different "ethnospheres", each dominated by ethnically related peoples. They promote ethnic solidarities between European peoples, and the establishment of a confederation of regional identities that would eventually replace the various nation states of Europe, which are seen as an inheritance from the "dubious philosophy of the French Revolution". Influenced by Renaud Camus' Great Replacement theory, Identitarians lament an alleged disappearance of the European peoples through a drop in birth rates and uncontrolled immigration from the Muslim world.
Views on Islam and liberalism
The movement is strongly opposed to the politics and philosophy of Islam, which some critics describe as disguised Islamophobia. Followers often protest what they see as the Islamisation of Europe through mass immigration, claiming it to be a threat to European culture and society. As summarised by Markus Willinger, a key activist of the movement, "We don't want Mehmed and Mustapha to become Europeans." This theory is connected to the ideas of the Great Replacement, a conspiracy theory which claims that a global elite is colluding against the white population of Europe to replace them with non-European peoples. As a proposed solution to this debunked global conspiracy, Identitarians present mass remigration, a project of reversing growing multiculturalism through the forced mass deportation of non-European immigrants (often including their descendants) back to their supposed place of racial origin, regardless of their citizenship status. The movement has made frequent use of the term Reconquista, in reference to the expulsion of Muslims and Jewish people from the Iberian Peninsula in 1492.
Identitarians do not share, however, a common vision on liberalism. Some regard it as a part of European identity "threatened by Muslims who do not respect women or gay people", whereas others like Daniel Friberg describe it as the "disease" that contributed to Muslim immigration in the first place.
Connection to other far-right groups
The movement has been described as being a part of the global alt-right, or as the European counterpart of the American alt-right. Hope not Hate (HNH) has described Identitarianism and the alt-right as "ostensibly separate" in origin, but with "huge areas of ideological crossover". Many white nationalists and alt-right leaders have described themselves as Identitarians, and according to HNH, American alt-right influence is evident in European Identitarian groups and events, forming an amalgamated "International Alternative Right". Figures within the Identitarian movement and alt-right often cite Nouvelle Droite founder Alain de Benoist as an influence. De Benoist rejects any alt-right affiliation, although he has worked with Richard B. Spencer, and once spoke at Spencer's National Policy Institute. As Benoist stated, "Maybe people consider me their spiritual father, but I don't consider them my spiritual sons".
According to journalist Christoph Gurk, one of the goals of Identitarianism is to make racism modern and fashionable. Austrian Identitarians invited radical right-wing groups from across Europe, including several neo-Nazi groups, to participate in an anti-immigration march, according to Anna Thalhammer of Die Presse. There has also been Identitarian collaboration with the white nationalist activist Tomislav Sunić.
By location
France
The main Identitarian youth movement is Génération Identitaire in France, originally a youth wing of Bloc Identitaire before it split off in 2012 to become its own organisation. The association Terre et Peuple ("Land and People"), which represents the Völkisch leaning of the Nouvelle Droite, is seen as a precursor of the Identitarian movement. Political scientist Stéphane François estimated the size of the Identitarian movement in France at 1,500–2,000 members in 2017.
An undercover investigation by Al Jazeera Investigates into the French branch, which aired on 10 December 2018, captured GI activists punching a Muslim woman whilst saying "Fuck Mecca", and one activist saying that if he ever contracted a terminal illness he would purchase a weapon and cause carnage. When asked by the undercover journalist who the target would be, he replied "a mosque, whatever". French prosecutors launched an inquiry into the findings amidst calls for the group to be proscribed.
Génération Identitaire was banned by French authorities in March 2021.
Austria
The Identitäre Bewegung Österreich (IBÖ) was founded in 2012. They have sometimes used the concept of a "War Against the '68ers", i.e. people whose political identities are seen by Identitarians as stemming from the social changes of the 1960s, what would be called baby boomer liberals in the US.
On 27 April 2018 the IBÖ and the homes of its leaders were searched by the Austrian police, and investigations were started against Sellner on suspicion that a criminal organisation was being formed. The court later ruled that the IBÖ was not a criminal organisation.
Germany
The movement also appeared in Germany and converged with preexisting circles, centered on the magazine Blue Narcissus (Blaue Narzisse) and its founder, a martial artist and former German Karate Team Champion who, according to Gudrun Hentges – who worked for the official Federal Agency for Civic Education – belongs to the "elite of the movement". It became a "registered association" in 2014. Drawing upon thinkers of the Nouvelle Droite and the Conservative Revolution such as Oswald Spengler, Carl Schmitt or the contemporary Russian fascist Aleksandr Dugin, it played a role in the rise of the Pegida marches in 2014–15.
The Identitarian movement has a close linkage to members of the German New Right, e.g., to its prominent member Götz Kubitschek and his journal Sezession, for which the Identitarian speaker Martin Sellner writes.
In August 2016 members of the Identitarian movement in Germany scaled the iconic Brandenburg Gate in Berlin and hung a banner in protest at European immigration and perceived spread of Islam. In September of the same year, members of the Identitarian movement erected a new summit cross in a "provocative" act (as the Süddeutsche Zeitung reported) on the Schafreuter, after the original one had to be removed because of damage by an unknown person.
In June 2017, the PayPal donations account of the Identitarian "Defend Europe" campaign was locked, and the movement's bank account was closed.
On 11 July 2019, Germany's Federal Office for the Protection of the Constitution (BfV), the country's domestic intelligence agency, formally designated the Identitarian Movement as "a verified extreme right movement against the liberal democratic constitution." The new classification has allowed the BfV to use more powerful surveillance methods against the group and its youth wing, Generation Identity. The Identitarian Movement has about 600 members in Germany.
Southwest Germany alone had about 100 members, mostly in Ulm, Reutlingen, Pforzheim and Stuttgart, with 2,400 followers on Instagram; the group changed its name from Identitäre Bewegung Schwaben to 'Kesselrevolte/Schwaben Bande', then 'Wackre Schwaben', then 'Reconquista 21'.
United Kingdom
In July 2017, a Facebook page for Generation Identity UK and Ireland was created. A few months later, in October 2017, key figures of the Identitarian movement met in London in efforts to target the United Kingdom, and discussed the founding of a British chapter as a "bridge" to link with radical movements in the US. Their discussions resulted in a new British chapter being officially launched in late October 2017 with Tom Dupre and Ben Jones as its co-founders, after a banner was unfurled on Westminster Bridge reading "Defend London, Stop Islamisation".
On 9 March 2018, Sellner and his girlfriend Brittany Pettibone were barred from entering the UK because their presence was "not conducive to the public good".
Prior to the ban, Sellner had intended to deliver a speech to the Young Independence party, though it cancelled the event, citing supposed threats of violence from the far-left. Before being detained and deported, Sellner intended to deliver his speech at Speakers' Corner in Hyde Park. In June 2018, Tore Rasmussen, a Norwegian activist who had previously been denied entry to the United Kingdom, was working in Ireland to establish a local branch of Generation Identity.
In August 2018, the leader of GI UK Tom Dupre resigned from his position after UK press revealed Rasmussen, who was a senior member in the UK branch, had an active past in neo-Nazi movements within Norway.
Generation Identity UK has held conferences with other organisations, namely Identity Evropa/American Identity Movement. Identity Evropa/American Identity Movement is known for its involvement in the deadly 11–12 August 2017 Unite the Right rally in Charlottesville, Virginia, United States, and for its antisemitism. Jacob Bewick, an activist with GI, was exposed as a member of the proscribed terror organisation National Action and was spotted at an NA march in 2016. At an after-conference event, one GI UK member told a Hope not Hate informant that two members of the fascist National Front (and former NA members) were present.
The UK branch was condemned by the wider European movement on Twitter when it held its second annual conference and invited numerous controversial alt-right speakers. Speaking alongside the UK's new leader Ben Jones were alt-right YouTuber Millennial Woes and writer Tomislav Sunić.
This controversy led to a number of members leaving the organisation in disgust at what they perceived to be a shift towards the "Old Right", raising concern that the British branch might become more radicalised and dangerous. Simon Murdoch, Identitarianism researcher at Hope not Hate, said: "Evidence suggests we will be left with a smaller but more toxic group in the UK, open to engagement with the more antisemitic, extreme and thus dangerous elements of the domestic far-right."
According to Unite Against Fascism, the Identitarian Movement in the UK is estimated to have a membership of less than 200 activists as of June 2019.
Nordics
In Sweden, an organisation active from 2004 to 2010 promoted Identitarianism and founded the online encyclopedia Metapedia in 2006.
The influence of Identitarian theories has been noted in the Sweden Democrats' slogan "We are also a people!".
Other European groups
The origin of the Italian chapter Generazione Identitaria dates from 2012.
The founder of the far-right Croatian party Generation of Renovation has stated that it was originally formed in 2017 as that country's version of the alt-right and Identitarian movements.
The separatist party Som Catalans claims to defend the "identity of Catalonia" against "Spanish colonialism and the migrant invasion", as well as the "islamisation" of the Spanish autonomous community. Similar stances are also found in Spanish nationalist parties, such as Identitarios, which align themselves with the European Identity and Democracy Party.
In Belgium, in 2018, the State Security Service noted the rise of a new group in the context of Identitarian movements emerging throughout Europe. A Europol terror report mentioned Soldaten van Odin and the defunct group La Meute.
In the Netherlands, an Identitarian group was founded in 2012 with the main goal of "preservation of the national identity". Its members train at camps in France, and its protests in the Netherlands attract tens of participants.
In Flanders, Voorpost is an ethnic nationalist (volksnationalist) group founded by Karel Dillen in 1976 as a splinter from the People's Union.
Voorpost pursues an irredentist ideal of a Greater Netherlands, a nation state that would unite all Dutch-speaking territories in Europe.
The organisation has staged rallies on various topics: against Islam and mosques, against leftist organisations, against drugs, against pedophilia, and against socialism.
The Hungarian chapter, Identitesz, merged into Force and Determination in 2017.
Non-European affiliates
Australasia
There was a small group in Australia called Identity Australia around March 2019, which described itself as "a youth-focused identitarian organisation dedicated to giving European Australians a voice and restoring Australia's European character" and published a manifesto detailing its beliefs, but its website is no longer operational.
The Dingoes are an Australian group who were described in a 2016 news report as "young, educated and alternative right", and were compared to the Identitarian movement in Europe. Members do not reveal their identities. National Party MP George Christensen and One Nation candidate Mark Latham were both interviewed on the Dingoes' podcast, The Convict Report, but Christensen later said that he would not have done so had he known about their extremist views. The podcast also featured a New Zealand man who ran the Dominion Movement and was later arrested for sharing information that threatened New Zealand's security.
New Zealand had hosted the Dominion Movement, which labelled itself as "a grass-roots Identitarian activist organisation committed to the revitalisation of our country and our people: white New Zealanders". The group's website shut down, alongside that of the New Zealand National Front, in the aftermath of the Christchurch mosque shootings in March 2019. In late 2019, the Dominion Movement was largely replaced by a similar white supremacist group called Action Zealandia, after its co-founder and leader, a New Zealand soldier, was arrested for sharing information that threatened New Zealand's security.
Australian Brenton Harrison Tarrant, the perpetrator of the Christchurch mosque shootings in New Zealand, was a believer in the Great Replacement conspiracy theory, named his manifesto after it, and donated €1,500 to Austrian Identitarian leader Martin Sellner of Identitäre Bewegung Österreich (IBÖ) a year prior to the terror attacks. An investigation into the potential links between Tarrant and IBÖ was conducted by then Austrian Minister of the Interior Herbert Kickl. Other than the donation, no evidence of contact or connections between the two parties has been found. The Austrian government is considering dissolving the group. The shooter also donated €2,200 to Génération Identitaire, the French branch of Generation Identity. Tarrant exchanged emails with Sellner, in one asking if they could meet for coffee or beer in Vienna, and sent him a link to his YouTube channel. This was confirmed by Sellner, but he denied interacting with Tarrant in person or knowing of his plans. The Austrian government later opened an investigation into Sellner over suspected formation of a terrorist group with Tarrant, as well as into the former's fiancée Brittany Pettibone, who met Australian far-right figure Blair Cottrell.
North America
United States
The now-defunct neo-Nazi Traditionalist Worker Party was modelled after the European Identitarian movement, according to the Southern Poverty Law Center and the Anti-Defamation League. Identity Evropa and its successor, the American Identity Movement, label themselves Identitarian and are part of the alt-right. Richard Spencer's National Policy Institute is also a white nationalist organization, advocating an American version of Identitarianism called "American Identitarianism". The SPLC also reports that the Southern California-based Rise Above Movement "is inspired by Identitarian movements in Europe and is trying to bring the philosophies and violent tactics to the United States".
On 20 May 2017, two non-commissioned officers of the U.S. Marine Corps were arrested for trespassing after displaying a banner from a building in Graham, North Carolina, during a Confederate Memorial Day event. The banner included the Identitarian logo and the phrase "he who controls the past controls the future", a reference to George Orwell's novel Nineteen Eighty-Four, along with the initialism YWNRU, for "you will not replace us". The Marine Corps denounced the behaviour and investigated the incident; a spokesperson told local news: "Of course we condemn this type of behavior ... we condemn any type of behavior that is not congruent with our values or that is illegal." Both men pleaded guilty to trespassing; one received military administrative punishment, and the other was discharged from the Corps.
Canada
The Canadian organisation Generation Identity Canada was formed in 2014 and was renamed IDCanada in 2017. The organisation has distributed material across the country, including in Hamilton, Ontario; Saskatoon; Peterborough, Ontario; Prince Edward Island; Alberta; and Quebec.
La Meute (French for "The Pack") is a Québécois nationalist pressure group and identitarian movement that campaigns against illegal immigration and radical Islam. The group was founded in September 2015 in Quebec by two former Canadian Armed Forces members, Éric Venne and Patrick Beaudry, both of whom have since left it. La Meute announced it would prefer "to become large enough and organized enough to constitute a force that can't be ignored". The group has been attacked by anti-fascists in Montreal. Steeve Charland of Grenville, Quebec, was reported as one of the leaders of La Meute in its opposition to Canada's decision to open its borders to Syrian refugees. During the "Freedom Convoy" protests in Ottawa in 2022, Charland acted as the leader and spokesperson for the Farfadaas, a group that opposes COVID-19 health measures and whose members are recognizable by their leather vests marked with an obscene hand gesture; a parallel protest encampment was set up in Gatineau, Quebec, and Charland was arrested and charged in relation to the protests.
Critics
Political scientist Cas Mudde argued in 2021 that although Identitarians claim the slogan "0% racism, 100% identity" and officially subscribe to ethnopluralism, "the boundaries between biological and cultural arguments in the movement have become increasingly porous." A 2014 investigation led by political scientist Gudrun Hentges concluded that the Identitarian movement is ideologically situated between the French National Front and neo-Nazism.
Contemporary history
Contemporary history, in English-language historiography, is a subset of modern history that describes the historical period from about 1945 to the present. In the social sciences, contemporary history is also continuous with, and related to, the rise of postmodernity.
Contemporary history is politically dominated by the Cold War (1947–1991) between the Western Bloc, led by the United States, and the Eastern Bloc, led by the Soviet Union. The confrontation spurred fears of a nuclear war. An all-out "hot" war was avoided, but both sides intervened in the internal politics of smaller nations in their bid for global influence and via proxy wars. The Cold War ultimately ended with the Revolutions of 1989 and the dissolution of the Soviet Union in 1991. The latter stages and aftermath of the Cold War enabled the democratization of much of Europe, Africa, and Latin America. Decolonization was another important trend in Southeast Asia, the Middle East, and Africa, as new states gained independence from European colonial empires between 1945 and 1975. The Middle East also saw a conflict involving the new state of Israel, the rise of petroleum politics, the continuing prominence but later decline of Arab nationalism, and the growth of Islamism. The first supranational organizations, such as the United Nations and the European Union, emerged in the period after 1945.
Countercultures rose and the sexual revolution transformed social relations in western countries between the 1960s and 1980s, as seen in the protests of 1968. Living standards rose sharply across the developed world because of the post-war economic boom. Japan and West Germany both emerged as exceptionally strong economies. The culture of the United States spread widely, with American television and movies spreading across the world. Some Western countries began a slow process of deindustrializing in the 1970s; globalization led to the emergence of new financial and industrial centers in Asia. The Japanese economic miracle was later followed by the Four Asian Tigers of Hong Kong, Singapore, South Korea and Taiwan. China launched major economic reforms from 1979 onward, becoming a major exporter of consumer goods around the world.
Science made new advances after 1945, which included spaceflight, nuclear technology, lasers, semiconductors, molecular biology, genetics, particle physics, and the Standard Model of quantum field theory. The first commercial computers were created, followed by the Internet, beginning the Information Age.
Political history
1945–1991
In 1945, the Allies of World War II had defeated all significant opposition to them. They established the United Nations to govern international relations and disputes. A looming question was how to handle the defeated Axis nations and the shattered nations that the Axis had conquered. Following the Yalta Conference, territory was divided into occupation zones, each the responsibility of an Allied country that would manage its rebuilding. While these zones were theoretically temporary (occupied Austria, for instance, was eventually released to independence as a neutral country), growing tensions between the Western Bloc, led by the United States, and the Eastern Bloc, led by the Soviet Union, meant that many of them hardened into lasting divisions. Countries in the Soviet zones of Eastern Europe had communist regimes installed as satellite states. The Berlin Blockade of 1948, answered by the Western Berlin Airlift to supply West Berlin, signified a cooling of East-West relations. Germany split into two countries in 1949: liberal-democratic West Germany and communist East Germany. The conflict as a whole became known as the Cold War. The Western Bloc formed NATO in 1949, while the Eastern Bloc formed the Warsaw Pact in 1955. Direct combat between the new great powers was generally avoided, though proxy wars were fought in other countries by factions that each side equipped against the other's. An arms race to develop and build nuclear weapons followed, as policymakers wanted to ensure their side had more if it came to war.
In East Asia, Chiang Kai-shek's Republic of China was overthrown on the mainland in the Chinese Communist Revolution of 1945–1949. His government retreated to Taiwan, but both the nationalist KMT government and the new communist mainland government under Mao Zedong continued to claim authority over all of China. Korea was divided similarly to Germany, with the Soviet Union occupying the North and the United States occupying the South (the future North Korea and South Korea). Unlike in Germany, the conflict there turned hot, with the Korean War fought from 1950 to 1953. Korea was not reunified under either government, however, owing to strong support from the US and China for their favored sides; it became a frozen conflict instead. Japan was given a new constitution forswearing aggressive war in 1947, and the American occupation ended in 1952, although a treaty of mutual aid with the US was soon signed. The US also granted the Philippines its independence in 1946 while maintaining close relations.
The Middle East became a hotbed of instability. The new Jewish state of Israel declared its independence, recognized by both the United States and the Soviet Union, and the 1948 Arab–Israeli War followed. Egypt's weak and ineffective King Farouk was overthrown in the 1952 Egyptian revolution and eventually replaced by General Nasser; the 1953 Iran coup saw the American-friendly shah Mohammad Reza Pahlavi remove the democratic constraints on his government and take power directly; and Iraq's Western-friendly monarchy was overthrown in 1958. Nasser's Egypt went on to face the Suez Crisis in 1956, briefly unify with Syria as the United Arab Republic (UAR) from 1958 to 1961, and intervene expensively in the North Yemen civil war from 1962 to 1970.
Decolonization was the most important development across Southeast Asia and Africa from 1946 to 1975, as the old British, French, Dutch, and Portuguese colonial empires were dismantled. Many new states gained their independence but soon found themselves having to choose between allying with the Western Bloc, allying with the Eastern Bloc, or attempting to stay neutral as members of the Non-Aligned Movement. British India was granted independence in 1947 without an outright war of independence being required. It was partitioned into Hindu-majority India and Muslim-majority Pakistan (West Pakistan and East Pakistan, the future Pakistan and Bangladesh); Indo-Pakistani wars were fought in 1947, 1965, and 1971. Sukarno took control of an independent Indonesia in 1950, after attempts to reinstate Dutch rule in 1945–1949 had largely failed, and steered a non-aligned course that leaned toward the Eastern Bloc. He was later overthrown by Suharto, in power by 1968, who took a pro-Western stance. The Federation of Malaya was granted independence in 1957, amid the concurrent fighting of the Malayan Emergency against communist forces from 1948 to 1960. The French unsuccessfully fought the First Indochina War in an attempt to hold on to French Indochina; at the 1954 Geneva Conference, the new states of Cambodia, Laos, the Democratic Republic of Vietnam, and the eventual Republic of Vietnam were created. The division of Indochina eventually led to the Vietnam War of the 1960s and 70s (as well as the Laotian Civil War and the Cambodian Civil War), which ended when communist North Vietnam took Saigon in 1975.
In Africa, France fought the grinding Algerian War from 1954 to 1962, which saw the end of French Algeria and the rise of an independent Algeria. The British and French slowly released their vast holdings, leading to the creation of states such as the First Nigerian Republic in 1963. Portugal, on the other hand, fiercely held onto its empire, leading to the Portuguese Colonial War of 1961–1974 in Angola, Guinea-Bissau, and Mozambique, until the Estado Novo government fell. Meanwhile, apartheid-era South Africa remained fiercely anti-communist but withdrew from the British Commonwealth in 1961 and supported various pro-colonial factions across Africa that had lost the support of their "home" governments in Europe. Many newly independent African governments struggled to find a balance: too weak, and they were overthrown by ambitious coup-plotters; too strong, and they became dictatorships.
Latin America saw gradual economic growth but also instability in many countries, as coups and military regimes (juntas) were a constant threat. The most famous was the Cuban Revolution, which replaced Fulgencio Batista's American-friendly government with Fidel Castro's Soviet-aligned one. This led to the Cuban Missile Crisis in 1962, generally considered one of the incidents that brought the Cold War closest to direct military conflict. The 1968 Peruvian coup d'état also installed a Soviet-friendly government. Despite this, the region ultimately leaned toward the US in this period, with the CIA supporting American-friendly factions in the 1954 Guatemalan coup d'état, the 1964 Brazilian coup d'état, the 1973 Chilean coup d'état, and others. Nicaragua suffered the most, as the Nicaraguan Revolution saw major military aid from both great powers to their favored factions, extending a civil war in the country for decades. Mexico escaped this unrest, although it functioned largely as a one-party state dominated by the PRI. Argentina had a succession of idiosyncratic governments that courted both the US and the USSR but generally mismanaged the economy.
The Middle East saw events that presaged later conflicts of the 70s and 80s. A few years after the end of the UAR's union between Egypt and Syria, Syria's government was overthrown in the 1966 Syrian coup d'état and replaced by a neo-Baathist regime, eventually leading to the rule of the al-Assad family. Israel and its neighbors fought the Six-Day War in 1967 and the Yom Kippur War in 1973. Under Anwar Sadat and later Hosni Mubarak, Egypt moved away from Nasserism to favor the Western Bloc and signed a peace treaty with Israel. Lebanon, once among the most prosperous countries in the region and a cultural center, collapsed into the Lebanese Civil War, which lasted from 1975 to 1990. Iran's unpopular pro-American government was overthrown in the 1979 Iranian Revolution and replaced by a new Islamic Republic headed by Ruhollah Khomeini. Iran and Baathist Iraq under Saddam Hussein then fought the Iran–Iraq War from 1980 to 1988, which ended inconclusively.
In East Asia, China underwent the Cultural Revolution from 1966 to 1976, a major internal struggle that saw an intense program of Maoism and the persecution of perceived internal enemies. China's relations with the Soviets deteriorated in the 1960s and 70s, resulting in the Sino-Soviet split, although the two were able to cooperate on some matters. "Ping-pong diplomacy" led to a rapprochement between the US and China, and to American recognition of the Chinese communist government, in the 1970s. China's pro-democracy movement was suppressed after the 1989 Tiananmen Square protests, and China's government survived the tensions that roiled the Soviet-aligned bloc during the 1980s. South Korea (in the June Democratic Struggle) and Taiwan (with the lifting of martial law) took major steps toward liberalization in 1987–1988, shifting from Western-aligned one-party states to more fully participatory democracies.
The 1980s saw a general retreat for the communist bloc. The Soviet–Afghan War (1979–1989) is often called the "Soviet Union's Vietnam War", being an expensive and ultimately unsuccessful war and occupation. More importantly, the intervening decades had shown that Eastern Europe could not compete economically with Western Europe, undermining the promise of communist abundance against capitalist poverty. The Western capitalist economies had proven wealthier and stronger, and matching the American defense budget strained the Soviet Union's more limited resources. The Pan-European Picnic in 1989 set in motion a peaceful chain reaction that culminated in the fall of the Berlin Wall. The Revolutions of 1989 saw many countries of Eastern Europe throw off their communist governments, and the USSR declined to invade to re-establish them. East and West Germany were reunified. Client-state status for many states ended, as there was no conflict left to fund. The Malta Summit of 3 December 1989, the failure of the August 1991 coup by Soviet hardliners, and the formal dissolution of the Soviet Union on 26 December 1991 sealed the end of the Cold War.
1991–2001
The end of the Cold War left the United States as the world's sole superpower. Communism seemed discredited; while China remained an officially communist state, Deng Xiaoping's economic reforms and "socialism with Chinese characteristics" allowed for the growth of a capitalist private sector there. In Russia, President Boris Yeltsin pursued a policy of privatization, spinning off former government agencies into private corporations in an attempt to handle the budget problems inherited from the USSR. The end of Soviet foreign aid caused a variety of changes in countries previously part of the Eastern Bloc; many officially became democratic republics, though some were more accurately described as authoritarian or oligarchic republics and one-party states. Many Western commentators treated these developments optimistically; it was thought the world was steadily progressing toward free, liberal democracies. South Africa, no longer able to attract Western support by claiming to be anti-communist, ended apartheid in the early 1990s, and many Eastern European countries became stable democracies. While some Americans had anticipated a "peace dividend" from cuts to the Defense Department's budget, these cuts were not as large as some had hoped. The European Economic Community evolved into the European Union with the signing of the Maastricht Treaty in 1993, which integrated Europe across borders to a new degree.

International coalitions continued to have a role: the Gulf War saw a large international coalition undo Baathist Iraq's annexation of Kuwait, but other "police"-style actions were less successful. Somalia and Afghanistan descended into long, bloody civil wars for almost the entire decade (the Somali Civil War and the Afghan Civil Wars of 1992–1996 and 1996–2001). Russia fought a brutal war in Chechnya from 1994 to 1996 that failed to suppress the insurgency there; fighting resumed in the Second Chechen War of 1999–2000, which restored Russian control after Russia convinced enough rebels to join its cause with promises of autonomy. The breakup of Yugoslavia also led to a series of Yugoslav Wars; NATO eventually intervened in the Kosovo War. In the Middle East, the Israeli–Palestinian peace process offered many the prospect of a long-term settlement; the Oslo Accords, signed in 1993, seemed to offer a "roadmap" to resolving the conflict. These high hopes were largely dashed in 2000–2001 after a breakdown of negotiations and the Second Intifada.
2001–present
War on Terror, Afghanistan War, and Iraq War
The September 11 attacks were a series of coordinated suicide attacks by al-Qaeda upon the United States on 11 September 2001. On that morning, nineteen al-Qaeda terrorists hijacked four commercial passenger jet airliners. The hijackers intentionally crashed two of the airliners into the Twin Towers of the World Trade Center in New York City, killing everyone on board and many others working in the buildings. Both buildings collapsed within two hours, destroying nearby buildings and damaging others. The hijackers crashed a third airliner into the Pentagon in Arlington, Virginia. The fourth plane crashed into a field near Shanksville in rural Somerset County, Pennsylvania, after some of its passengers and flight crew attempted to retake control of the plane.
In response, the United States under President George W. Bush enacted the Patriot Act; many other countries also strengthened their anti-terrorism legislation and expanded the powers of law enforcement. Major terrorist events after the September 11 attacks include the 2002 Bali bombings, the 2002 Moscow theater hostage crisis, the 2003 Istanbul bombings, the 2004 Madrid train bombings, the 2004 Beslan school siege, the 2005 London bombings, the 2005 Delhi bombings, and the 2008 Mumbai attacks, most of them linked to Islamist terrorism.
The United States responded to the 11 September 2001 attacks by launching a "Global War on Terrorism", invading the Islamic Emirate of Afghanistan to depose the Taliban, who had harbored al-Qaeda terrorists. The War in Afghanistan began in late 2001, launched by the UN-authorized ISAF, with the United States and United Kingdom providing most of the troops. Bush administration policy and the Bush Doctrine stated that US forces would not distinguish between terrorist organizations and the nations or governments that harbored them. Operation Enduring Freedom (OEF) was the United States' combat operation, involving some coalition partners and operating primarily in the eastern and southern parts of the country along the Pakistan border, while the ISAF, established by the United Nations Security Council, was in charge of securing the capital, Kabul, and its surrounding areas. NATO assumed control of ISAF in 2003.
Despite initial coalition successes, the Taliban were never entirely defeated, and continued to hold territory in mountainous regions as well as threaten the new government, the Islamic Republic of Afghanistan, whose grasp on power outside the major cities was shaky at best. The war was also less successful in restricting al-Qaeda than anticipated.
The Iraq War began in March 2003 with the invasion of Iraq by a multinational force. The invasion of Iraq led to an occupation and the eventual capture of Saddam Hussein, who was later executed by the Iraqi Government. Despite government assumptions that the war in Iraq would be over with the fall of Hussein, it continued and intensified. Sectarian groups both fought each other and the occupying coalition forces via asymmetric warfare during the Iraqi insurgency, as Iraq was starkly divided between Sunni, Shia, and Kurdish groups that now competed with each other for power. Al-Qaeda operations in Iraq continued as well. In late 2008, the U.S. and Iraqi governments approved a Status of Forces Agreement effective through to the end of 2011.
The Obama administration re-focused US involvement in the conflict on withdrawing its troops from Iraq and on a surge of troops and government support in Afghanistan. In May 2011, US forces killed Osama bin Laden in a raid after he was tracked to his compound in Abbottabad, Pakistan.
In 2011, the United States declared a formal end to the Iraq War. In February 2020, President Donald Trump agreed with the Taliban to withdraw all American troops from Afghanistan over the next year. The Biden administration delayed the withdrawal by a few months, but still largely kept to the deal; the coalition-supported Afghan government soon collapsed, and the Taliban took undisputed control of the country in August 2021 after the successful 2021 Taliban offensive.
Arab Spring and Syria
The Arab Spring began in earnest in 2010 with anti-government protests in the Muslim world but quickly escalated into full-scale military conflicts in countries such as Syria, Libya, and Yemen, and also gave the opportunity for the emergence of various militant groups, including the Islamic State (IS). IS was able to take advantage of social media platforms, including Twitter, to recruit foreign fighters from around the world, and from 2013 onward it seized significant portions of territory in Iraq, Syria, Afghanistan, and the Sinai Peninsula of Egypt. On the other hand, some violent militant organizations were able to negotiate peace with governments, including the Moro Islamic Liberation Front in the Philippines in 2014. The presence of IS and the stalemate in the Syrian civil war created a migration of refugees to Europe and also galvanized and encouraged high-profile terrorist attacks and armed conflicts around the world, such as the November 2015 Paris attacks and the 2017 Siege of Marawi in the Philippines. In 2014, the United States decided to intervene against the Islamic State in Iraq, and most IS fighters were driven out by the end of 2018. Russia and Iran also jointly launched a campaign against IS and in support of Syrian President Bashar al-Assad. As of 2022, Assad had largely regained authority in the southern half of the country, while the northern reaches were controlled by a mixture of Sunni Arab rebels, Kurds, and Turkey.
Russia
Vladimir Putin, Yeltsin's successor, was very popular in Russia after his victory in the Second Chechen War. He initially portrayed himself as a corruption fighter, checking the Russian oligarchs who had acquired vast wealth during Russia's liberalization period. Through a combination of genuine popularity and legal rollbacks, Russia gradually moved toward an effectively one-party system: formally a democracy, but one in which Putin's party always won. Russia has since intervened in a series of military conflicts in neighboring countries, including the 2008 Russo-Georgian War; the 2014 Russo-Ukrainian War and annexation of Crimea; a 2015 intervention in the Syrian Civil War; and the expansion of the Russo-Ukrainian War into the full-scale 2022 Russian invasion of Ukraine, in which Russia declared its intent to depose the Ukrainian government and install a compliant, Russia-friendly one. The Russian government has often cited the enlargement of NATO as a major grievance.
Economic history
The end of World War II in 1945 saw an increase in international trade and an interconnected system of treaties and agreements to ease its flow. In particular, the United States and the US dollar took a pivotal role in the world economy, displacing the UK. The era is sometimes called the "Pax Americana" for the relative peace in the Western world resulting from the preponderance of power enjoyed by the US, by analogy to the Pax Romana established at the height of the Roman Empire. New York's financial sector ("Wall Street") dominated world finance from 1945 to 1970 to a degree unlikely to be seen again. Unlike in the aftermath of World War I, the US strongly aided the rebuilding of Europe, extending aid rather than punishment even to the defeated Axis nations. The Marshall Plan sent billions of dollars of aid to Western Europe to ensure its stability and ward off a potential economic downturn. The 1944 Bretton Woods Conference established the Bretton Woods system, a set of practices that governed world trade and currencies from 1945 to 1971, as well as the World Bank and the International Monetary Fund (IMF). Western Europe also established the European Economic Community in 1957 to ease customs and aid international trade. In general, vast quality-of-life improvements reached almost every corner of the globe during this period, in both the Western and Eastern spheres; France called these years Les Trente Glorieuses ("The Glorious Thirty [Years]"). Despite being largely destroyed in the war, West Germany bounced back to become an economic powerhouse by the 1950s, the so-called Wirtschaftswunder. Japan followed, achieving extraordinary economic growth and becoming the second-largest economy in the world by 1968, a phenomenon called the Japanese economic miracle. Many explanations have been offered for the enviable results of these years: relative peace (at least outside the "Third World"), a reduction in average family size, technological improvements, and others. The Eastern Bloc, meanwhile, established Comecon as its equivalent of the Marshall Plan and to set internal trading rules between communist states.
The 1970s brought economic headwinds. Notably, the price of oil rose in the 1970s, as the easiest and most accessible wells had already been pumped dry in the preceding century, and oil is a non-renewable resource. Attention turned to the abundant oil of the Middle East, where the countries of OPEC controlled substantial untapped reserves. Political tensions over the Yom Kippur War and the Iranian Revolution led to the 1973 and 1979 oil crises. The Soviet Union entered what it later called the "Era of Stagnation". The 1970s and 80s also saw the rise of the Four Asian Tigers, as South Korea, Taiwan, Singapore, and Hong Kong emulated the Japanese route to prosperity with varying degrees of success. In China, the leftist Gang of Four were overthrown in 1976, and Deng Xiaoping pursued a policy of tentatively opening the Chinese economy to capitalist innovations throughout the 1980s, a policy continued by his successors in the 1990s. China's economy, tiny in 1976, saw tremendous growth, eventually overtaking Japan as the world's second-largest economy in 2010. Among Western economies, the Bretton Woods system collapsed and was replaced by a more flexible era of floating exchange rates. The Group of Seven (G7) first met in 1975 and became one of the main international forums regulating trade among developed countries. The Soviet Union implemented a policy of perestroika in the 1980s, which allowed tentative market reforms. The fall of the USSR saw differing approaches in the East during the 1990s: some newly independent states, such as Estonia, went in a capitalist direction; some maintained a strong governmental presence in their economies; and some opted for a mix. The privatization of government firms and resources drew accusations of crony capitalism in many states, however, including the Russian Federation, the largest and most important successor state of the USSR; the beneficiaries of this turbulent period were often called the "Russian oligarchs".
In the early 2000s, there was a global rise in commodity and housing prices, part of the 2000s commodities boom. US mortgage-backed securities, whose risks were hard to assess, were marketed around the world, and a broad-based credit boom fed a global speculative bubble in real estate and equities. The financial situation was also affected by a sharp increase in oil and food prices. The collapse of the American housing bubble caused the values of securities tied to real-estate prices to plummet, damaging financial institutions. The Great Recession, a severe economic downturn which began in the United States in 2007, was sparked by the outbreak of the 2007–2008 financial crisis, which was in turn linked to earlier lending practices by financial institutions and the trend of securitization of American real-estate mortgages.
The Great Recession spread to much of the developed world and caused a pronounced deceleration of economic activity. The global recession occurred in an economic environment characterized by various imbalances and resulted in a sharp drop in international trade, rising unemployment, and slumping commodity prices. The recession renewed interest in Keynesian economic ideas on how to combat recessionary conditions; however, various industrialized countries continued to pursue austerity policies, cutting deficits and reducing spending, as opposed to following Keynesian prescriptions.
From late 2009, the European debt crisis unfolded: fears of a sovereign debt crisis developed among investors concerning rising government debt levels across the globe, together with a wave of downgradings of the government debt of certain European states. Concerns intensified in early 2010 and thereafter, making it difficult or impossible for some sovereigns to refinance their debts. On 9 May 2010, Europe's finance ministers approved a rescue package worth €750 billion aimed at ensuring financial stability across Europe. The European Financial Stability Facility (EFSF) was a special-purpose vehicle financed by members of the eurozone to combat the crisis. In October 2011, eurozone leaders agreed on another package of measures designed to prevent the collapse of member economies. The three most affected countries, Greece, Ireland, and Portugal, collectively accounted for six percent of the eurozone's gross domestic product (GDP). In 2012, eurozone finance ministers reached agreement on a second, €130 billion Greek bailout. In 2013, the European Union agreed to a €10 billion economic bailout for Cyprus. The coronavirus pandemic that began in 2020 caused further economic disruption, with wide-ranging impacts such as supply-chain changes and an increase in working from home, along with the COVID-19 recession.
Social history
Social changes since 1945 have been vast and disparate, affecting countries and subgroups within those countries in ways specific to each population; there is no single global story of social change. One major trend, however, has been an increasing interchange between cultures and a wider spread of the most successful works, enabled by new technology and globalization. In earlier periods, a successful musician or theater troupe might be confined to playing in a single city at a time, limiting their reach. The spread of better recording technology, such as the Magnetophon, meant that a musical act could have its songs played over the radio everywhere without loss of sound quality, creating international superstars such as Elvis Presley and The Beatles. The spread of home television sets allowed people across the globe to easily watch the same show, rather than requiring viewers to attend a local theater. Hollywood produced films that dominated cinema; while intended for the lucrative American market, these films spread across the globe, backed by their large budgets and the cinematic expertise gathered in California.

The rise of the Internet in the 1990s allowed for an ever wider spread of the most popular and dominant works, but the comparatively cheap cost of publishing there, whether as a personal website, a blog, or a YouTube video, also allowed specific niche subcultures to connect and thrive in a way that was less true in the 20th century. For example, diaspora groups of immigrants can more easily stay in contact with family and friends in their region of origin, compared to earlier eras when travel and communication were far more expensive, making a narrative of strictly increasing global homogenization incomplete. International telephone networks, and later Internet telephony, allowed cheaper and easier long-distance communication than in previous eras.
Language usage in the contemporary era has seen the rise of English as a lingua franca, with people across the world learning English as a second language, both to facilitate international communication, especially in places tied to international trade or tourism, and to consume widespread English-language media. This is tied to increased Americanization, as American culture has grown increasingly influential and widespread. To a lesser extent, something similar happened with the Russian language in the Eastern Bloc and among communist-aligned factions during the Cold War; this status was mostly reversed after the collapse of the Soviet Union. The French and German languages saw their prestige as global languages decline after World War II.
Religious trends have been disparate and not consistent across countries, often with sharply varying results even between similar and nearby groups. In industrialized and economically prosperous regions, there has been a loose trend toward secularization that deprioritized the role of religion, even among people who still identified as adherents. The decline of Christianity in the Western world has been perhaps the most notable of these trends, although many non-Western cultures have been affected as well, such as the rise of irreligion in China (buttressed by antireligious campaigns). As an example of how localized this process can be, during the Cold War both the Polish People's Republic and the Czechoslovak Socialist Republic endorsed state atheism. However, after the fall of the Iron Curtain in 1989–1990, the people of these bordering states had radically different cultural attitudes toward religion; Poland was one of the more religious states in Europe, with 96% of its population espousing a belief in Catholic Christianity in 2011, while the Czech Republic was one of the most stridently irreligious, with only 15% of its population espousing any religious beliefs at all by 2011. In the Islamic world, a notable trend has been the spread of international schools of thought into regions where belief was previously localized, such as the international propagation of Salafism and Wahhabism funded by the government of Saudi Arabia. While regional Islamic groups remain strong, they are more contested than in the past.
Another social trend has been urbanization, as a growing proportion of the world's population has moved to cities and urban areas while fewer people live in rural areas. In the United States, as the overall population more than doubled from 1930 to 1990, around a third of its counties saw their populations decline by around 27%, suggesting that as rural counties emptied, urban counties absorbed the vast majority of the growth. In Eastern Africa, the urban population soared from 11 million in 1920 to 77 million in 2010. Many rural Chinese people moved to large coastal cities such as Shenzhen to work in the 1990s and 2000s, sharply increasing urbanization in China. Rural parts of Japan have seen stark population declines, especially among the young, with only the Greater Tokyo area continuing to grow. How to deal with this change is a major issue, as many cities and their transportation networks were not designed to serve the larger populations that now occupy them.
A major trend in many industrialized nations was the sexual revolution, the adoption of publicly more tolerant attitudes toward sex and pre-marital sex. "The pill" was first approved for use in the United States in 1960 and spread rapidly around the world. It made birth control easier and more reliable than earlier methods, made sex for pleasure less likely to result in unintended children, and allowed for easier family planning, with couples able to choose more precisely when to have children. Some analysts credit this as one reason behind the decline in birth rates in the industrialized world, which had multiple second-order effects. Many jurisdictions have also made divorce much easier to obtain. However, the decline in birth rates is not a universal trend; many nations continue to have high birth rates, and the world's overall population was still growing as of 2022.
One of the still-evolving and uncertain developments of the contemporary era has been the social effect of cheap and ubiquitous Internet access. As users gradually switched from personal web pages to blogs to social media, many surprising consequences have followed, drawing both positive and negative assessments. Optimistic assessments often praise the decentralized nature of the medium, which in principle lets anyone gain a platform without convincing a publisher or media company to back them, as well as the ease with which like-minded people can collaborate at long distance, even if the digital utopianism of the 1990s is now less common. Pessimistic assessments worry about the effects on children, such as cyberbullying; about filter bubbles, in which Internet users are never challenged by outside views; about "cancel culture", in which people are pilloried online, sometimes disproportionately; and about slacktivism as an appealing but ineffective replacement for older forms of community work.
Contemporary science and technology
Energy
The growing world population and rising standards of living have caused a vast increase in demand for energy development, both to power vehicles such as personal cars and to supply public electrical grids. Petroleum in particular has been in enormous demand across the world. Many of the cheapest and most accessible sources of oil were largely drained in the 19th and early 20th centuries, leading to a hunt for new sources. The value of oil has spilled over into politics as well: "petrostates" with access to oil found a source of vast revenue that did not require traditional government revenue-raising measures, such as tariffs or income taxation. The rising cost of oil led to the 1970s energy crisis and various adaptations in energy conservation to better conserve oil, such as more efficient engines and better insulation. It has also led to concerns about "peak oil": that the rising extraction cost of oil will eventually lead to massive shortages and a large disincentive to burn oil except when absolutely necessary (as with aviation fuel), although oil continues to be one of the most popular sources of energy.
Other fossil fuels have continued to play a prominent role in the world's energy production. Coal, usually credited with helping to kickstart the Industrial Revolution, has declined somewhat in prominence, but it began the period with a commanding share of energy production. Even diminished, coal remains a popular and common fuel for power plants; it has made up a huge proportion of South Africa's and India's power grids from 1945 to the present, for example. That said, rising prices, along with concerns both over the air pollution generated when coal is burnt and over the landscape destruction where it is mined (such as mountaintop removal mining), have caused setbacks for the coal industry. Natural gas has grown its share of the market, especially as liquefied natural gas (LNG) has enabled it to be transported over longer distances than was previously feasible.
An entirely new form of energy generation dawned in the 1950s and 1960s: nuclear power for peaceful purposes, and the construction of nuclear power plants. Hopes in the 1950s that atomic energy would be "too cheap to meter" proved overly optimistic, however. Atomic energy grew to be a large part of several nations' energy strategies, notably nuclear power in France. Nuclear power remains controversial: concerns include its association with nuclear weapons, its financial cost, the disposal of radioactive nuclear waste, and fears of reactor meltdowns, especially after the 1986 Chernobyl disaster. An anti-nuclear movement skeptical of atomic energy arose and has discouraged many projects. Nuclear proponents counter that nuclear energy produces no air pollution, in contrast to traditional fossil-fuel plants, and can provide a steady supply of energy regardless of external conditions, unlike solar and wind energy. With the supply of Russian natural gas disrupted in 2022, France, for example, has looked to reactivate some of its older decommissioned nuclear plants.
Various forms of renewable energy have grown in prominence in the contemporary era. Wind energy, while used on a small scale for centuries, has grown as large distributed groups of wind turbines have been deployed to produce energy for the grid. Solar power has also grown in prominence, accounting for around 4% of the world's electricity generation in 2021, compared with a much smaller share before. While these energy sources are considered much less environmentally damaging than fossil fuels, concerns have been raised over the rare-earth and other metals used in the production of batteries and solar panels, which can require destructive mining techniques to extract.
Computing and the Internet
The Information Age or Information Era, also commonly known as the Age of the Computer, is the idea that the current age is characterized by the ability of individuals to transfer information freely and to have instant access to knowledge that would previously have been difficult or impossible to find. The idea is heavily linked to the concept of a Digital Age or Digital Revolution, and carries with it a shift from the traditional industry of the Industrial Revolution toward an economy based on the manipulation of information. The period is generally said to have begun in the latter half of the 20th century, though the particular date varies; the term came into use around the late 1980s and early 1990s and has remained current with the availability of the Internet.
During the late 1990s, both Internet directories and search engines were popular; Yahoo! and AltaVista (both founded in 1995) were the respective industry leaders. By late 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became afterthoughts to search engines. Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by an emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first.
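The ranking functions of commercial search engines are proprietary and combine many signals, but the core idea of relevancy ranking can be illustrated with a classic term-weighting scheme such as TF-IDF, in which a term counts for more when it is frequent within a document but rare across the collection. The sketch below is illustrative only; the function name and sample documents are invented for the example and are not drawn from any actual engine:

```python
import math
from collections import Counter

def tf_idf_rank(query, documents):
    """Rank documents against a query using a basic TF-IDF score.

    Illustrative only: real engines combine hundreds of signals
    (link analysis, freshness, personalization, and more).
    """
    tokenized = [doc.lower().split() for doc in documents]
    n_docs = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in tokenized for term in set(doc))
    scored = []
    for i, doc in enumerate(tokenized):
        tf = Counter(doc)
        score = sum(
            (tf[term] / len(doc)) * math.log(n_docs / (1 + df[term]))
            for term in query.lower().split()
            if term in tf
        )
        scored.append((score, i))
    # Best results first, as a search engine presents them.
    order = sorted(scored, key=lambda pair: pair[0], reverse=True)
    return [documents[i] for _, i in order]

docs = [
    "the cold war dominated contemporary history",
    "search engines rank documents by relevance",
    "relevance ranking sorts the best results first",
]
print(tf_idf_rank("relevance ranking", docs))
```

Here the document mentioning both query terms scores highest, the kind of automatic sorting that the hand-curated directory model could not match at web scale.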
"Web 2.0" is characterized as facilitating communication, information sharing, interoperability, User-centered design and collaboration on the World Wide Web. It has led to the development and evolution of web-based communities, hosted services, and web applications. Examples include social-networking sites, video-sharing sites, wikis, blogs, mashups and folksonomies. Social networking emerged in the early 21st century as a popular social communication, largely replacing much of the function of email, message boards and instant messaging services. Twitter, Facebook, and YouTube are all major examples of social websites that gained widespread popularity. The information distribution continued into the early 21st century with mobile interaction and Internet access growing massively in the early 21st century. By the 2010s, a majority of people in the developed world had Internet access and a majority of people worldwide had a mobile phone. Marking the rise of mobile computing, worldwide sales of personal computers fall 14% during the first quarter of 2013. The Semantic Web (dubbed, "Web 3.0") begins the inclusion of semantic content in web pages, converting the current web dominated by unstructured and semi-structured documents into a "web of data".
With the rise of information technology, computer security, and information security more generally, have become concerns for computers and networks. These concerns include protecting information and services from unintended or unauthorized access, change, or destruction. This has also raised questions of Internet privacy and personal privacy globally.
Space exploration
The Space Race was one of the rivalries of the Cold War, with both the United States space program (NASA) and the Soviet space program launching satellites and probes and planning missions. While the Soviets put the first human into space with Yuri Gagarin in 1961, the Americans soon caught up, and the US was the first to achieve a successful Moon landing with Apollo 11 in 1969, followed by five more landings over the next few years.
In the 1970s and 80s, the US took a new approach with the Space Shuttle program, hoping to reduce the cost of launches by creating a reusable Space Shuttle. The first fully functional Space Shuttle orbiter was Columbia (designated OV-102), launched into low Earth orbit in April 1981. In 1996, Shuttle mission STS-75 conducted research in space with an electrodynamic tether generator and other tether configurations. The program suffered two incidents that each destroyed a shuttle: the Challenger disaster in 1986 and the Columbia disaster in 2003. The program ultimately flew 135 missions; the retirement of NASA's Space Shuttle fleet took place from March to July 2011.
The end of the Cold War saw a new era of international cooperation with the International Space Station (ISS). Commercial spaceflight also became possible as governments loosened what had previously been their firm control over satellites, opening new possibilities, but also new risks such as light pollution from satellites. The Commercial Orbital Transportation Services (COTS) program began in 2006.
Various spaceports now exist, supporting human spaceflight and other launch systems (space logistics). Private spaceflight is flight beyond the Kármán line that is conducted and paid for by an entity other than a government agency, while the commercialization of space is the use of equipment sent into or through outer space to provide goods or services of commercial value, whether by a corporation or a state. Plans and predictions for space trade date back to the 1960s. Spacecraft propulsion, any method used to accelerate spacecraft and artificial satellites, has continued to develop alongside these efforts.
NASA announced in 2011 that its Mars Reconnaissance Orbiter had captured photographic evidence of possible liquid water on Mars during warm seasons. On 6 August 2012, the Mars Science Laboratory rover Curiosity, the most elaborate Martian exploration vehicle to date, landed on Mars. After the WMAP observations of the cosmic microwave background, results from the Planck Surveyor estimated the age of the universe at 13.8 billion years, about 100 million years older than previously thought. Another advance came in 2012, when physicists at CERN statistically demonstrated the existence of the Higgs boson.
Challenges and problems
Climate change
Climate change and global warming are defining features of the modern climate. The changes in climate over the past century have been attributed to various factors that have resulted in a global warming: an increase in the average temperature of the Earth's near-surface air and oceans since the mid-20th century, projected to continue. Some effects on both the natural environment and human life are, at least in part, already being attributed to global warming. A 2001 report by the Intergovernmental Panel on Climate Change suggested that glacier retreat, ice-shelf disruption such as that of the Larsen Ice Shelf, sea-level rise, changes in rainfall patterns, and increased intensity and frequency of extreme weather events are attributable in part to global warming. Other expected effects include water scarcity in some regions and increased precipitation in others, changes in mountain snowpack, and adverse health effects from warmer temperatures.
It is usually impossible to attribute specific weather events to human impact on the world; instead, such impact is expected to cause changes in the overall distribution and intensity of weather events, such as changes in the frequency and intensity of heavy precipitation. Broader effects are expected to include glacial retreat, Arctic shrinkage, and worldwide sea-level rise. Other effects may include changes in crop yields, the addition of new trade routes, species extinctions, and changes in the range of disease vectors. Until 2009, pack ice in the Arctic Northwest Passage prevented regular marine shipping for most of the year, but climate change has reduced the pack ice, and this Arctic shrinkage has made the waterways more navigable.
Health and pandemics
Several disease outbreaks, epidemics, and pandemics have occurred during contemporary history. Some of these include the 1957–1958 influenza pandemic, the Hong Kong flu of 1968–1969, the 1977–1979 Russian flu, the HIV/AIDS epidemic (1981–present), the SARS outbreak of 2002–2004, the swine flu pandemic of 2009–2010, and the COVID-19 pandemic (2019–2022).
COVID-19 pandemic
In 2020, an outbreak of the COVID-19 disease, first documented in Wuhan, China, in late 2019, spread to other countries and became a global pandemic, causing major socio-economic disruption all over the world. Many countries ordered mandatory lockdowns on movement and closures of non-essential businesses. The threat of the disease caused the COVID-19 recession, although the distribution of vaccines has since eased the economic impact in many countries.
More generally, COVID-19 has been held up as an example of a global catastrophic risk unique to the modern era's ease of travel. New diseases can spread far faster and further in the contemporary era than any previous era of human history; pandemic prevention is one resulting field to ensure that if this happens with a sufficiently deadly virus, humanity can take measures to stop its spread.
See also
General:
Modern history, Timelines of modern history, Anthropocene
Generations:
Generation, List of generations, Baby Boom Generation, Generation X, Xennials, Generation Y, Generation Z, Generation Alpha
Music and arts:
Contemporary art, Contemporary dance, Contemporary literature, Contemporary music, Contemporary hit radio, Adult contemporary music, Contemporary Christian music, Contemporary R&B, Urban contemporary, Video games
Future:
Future history, Futurology, Timeline of the near future, Third millennium, 21st century
Further reading
Bell, P. M. H., and Mark Gilbert. The World Since 1945: An International History (2nd ed. 2017), 584pp.
Boyd, Andrew, and Joshua Comenetz. An Atlas of World Affairs (2007).
Briggs, Asa, and Peter Burke. A Social History of the Media: From Gutenberg to the Internet (2002).
Hunt, Michael H. The World Transformed: 1945 to the Present (2nd ed. 2015), 624pp.
Hunt, Michael H., ed. The World Transformed, 1945 to the Present: A Documentary Reader (2nd ed. 2001), excerpts from primary sources.
McWilliams, Wayne C., and Harry Piotrowski. The World Since 1945: A History of International Relations (8th ed. 2014), 620pp.
External links
Internet Modern History Sourcebook at Fordham University
Journal of Contemporary History. SAGE Publications.
Contemporary History Institute (CHI), ohiou.edu: analyzes the contemporary period in world affairs (the period from World War II to the present) from an interdisciplinary historical perspective.
Soviet Union Timeline on BBC
History of India
Anatomically modern humans first arrived on the Indian subcontinent between 73,000 and 55,000 years ago. The earliest known human remains in South Asia date to 30,000 years ago. Sedentariness began in South Asia around 7000 BCE; by 4500 BCE, settled life had spread, and gradually evolved into the Indus Valley Civilisation, which flourished between 2500 BCE and 1900 BCE in present-day Pakistan and north-western India. Early in the second millennium BCE, persistent drought caused the population of the Indus Valley to scatter from large urban centres to villages. Indo-Aryan tribes moved into the Punjab from Central Asia in several waves of migration. The Vedic Period of the Vedic people in northern India (1500–500 BCE) was marked by the composition of their extensive collections of hymns (Vedas). The social structure was loosely stratified via the varna system, incorporated into the highly evolved present-day Jāti system. The pastoral and nomadic Indo-Aryans spread from the Punjab into the Gangetic plain. Around 600 BCE, a new, interregional culture arose; then, small chieftaincies (janapadas) were consolidated into larger states (mahajanapadas). A second urbanization took place, accompanied by the rise of new ascetic movements and religious concepts, including Jainism and Buddhism. The latter was synthesized with the preexisting religious cultures of the subcontinent, giving rise to Hinduism.
Chandragupta Maurya overthrew the Nanda Empire and established the first great empire in ancient India, the Maurya Empire. India's Mauryan king Ashoka is widely recognised for his historical acceptance of Buddhism and his attempts to spread nonviolence and peace across his empire. The Maurya Empire collapsed in 185 BCE, on the assassination of the then-emperor Brihadratha by his general Pushyamitra Shunga. Shunga formed the Shunga Empire in the north and north-east of the subcontinent, while the Greco-Bactrian Kingdom claimed the north-west and founded the Indo-Greek Kingdom. Various parts of India were ruled by numerous dynasties, including the Gupta Empire, in the 4th to 6th centuries CE. This period, witnessing a Hindu religious and intellectual resurgence, is known as the Classical or Golden Age of India. Aspects of Indian civilisation, administration, culture, and religion spread to much of Asia, which led to the establishment of Indianised kingdoms in the region, forming Greater India. The most significant event between the 7th and 11th centuries was the Tripartite struggle centred on Kannauj. Southern India saw the rise of multiple imperial powers from the middle of the fifth century. The Chola dynasty conquered southern India in the 11th century. In the early medieval period, Indian mathematics, including Hindu numerals, influenced the development of mathematics and astronomy in the Arab world, leading to the creation of the Hindu-Arabic numeral system.
Islamic conquests made limited inroads into modern Afghanistan and Sindh as early as the 8th century, followed by the invasions of Mahmud of Ghazni.
The Delhi Sultanate, founded in 1206 by Central Asian Turks who became Indianised, ruled a major part of the northern Indian subcontinent in the early 14th century. It was governed by multiple Turkic, Afghan, and Indian dynasties, including the Turco-Mongol, Indianised Tughlaq dynasty, but declined in the late 14th century following the invasions of Timur, which saw the advent of the Malwa, Gujarat, and Bahmani Sultanates, the last of which split in 1518 into the five Deccan sultanates. The wealthy Bengal Sultanate also emerged as a major power, lasting over three centuries. During this period, multiple strong Hindu kingdoms, notably the Vijayanagara Empire and the Rajput states, emerged and played significant roles in shaping the cultural and political landscape of India.
The early modern period began in the 16th century, when the Mughal Empire conquered most of the Indian subcontinent, ushering in proto-industrialisation and becoming the world's biggest economy and manufacturing power. The Mughals suffered a gradual decline in the early 18th century, largely due to the rising power of the Marathas, who took control of extensive regions of the Indian subcontinent. The East India Company, acting as a sovereign force on behalf of the British government, gradually acquired control of huge areas of India between the middle of the 18th and the middle of the 19th centuries. Policies of company rule in India led to the Indian Rebellion of 1857. India was afterwards ruled directly by the British Crown, in the British Raj. After World War I, a nationwide struggle for independence was launched by the Indian National Congress, led by Mahatma Gandhi. Later, the All-India Muslim League would advocate for a separate Muslim-majority nation state. The British Indian Empire was partitioned in August 1947 into the Dominion of India and the Dominion of Pakistan, each gaining its independence.
Prehistoric era (until 3300 BCE)
Paleolithic
Hominin expansion from Africa is estimated to have reached the Indian subcontinent approximately two million years ago, and possibly as early as 2.2 million years ago. This dating is based on the known presence of Homo erectus in Indonesia by 1.8 million years ago and in East Asia by 1.36 million years ago, as well as the discovery of stone tools at Riwat in Pakistan. Although some older discoveries have been claimed, the suggested dates, based on the dating of fluvial sediments, have not been independently verified.
The oldest hominin fossil remains in the Indian subcontinent are those of Homo erectus or Homo heidelbergensis, from the Narmada Valley in central India, and are dated to approximately half a million years ago. Older fossil finds have been claimed, but are considered unreliable. Reviews of archaeological evidence have suggested that occupation of the Indian subcontinent by hominins was sporadic until approximately 700,000 years ago, and was geographically widespread by approximately 250,000 years ago.
According to a historical demographer of South Asia, Tim Dyson: Modern human beings—Homo sapiens—originated in Africa. Then, intermittently, sometime between 60,000 and 80,000 years ago, tiny groups of them began to enter the north-west of the Indian subcontinent. It seems likely that initially they came by way of the coast. It is virtually certain that there were Homo sapiens in the subcontinent 55,000 years ago, even though the earliest fossils that have been found of them date to only about 30,000 years before the present.
According to Michael D. Petraglia and Bridget Allchin: Y-Chromosome and Mt-DNA data support the colonisation of South Asia by modern humans originating in Africa. ... Coalescence dates for most non-European populations average to between 73–55 ka.
Historian of South Asia, Michael H. Fisher, states: Scholars estimate that the first successful expansion of the Homo sapiens range beyond Africa and across the Arabian Peninsula occurred from as early as 80,000 years ago to as late as 40,000 years ago, although there may have been prior unsuccessful emigrations. Some of their descendants extended the human range ever further in each generation, spreading into each habitable land they encountered. One human channel was along the warm and productive coastal lands of the Persian Gulf and northern Indian Ocean. Eventually, various bands entered India between 75,000 years ago and 35,000 years ago.
Archaeological evidence has been interpreted to suggest the presence of anatomically modern humans in the Indian subcontinent 78,000–74,000 years ago, although this interpretation is disputed. The occupation of South Asia by modern humans, initially in varying forms of isolation as hunter-gatherers, has made it a highly diverse region, second only to Africa in human genetic diversity.
According to Tim Dyson: Genetic research has contributed to knowledge of the prehistory of the subcontinent's people in other respects. In particular, the level of genetic diversity in the region is extremely high. Indeed, only Africa's population is genetically more diverse. Related to this, there is strong evidence of 'founder' events in the subcontinent. By this is meant circumstances where a subgroup—such as a tribe—derives from a tiny number of 'original' individuals. Further, compared to most world regions, the subcontinent's people are relatively distinct in having practised comparatively high levels of endogamy.
Neolithic
Settled life emerged on the subcontinent in the western margins of the Indus River alluvium approximately 9,000 years ago, evolving gradually into the Indus Valley Civilisation of the third millennium BCE. According to Tim Dyson: "By 7,000 years ago agriculture was firmly established in Baluchistan... [and] slowly spread eastwards into the Indus valley." Michael Fisher adds: The earliest discovered instance ... of well-established, settled agricultural society is at Mehrgarh in the hills between the Bolan Pass and the Indus plain (today in Pakistan) (see Map 3.1). From as early as 7000 BCE, communities there started investing increased labor in preparing the land and selecting, planting, tending, and harvesting particular grain-producing plants. They also domesticated animals, including sheep, goats, pigs, and oxen (both humped zebu [Bos indicus] and unhumped [Bos taurus]). Castrating oxen, for instance, turned them from mainly meat sources into domesticated draft-animals as well.
Bronze Age (c. 3300 – 1800 BCE)
Indus Valley Civilisation
The Bronze Age in the Indian subcontinent began around 3300 BCE. The Indus Valley region was one of three early cradles of civilisation in the Old World; the Indus Valley civilisation was the most expansive, and at its peak, may have had a population of over five million.
The civilisation was primarily centred in modern-day Pakistan, in the Indus river basin, and secondarily in the Ghaggar-Hakra River basin. The mature Indus civilisation flourished from about 2600 to 1900 BCE, marking the beginning of urban civilisation on the Indian subcontinent. It included cities such as Harappa, Ganweriwal, and Mohenjo-daro in modern-day Pakistan, and Dholavira, Kalibangan, Rakhigarhi, and Lothal in modern-day India.
Inhabitants of the ancient Indus River valley, the Harappans, developed new techniques in metallurgy and handicraft, and produced copper, bronze, lead, and tin. The civilisation is noted for its cities built of brick, and its roadside drainage systems, and is thought to have had some kind of municipal organisation. The civilisation also developed the Indus script, the earliest of the ancient Indian scripts, which remains undeciphered; this is why the Harappan language is not directly attested and its affiliation is uncertain.
After the collapse of the Indus Valley Civilisation, the inhabitants migrated from the river valleys of the Indus and the Ghaggar-Hakra towards the Himalayan foothills of the Ganga-Yamuna basin.
Ochre Coloured Pottery culture
During the 2nd millennium BCE, the Ochre Coloured Pottery culture occupied the Ganga-Yamuna Doab region. These were rural settlements practising agriculture and hunting; their people used copper tools such as axes, spears, arrows, and swords, and kept domesticated animals.
Iron Age (c. 1800 – 200 BCE)
Vedic period (c. 1500 – 600 BCE)
Starting around 1900 BCE, Indo-Aryan tribes moved into the Punjab from Central Asia in several waves of migration. The Vedic period is the era in which the Vedas, the liturgical hymns of the Indo-Aryan people, were composed. The Vedic culture was located in part of north-west India, while other parts of India had a distinct cultural identity. Many regions of the Indian subcontinent transitioned from the Chalcolithic to the Iron Age in this period.
The Vedic culture is described in the texts of Vedas, still sacred to Hindus, which were orally composed and transmitted in Vedic Sanskrit. The Vedas are some of the oldest extant texts in India. The Vedic period, lasting from about 1500 to 500 BCE, contributed to the foundations of several cultural aspects of the Indian subcontinent.
Vedic society
Historians have analysed the Vedas to posit a Vedic culture in the Punjab and the upper Gangetic Plain. The peepal tree and the cow were sanctified by the time of the Atharva Veda. Many of the concepts of Indian philosophy espoused later, like dharma, trace their roots to Vedic antecedents.
Early Vedic society is described in the Rigveda, the oldest Vedic text, believed to have been compiled during the 2nd millennium BCE, in the north-western region of the Indian subcontinent. At this time, Aryan society consisted of predominantly tribal and pastoral groups, distinct from the Harappan urbanisation which had been abandoned. The early Indo-Aryan presence probably corresponds, in part, to the Ochre Coloured Pottery culture in archaeological contexts.
At the end of the Rigvedic period, Aryan society expanded from the north-western region of the Indian subcontinent into the western Ganges plain. It became increasingly agricultural and was socially organised around the hierarchy of the four varnas, or social classes. This social structure was characterised both by syncretism with the native cultures of northern India and, eventually, by the exclusion of some indigenous peoples by labelling their occupations impure. During this period, many of the previous small tribal units and chiefdoms began to coalesce into Janapadas (monarchical, state-level polities).
Sanskrit epics
The Sanskrit epics Ramayana and Mahabharata were composed during this period. The Mahabharata remains the longest single poem in the world. Historians formerly postulated an "epic age" as the milieu of these two epic poems, but now recognise that the texts went through multiple stages of development over centuries. The existing texts of these epics are believed to belong to the post-Vedic age, between 400 BCE and 400 CE.
Janapadas
The Iron Age in the Indian subcontinent from about 1200 BCE to the 6th century BCE is defined by the rise of Janapadas, which are realms, republics and kingdoms—notably the Iron Age Kingdoms of Kuru, Panchala, Kosala and Videha.
The Kuru Kingdom (c. 1200–450 BCE) was the first state-level society of the Vedic period, corresponding to the beginning of the Iron Age in north-western India, around 1200–800 BCE, as well as with the composition of the Atharvaveda. The Kuru state organised the Vedic hymns into collections and developed the srauta ritual to uphold the social order. Two key figures of the Kuru state were King Parikshit and his successor Janamejaya, who transformed this realm into the dominant political, social, and cultural power of northern India. When the Kuru kingdom declined, the centre of Vedic culture shifted to their eastern neighbours, the Panchala kingdom. The archaeological Painted Grey Ware (PGW) culture, which flourished in the Haryana and western Uttar Pradesh regions of northern India from about 1100 to 600 BCE, is believed to correspond to the Kuru and Panchala kingdoms.
During the Late Vedic Period, the kingdom of Videha emerged as a new centre of Vedic culture, situated even farther to the east (in what is today Nepal and the Indian state of Bihar); it reached prominence under King Janaka, whose court provided patronage for Brahmin sages and philosophers such as Yajnavalkya, Aruni, and Gārgī Vāchaknavī. The later part of this period corresponds with a consolidation of increasingly large states and kingdoms, called Mahajanapadas, across Northern India.
Second urbanisation (c. 600 – 200 BCE)
The period between 800 and 200 BCE saw the formation of the Śramaṇa movement, from which Jainism and Buddhism originated. The first Upanishads were written during this period. After 500 BCE, the so-called "second urbanisation" started, with new urban settlements arising on the Ganges plain. The foundations for the "second urbanisation" were laid prior to 600 BCE, in the Painted Grey Ware culture of the Ghaggar-Hakra and Upper Ganges Plain; although most PGW sites were small farming villages, "several dozen" PGW sites eventually emerged as relatively large settlements that can be characterised as towns, the largest of which were fortified by ditches or moats and embankments made of piled earth with wooden palisades.
The Central Ganges Plain, where Magadha gained prominence, forming the base of the Maurya Empire, was a distinct cultural area, with new states arising after 500 BCE. It was influenced by the Vedic culture, but differed markedly from the Kuru-Panchala region. "It was the area of the earliest known cultivation of rice in South Asia and by 1800 BCE was the location of an advanced Neolithic population associated with the sites of Chirand and Chechar". In this region, the Śramaṇic movements flourished, and Jainism and Buddhism originated.
Buddhism and Jainism
The time between 800 BCE and 400 BCE witnessed the composition of the earliest Upanishads, which form the theoretical basis of classical Hinduism, and are also known as the Vedanta (conclusion of the Vedas).
The increasing urbanisation of India in the 7th and 6th centuries BCE led to the rise of new ascetic or "Śramaṇa" movements which challenged the orthodoxy of rituals. Mahavira (c. 599–527 BCE), proponent of Jainism, and Gautama Buddha (c. 563–483 BCE), founder of Buddhism, were the most prominent icons of this movement. The Śramaṇa movements gave rise to the concept of saṃsāra, the cycle of birth and death, and the concept of liberation from it. The Buddha found a Middle Way that ameliorated the extreme asceticism found in the Śramaṇa religions.
Around the same time, Mahavira (the 24th Tirthankara in Jainism) propagated a theology that was later to become Jainism. However, Jain orthodoxy holds that the teachings of the Tirthankaras predate all known time, and scholars believe that Parshvanatha (c. 872 – c. 772 BCE), accorded status as the 23rd Tirthankara, was a historical figure. The Vedas are believed to have documented a few Tirthankaras and an ascetic order similar to the Śramaṇa movement.
Mahajanapadas
The period from 600 BCE to 300 BCE witnessed the rise of the Mahajanapadas, sixteen powerful and vast kingdoms and oligarchic republics. These Mahajanapadas evolved and flourished in a belt stretching from Gandhara in the north-west to Bengal in the eastern part of the Indian subcontinent and included parts of the trans-Vindhyan region. Ancient Buddhist texts, like the Aṅguttara Nikāya, make frequent reference to these sixteen great kingdoms and republics—Anga, Assaka, Avanti, Chedi, Gandhara, Kashi, Kamboja, Kosala, Kuru, Magadha, Malla, Matsya (or Machcha), Panchala, Surasena, Vṛji, and Vatsa. This period saw the second major rise of urbanism in India after the Indus Valley Civilisation.
Early "republics" or , such as Shakyas, Koliyas, Mallakas, and Licchavis had republican governments. s, such as the Mallakas, centered in the city of Kusinagara, and the Vajjika League, centred in the city of Vaishali, existed as early as the 6th century BCE and persisted in some areas until the 4th century CE. The most famous clan amongst the ruling confederate clans of the Vajji Mahajanapada were the Licchavis.
This period corresponds in an archaeological context to the Northern Black Polished Ware culture. Especially focused in the Central Ganges plain but also spreading across vast areas of the northern and central Indian subcontinent, this culture is characterised by the emergence of large cities with massive fortifications, significant population growth, increased social stratification, wide-ranging trade networks, construction of public architecture and water channels, specialised craft industries, a system of weights, punch-marked coins, and the introduction of writing in the form of Brahmi and Kharosthi scripts. The language of the gentry at that time was Sanskrit, while the languages of the general population of northern India are referred to as Prakrits.
Many of the sixteen kingdoms had merged into four major ones by the time of Gautama Buddha. These four were Vatsa, Avanti, Kosala, and Magadha.
Early Magadha dynasties
Magadha formed one of the sixteen Mahajanapadas (Sanskrit: "Great Realms") or kingdoms in ancient India. The core of the kingdom was the area of Bihar south of the Ganges; its first capital was Rajagriha (modern Rajgir), then Pataliputra (modern Patna). Magadha expanded to include most of Bihar and Bengal with the conquest of Licchavi and Anga respectively, followed by much of eastern Uttar Pradesh and Orissa. The ancient kingdom of Magadha is heavily mentioned in Jain and Buddhist texts. It is also mentioned in the Ramayana, Mahabharata and Puranas. The earliest reference to the Magadha people occurs in the Atharva-Veda, where they are found listed along with the Angas, Gandharis, and Mujavats. Magadha played an important role in the development of Jainism and Buddhism. Republican communities (such as the community of Rajakumara) were merged into the Magadha kingdom. Villages had their own assemblies under their local chiefs, called Gramakas, and their administrations were divided into executive, judicial, and military functions.
Early sources, from the Buddhist Pāli Canon, the Jain Agamas and the Hindu Puranas, mention Magadha being ruled by the Pradyota dynasty and the Haryanka dynasty (c. 544–413 BCE) for some 200 years, c. 600–413 BCE. King Bimbisara of the Haryanka dynasty led an active and expansive policy, conquering Anga in what is now eastern Bihar and West Bengal. King Bimbisara was overthrown and killed by his son, Prince Ajatashatru, who continued the expansionist policy of Magadha. During this period, Gautama Buddha, the founder of Buddhism, lived much of his life in the Magadha kingdom. He attained enlightenment in Bodh Gaya, gave his first sermon in Sarnath, and the first Buddhist council was held in Rajgriha. The Haryanka dynasty was overthrown by the Shishunaga dynasty (c. 413–345 BCE). The last Shishunaga ruler, Kalasoka, was assassinated by Mahapadma Nanda in 345 BCE, the first of the so-called Nine Nandas (Mahapadma Nanda and his eight sons).
Nanda Empire and Alexander's campaign
The Nanda Empire (c. 345–322 BCE), at its peak, extended from Bengal in the east to the Punjab in the west, and as far south as the Vindhya Range. The Nanda dynasty built on the foundations laid by their Haryanka and Shishunaga predecessors. The Nanda Empire built a vast army, consisting of 200,000 infantry, 20,000 cavalry, 2,000 war chariots and 3,000 war elephants (at the lowest estimates).
Maurya Empire
The Maurya Empire (322–185 BCE) unified most of the Indian subcontinent into one state, and was the largest empire ever to exist on the Indian subcontinent. At its greatest extent, the Mauryan Empire stretched to the north up to the natural boundaries of the Himalayas and to the east into what is now Assam. To the west, it reached beyond modern Pakistan, to the Hindu Kush mountains in what is now Afghanistan. The empire was established by Chandragupta Maurya assisted by Chanakya (Kautilya) in Magadha (in modern Bihar) when he overthrew the Nanda Empire.
Chandragupta rapidly expanded his power westwards across central and western India, and by 317 BCE the empire had fully occupied north-western India. The Mauryan Empire defeated Seleucus I, founder of the Seleucid Empire, during the Seleucid–Mauryan war, thus gaining additional territory west of the Indus River. Chandragupta's son Bindusara succeeded to the throne around 297 BCE. By the time he died in 272 BCE, a large part of the Indian subcontinent was under Mauryan suzerainty. However, the region of Kalinga (around modern-day Odisha) remained outside Mauryan control, perhaps interfering with trade with the south.
Bindusara was succeeded by Ashoka, whose reign lasted until his death in about 232 BCE. His campaign against the Kalingans in about 260 BCE, though successful, led to immense loss of life and misery. This led Ashoka to shun violence, and subsequently to embrace Buddhism. The empire began to decline after his death and the last Mauryan ruler, Brihadratha, was assassinated by Pushyamitra Shunga to establish the Shunga Empire.
Under Chandragupta Maurya and his successors, internal and external trade, agriculture, and economic activities all thrived and expanded across India thanks to the creation of a single efficient system of finance, administration, and security. The Mauryans built the Grand Trunk Road, one of Asia's oldest and longest major roads connecting the Indian subcontinent with Central Asia. After the Kalinga War, the Empire experienced nearly half a century of peace and security under Ashoka. Mauryan India also enjoyed an era of social harmony, religious transformation, and expansion of scientific knowledge. Chandragupta Maurya's embrace of Jainism increased social and religious renewal and reform across his society, while Ashoka's embrace of Buddhism has been said to have been the foundation of the reign of social and political peace and non-violence across India. Ashoka sponsored Buddhist missions into Sri Lanka, Southeast Asia, West Asia, North Africa, and Mediterranean Europe.
The Arthashastra written by Chanakya and the Edicts of Ashoka are the primary written records of the Mauryan times. Archaeologically, this period falls in the era of Northern Black Polished Ware. The Mauryan Empire was based on a modern and efficient economy and society in which the sale of merchandise was closely regulated by the government. Although there was no banking in Mauryan society, usury was customary. A significant number of written records on slavery have been found, suggesting its prevalence. During this period, a high-quality steel called Wootz steel was developed in south India and was later exported to China and Arabia.
Sangam period
The Sangam period, during which Tamil literature flourished, lasted from the 3rd century BCE to the 4th century CE. Three Tamil dynasties, collectively known as the Three Crowned Kings of Tamilakam (the Chera, Chola, and Pandya dynasties), ruled parts of southern India.
The Sangam literature deals with the history, politics, wars, and culture of the Tamil people of this period. Unlike Sanskrit writers who were mostly Brahmins, Sangam writers came from diverse classes and social backgrounds and were mostly non-Brahmins.
Between around 300 BCE and 200 CE, Pathupattu, an anthology of ten mid-length books considered part of the Sangam literature, was composed, along with the eight anthologies of poetic works known as the Ettuthogai and the eighteen minor poetic works of the Patiṉeṇkīḻkaṇakku; Tolkāppiyam, the earliest extant work of Tamil grammar, was also developed in this period. Two of the Five Great Epics of Tamil literature were likewise composed during the Sangam period: Ilango Adigal composed the Silappatikaram, a non-religious work revolving around Kannagi, while the Manimekalai, composed by Chithalai Chathanar, is a sequel to the Silappatikaram and tells the story of the daughter of Kovalan and Madhavi, who became a Buddhist bhikkhuni.
Classical period (c. 200 BCE – c. 650 CE)
The time between the Maurya Empire in the 3rd century BCE and the end of the Gupta Empire in the 6th century CE is referred to as the "Classical" period of India. The Gupta Empire (4th–6th century) is regarded as the Golden Age of India, although a host of kingdoms ruled over India in these centuries. Also, the Sangam literature flourished from the 3rd century BCE to the 3rd century CE in southern India. During this period, India's economy is estimated to have been the largest in the world, holding between one-quarter and one-third of the world's wealth, from 1 CE to 1000 CE.
Early classical period (c. 200 BCE – c. 320 CE)
Shunga Empire
The Shungas originated from Magadha, and controlled large areas of the central and eastern Indian subcontinent from around 187 to 78 BCE. The dynasty was established by Pushyamitra Shunga, who overthrew the last Maurya emperor. Its capital was Pataliputra, but later emperors, such as Bhagabhadra, also held court at Vidisha, modern Besnagar.
Pushyamitra Shunga ruled for 36 years and was succeeded by his son Agnimitra. There were ten Shunga rulers. However, after the death of Agnimitra, the empire rapidly disintegrated; inscriptions and coins indicate that much of northern and central India consisted of small kingdoms and city-states that were independent of any Shunga hegemony. The empire is noted for its numerous wars with both foreign and indigenous powers. They fought with the Mahameghavahana dynasty of Kalinga, Satavahana dynasty of Deccan, the Indo-Greeks, and possibly the Panchalas and Mitras of Mathura.
Art, education, philosophy, and other forms of learning flowered during this period, including architectural monuments such as the stupa at Bharhut and the renowned Great Stupa at Sanchi. The Shunga rulers helped to establish the tradition of royal sponsorship of learning and art. The script used by the empire was a variant of Brahmi and was used to write the Sanskrit language. The Shunga Empire played an important role in patronising Indian culture at a time when some of the most important developments in Hindu thought were taking place.
Satavahana Empire
The Śātavāhanas were based at Amaravati in Andhra Pradesh as well as Junnar (Pune) and Prathisthan (Paithan) in Maharashtra. The territory of the empire covered large parts of India from the 1st century BCE onward. The Sātavāhanas started out as feudatories of the Mauryan dynasty, but declared independence as it declined.
The Sātavāhanas are known for their patronage of Hinduism and Buddhism, which resulted in Buddhist monuments from Ellora (a UNESCO World Heritage Site) to Amaravati. They were one of the first Indian states to issue coins struck with images of their rulers. They formed a cultural bridge and played a vital role in trade, as well as in the transfer of ideas and culture to and from the Indo-Gangetic Plain to the southern tip of India.
They had to compete with the Shunga Empire and then the Kanva dynasty of Magadha to establish their rule. Later, they played a crucial role in protecting a large part of India against foreign invaders such as the Sakas, Yavanas and Pahlavas. In particular, their struggles with the Western Kshatrapas went on for a long time. The notable Satavahana rulers Gautamiputra Satakarni and Sri Yajna Sātakarni were able to defeat the foreign invaders, such as the Western Kshatrapas, and to stop their expansion. In the 3rd century CE, the empire was split into smaller states.
Trade and travels to India
The spice trade in Kerala attracted traders from all over the Old World to India. India's south-west coastal port of Muziris had established itself as a major spice trade centre from as early as 3000 BCE, according to Sumerian records. Jewish traders arrived in Kochi, Kerala, as early as 562 BCE. The Greco-Roman world followed, trading along the incense route and the Roman-India routes. During the 2nd century BCE, Greek and Indian ships met to trade at Arabian ports such as Aden. During the first millennium, the sea routes to India were controlled by the Indians and the Ethiopians, who became the maritime trading powers of the Red Sea.
Indian merchants involved in spice trade took Indian cuisine to Southeast Asia, where spice mixtures and curries became popular with the native inhabitants. Buddhism entered China through the Silk Road in the 1st or 2nd century CE. Hindu and Buddhist religious establishments of South and Southeast Asia came to be centres of production and commerce as they accumulated capital donated by patrons. They engaged in estate management, craftsmanship, and trade. Buddhism in particular travelled alongside the maritime trade, promoting literacy, art, and the use of coinage.
Kushan Empire
The Kushan Empire expanded out of what is now Afghanistan into the north-west of the Indian subcontinent under the leadership of their first emperor, Kujula Kadphises, about the middle of the 1st century CE. The Kushans were possibly a Tocharian-speaking tribe, one of five branches of the Yuezhi confederation. By the time of his grandson, Kanishka the Great, the empire had spread to encompass much of Afghanistan, and then the northern parts of the Indian subcontinent.
Emperor Kanishka was a great patron of Buddhism; however, as the Kushans expanded southward, the deities of their later coinage came to reflect its new Hindu majority.
The empire linked the Indian Ocean maritime trade with the commerce of the Silk Road through the Indus valley, encouraging long-distance trade, particularly between China and Rome. The Kushans brought new trends to the budding and blossoming Gandhara art and Mathura art, which reached its peak during Kushan rule. The period of peace under Kushan rule is known as Pax Kushana. By the 3rd century, their empire in India was disintegrating and their last known great emperor was Vasudeva I.
Classical period (c. 320 – 650 CE)
Gupta Empire
The Gupta period was noted for cultural creativity, especially in literature, architecture, sculpture, and painting. The Gupta period produced scholars such as Kalidasa, Aryabhata, Varahamihira, Vishnu Sharma, and Vatsyayana. The Gupta period marked a watershed of Indian culture: the Guptas performed Vedic sacrifices to legitimise their rule, but they also patronised Buddhism, an alternative to Brahmanical orthodoxy. The military exploits of the first three rulers – Chandragupta I, Samudragupta, and Chandragupta II – brought much of India under their leadership. Science and political administration reached new heights during the Gupta era. Strong trade ties also made the region an important cultural centre and established it as a base that would influence nearby kingdoms and regions. The period of peace under Gupta rule is known as Pax Gupta.
The later Guptas successfully resisted the north-western kingdoms until the arrival of the Alchon Huns, who established themselves in Afghanistan by the first half of the 5th century CE, with their capital at Bamiyan. However, much of southern India, including the Deccan, was largely unaffected by these events.
Vakataka Empire
The Vākāṭaka Empire originated from the Deccan in the mid-third century CE. Their state is believed to have extended from the southern edges of Malwa and Gujarat in the north to the Tungabhadra River in the south, and from the Arabian Sea in the west to the edges of Chhattisgarh in the east. They were the most important successors of the Satavahanas in the Deccan, contemporaneous with the Guptas in northern India, and were succeeded by the Vishnukundina dynasty.
The Vakatakas are noted for having been patrons of the arts, architecture and literature. The rock-cut Buddhist viharas and chaityas of the Ajanta Caves (a UNESCO World Heritage Site) were built under the patronage of the Vakataka emperor Harishena.
Kamarupa Kingdom
Samudragupta's 4th-century Allahabad pillar inscription mentions Kamarupa (Western Assam) and Davaka (Central Assam) as frontier kingdoms of the Gupta Empire. Davaka was later absorbed by Kamarupa, which grew into a large kingdom spanning from the Karatoya river to near present-day Sadiya and covering the entire Brahmaputra valley, North Bengal, parts of Bangladesh and, at times, Purnea and parts of West Bengal.
The kingdom was ruled by three dynasties, the Varmans (c. 350–650 CE), the Mlechchha dynasty (c. 655–900 CE) and the Kamarupa-Palas (c. 900–1100 CE), from their capitals in present-day Guwahati (Pragjyotishpura), Tezpur (Haruppeswara) and North Gauhati (Durjaya) respectively. All three dynasties claimed descent from Narakasura. In the reign of the Varman king Bhaskar Varman (c. 600–650 CE), the Chinese traveller Xuanzang visited the region and recorded his travels. Later, after weakening and disintegration (after the Kamarupa-Palas), the Kamarupa tradition was extended somewhat, until c. 1255 CE, by the Lunar I (c. 1120–1185 CE) and Lunar II (c. 1155–1255 CE) dynasties. The Kamarupa kingdom came to an end in the middle of the 13th century when the Khen dynasty ruler Sandhya of Kamarupanagara (North Guwahati) moved his capital to Kamatapur (North Bengal) after the invasion of the Muslim Turks, and established the Kamata kingdom.
Pallava Empire
The Pallavas, during the 4th to 9th centuries, were, alongside the Guptas of the north, great patrons of Sanskrit in the south of the Indian subcontinent. The Pallava reign saw the first Sanskrit inscriptions in a script called Grantha. The early Pallavas had various connections with Southeast Asian countries. The Pallavas used Dravidian architecture to build some very important Hindu temples and academies in Mamallapuram, Kanchipuram and other places; their rule saw the rise of great poets. The practice of dedicating temples to different deities came into vogue, followed by fine artistic temple architecture and a sculptural style guided by the Vastu Shastra.
The Pallavas reached the height of their power during the reigns of Mahendravarman I (571–630 CE) and Narasimhavarman I (630–668 CE), and dominated the Telugu region and the northern parts of the Tamil region until the end of the 9th century.
Kadamba Empire
The Kadamba dynasty, founded by Mayurasharma in 345 CE, originated in Karnataka and at times showed the potential of developing to imperial proportions. King Mayurasharma defeated the armies of the Pallavas of Kanchi, possibly with the help of some native tribes. Kadamba fame reached its peak during the rule of Kakusthavarma, a notable ruler with whom the kings of the Gupta dynasty of northern India cultivated marital alliances. The Kadambas were contemporaries of the Western Ganga dynasty, and together they formed the earliest native kingdoms to rule the land with absolute autonomy. The dynasty later continued to rule as a feudatory of larger Kannada empires, the Chalukya and the Rashtrakuta empires, for over five hundred years, during which time they branched into minor dynasties (the Kadambas of Goa, Kadambas of Halasi and Kadambas of Hangal).
Empire of Harsha
Harsha ruled northern India from 606 to 647 CE. He was the son of Prabhakarvardhana and the younger brother of Rajyavardhana, who were members of the Vardhana dynasty and ruled Thanesar, in present-day Haryana.
After the downfall of the prior Gupta Empire in the middle of the 6th century, North India reverted to smaller republics and monarchical states. The power vacuum resulted in the rise of the Vardhanas of Thanesar, who began uniting the republics and monarchies from the Punjab to central India. After the death of Harsha's father and brother, representatives of the empire crowned Harsha emperor in April 606 CE, giving him the title of Maharaja. At its peak, his empire covered much of north and north-west India, extended east until Kamarupa and south until the Narmada River; he eventually made Kannauj (in present-day Uttar Pradesh) his capital, and ruled until 647 CE.
The peace and prosperity that prevailed made his court a centre of cosmopolitanism, attracting scholars, artists and religious visitors. During this time, Harsha converted to Buddhism from Surya worship. The Chinese traveller Xuanzang visited the court of Harsha and wrote a very favourable account of him, praising his justice and generosity. His biography Harshacharita ("Deeds of Harsha") written by Sanskrit poet Banabhatta, describes his association with Thanesar and the palace with a two-storied Dhavalagriha (White Mansion).
Early medieval period (mid-6th century – 1200)
Early medieval India began after the end of the Gupta Empire in the 6th century CE. This period also covers the "Late Classical Age" of Hinduism, which began after the collapse of the Empire of Harsha in the 7th century and ended in the 13th century with the rise of the Delhi Sultanate in northern India; the beginning of Imperial Kannauj, leading to the Tripartite Struggle; and the end of the Later Cholas with the death of Rajendra Chola III in 1279 in southern India. However, some aspects of the Classical period continued until the fall of the Vijayanagara Empire in the south around the 17th century.
From the fifth century to the thirteenth, Śrauta sacrifices declined, and support for Shaivism, Vaishnavism and Shaktism expanded in royal courts, while the support for Buddhism declined. Lack of appeal among the rural masses, who instead embraced Brahmanical Hinduism formed in the Hindu synthesis, and dwindling financial support from trading communities and royal elites, were major factors in the decline of Buddhism.
In the 7th century, Kumārila Bhaṭṭa formulated his school of Mimamsa philosophy and defended the practice of Vedic rituals.
From the 8th to the 10th century, three dynasties contested for control of northern India: the Gurjara-Pratiharas of Malwa, the Palas of Bengal, and the Rashtrakutas of the Deccan. The Sena dynasty would later assume control of the Pala Empire. The Gurjara-Pratiharas fragmented into various states, notably the Kingdom of Malwa, the Kingdom of Bundelkhand, the Kingdom of Dahala, the Tomaras of Haryana, and the Kingdom of Sambhar; these states were some of the earliest Rajput kingdoms. The Rashtrakutas, meanwhile, were annexed by the Western Chalukyas. During this period, the Chaulukya dynasty emerged; the Chaulukyas constructed the Dilwara Temples, the Modhera Sun Temple and the Rani ki vav in the Māru-Gurjara style of architecture, and their capital Anhilwara (modern Patan, Gujarat) was one of the largest cities in the Indian subcontinent, with a population estimated at 100,000 in the year 1000.
The Chola Empire emerged as a major power during the reigns of Raja Raja Chola I and Rajendra Chola I, who successfully invaded parts of Southeast Asia and Sri Lanka in the 11th century. Lalitaditya Muktapida (r. 724–760 CE) was an emperor of the Kashmiri Karkoṭa dynasty, which exercised influence in north-western India from 625 until 1003 and was followed by the Lohara dynasty. Kalhana in his Rajatarangini credits King Lalitaditya with leading an aggressive military campaign in northern India and Central Asia.
The Hindu Shahi dynasty ruled portions of eastern Afghanistan, northern Pakistan, and Kashmir from the mid-7th century to the early 11th century. Meanwhile, in Odisha, the Eastern Ganga Empire rose to power, noted for the advancement of Hindu architecture (most notably the Jagannath Temple and the Konark Sun Temple) and for its patronage of art and literature.
Later Gupta dynasty
The Later Gupta dynasty ruled the Magadha region in eastern India between the 6th and 7th centuries AD. The Later Guptas succeeded the imperial Guptas as the rulers of Magadha, but there is no evidence connecting the two dynasties; these appear to be two distinct families. The Later Guptas are so-called because the names of their rulers ended with the suffix "-gupta", which they might have adopted to portray themselves as the legitimate successors of the imperial Guptas.
Chalukya Empire
The Chalukya Empire ruled large parts of southern and central India between the 6th and the 12th centuries, as three related yet individual dynasties. The earliest dynasty, known as the "Badami Chalukyas", ruled from Vatapi (modern Badami) from the middle of the 6th century. The Badami Chalukyas began to assert their independence at the decline of the Kadamba kingdom of Banavasi and rapidly rose to prominence during the reign of Pulakeshin II. The rule of the Chalukyas marks an important milestone in the history of South India and a golden age in the history of Karnataka. The political atmosphere in South India shifted from smaller kingdoms to large empires with the ascendancy of Badami Chalukyas. A Southern India-based kingdom took control and consolidated the entire region between the Kaveri and the Narmada Rivers. The rise of this empire saw the birth of efficient administration, overseas trade and commerce and the development of new style of architecture called "Chalukyan architecture". The Chalukya dynasty ruled parts of southern and central India from Badami in Karnataka between 550 and 750, and then again from Kalyani between 970 and 1190.
Rashtrakuta Empire
Founded by Dantidurga around 753, the Rashtrakuta Empire ruled from its capital at Manyakheta for almost two centuries. At its peak, the Rashtrakutas ruled from the Ganges-Yamuna Doab in the north to Cape Comorin in the south, a fruitful time of architectural and literary achievements.
The early rulers of this dynasty were Hindu, but the later rulers were strongly influenced by Jainism. Govinda III and Amoghavarsha were the most famous of the long line of able administrators produced by the dynasty. Amoghavarsha was also an author and wrote Kavirajamarga, the earliest known Kannada work on poetics. Architecture reached a milestone in the Dravidian style, the finest example of which is seen in the Kailasanath Temple at Ellora. Other important contributions are the Kashivishvanatha temple and the Jain Narayana temple at Pattadakal in Karnataka.
The Arab traveller Suleiman described the Rashtrakuta Empire as one of the four great empires of the world. The Rashtrakuta period marked the beginning of the golden age of southern Indian mathematics. The great south Indian mathematician Mahāvīra had a huge impact on medieval south Indian mathematicians. The Rashtrakuta rulers also patronised men of letters in a variety of languages.
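An example of the sort of result this tradition produced: Mahāvīra's Gaṇita-sāra-saṅgraha (c. 850 CE) is commonly credited with an explicit general rule for the number of combinations of n objects taken r at a time. Stated in modern notation (a restatement for illustration, not the original Sanskrit verse form):

\[ \binom{n}{r} \;=\; \frac{n(n-1)(n-2)\cdots(n-r+1)}{1 \cdot 2 \cdot 3 \cdots r}. \]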
Gurjara-Pratihara Empire
The Gurjara-Pratiharas were instrumental in containing Arab armies moving east of the Indus River. Nagabhata I defeated the Arab army under Junaid and Tamin during the Umayyad campaigns in India. Under Nagabhata II, the Gurjara-Pratiharas became the most powerful dynasty in northern India. He was succeeded by his son Ramabhadra, who ruled briefly before being succeeded by his son, Mihira Bhoja. Under Bhoja and his successor Mahendrapala I, the Pratihara Empire reached its peak of prosperity and power. By the time of Mahendrapala, its territory stretched from the border of Sindh in the west to Bihar in the east and from the Himalayas in the north to around the Narmada River in the south. The expansion triggered a tripartite power struggle with the Rashtrakuta and Pala empires for control of the Indian subcontinent.
By the end of the 10th century, several feudatories of the empire took advantage of the temporary weakness of the Gurjara-Pratiharas to declare their independence, notably the Kingdom of Malwa, the Kingdom of Bundelkhand, the Tomaras of Haryana, and the Kingdom of Sambhar and the Kingdom of Dahala.
Gahadavala dynasty
The Gahadavala dynasty ruled parts of the present-day Indian states of Uttar Pradesh and Bihar during the 11th and 12th centuries. Its capital was located at Varanasi.
Karnat dynasty
In 1097 CE, the Karnat dynasty of Mithila emerged in the Bihar/Nepal border area, maintaining capitals in Darbhanga and Simraongadh. The dynasty was established by Nanyadeva, a military commander of Karnataka origin. Under this dynasty, the Maithili language began to develop, with the first piece of Maithili literature, the Varna Ratnakara, produced in the 14th century by Jyotirishwar Thakur. The Karnats also carried out raids into Nepal. They fell in 1324, following the invasion of Ghiyasuddin Tughlaq.
Pala Empire
The Pala Empire was founded by Gopala I. It was ruled by a Buddhist dynasty from Bengal. The Palas reunified Bengal after the fall of Shashanka's Gauda Kingdom.
The Palas were followers of the Mahayana and Tantric schools of Buddhism; they also patronised Shaivism and Vaishnavism. The empire reached its peak under Dharmapala and Devapala. Dharmapala is believed to have conquered Kanauj and extended his sway up to the farthest limits of India in the north-west.
The Pala Empire can be considered as the golden era of Bengal. Dharmapala founded the Vikramashila and revived Nalanda, considered one of the first great universities in recorded history. Nalanda reached its height under the patronage of the Pala Empire. The Palas also built many viharas. They maintained close cultural and commercial ties with countries of Southeast Asia and Tibet. Sea trade added greatly to the prosperity of the Pala Empire.
Cholas
The medieval Cholas rose to prominence during the middle of the 9th century and established the greatest empire South India had seen. They successfully united South India under their rule and, through their naval strength, extended their influence over Southeast Asian polities such as Srivijaya. Under Rajaraja Chola I and his successors Rajendra Chola I, Rajadhiraja Chola, Virarajendra Chola and Kulothunga Chola I, the dynasty became a military, economic and cultural power in South Asia and Southeast Asia. Rajendra Chola I's navies occupied the sea coasts from Burma to Vietnam, the Andaman and Nicobar Islands, the Lakshadweep (Laccadive) islands, Sumatra, and the Malay Peninsula. The power of the new empire was proclaimed to the eastern world by the expedition to the Ganges which Rajendra Chola I undertook, by the occupation of cities of the maritime empire of Srivijaya in Southeast Asia, and by repeated embassies to China.
They dominated the political affairs of Sri Lanka for over two centuries through repeated invasions and occupation. They also had continuing trade contacts with the Arabs and the Chinese empire. Rajaraja Chola I and his son Rajendra Chola I gave political unity to the whole of southern India and established the Chola Empire as a respected sea power. Under the Cholas, South India reached new heights of excellence in art, religion and literature. In all of these spheres, the Chola period marked the culmination of movements that had begun in an earlier age under the Pallavas. Monumental architecture in the form of majestic temples and sculpture in stone and bronze reached a finesse never before achieved in India.
Western Chalukya Empire
The Western Chalukya Empire ruled most of the western Deccan, South India, between the 10th and 12th centuries. Vast areas between the Narmada River in the north and Kaveri River in the south came under Chalukya control. During this period the other major ruling families of the Deccan, the Hoysalas, the Seuna Yadavas of Devagiri, the Kakatiya dynasty and the Southern Kalachuris, were subordinates of the Western Chalukyas and gained their independence only when the power of the Chalukya waned during the latter half of the 12th century.
The Western Chalukyas developed an architectural style known today as a transitional style, an architectural link between the style of the early Chalukya dynasty and that of the later Hoysala empire. Most of its monuments are in the districts bordering the Tungabhadra River in central Karnataka. Well-known examples are the Kasivisvesvara Temple at Lakkundi, the Mallikarjuna Temple at Kuruvatti, the Kallesvara Temple at Bagali, the Siddhesvara Temple at Haveri, and the Mahadeva Temple at Itagi. This was an important period in the development of fine arts in southern India, especially in literature, as the Western Chalukya kings encouraged writers in the native language of Kannada and in Sanskrit, such as the philosopher and statesman Basava and the great mathematician Bhāskara II.
Late medieval period (c. 1200–1526)
The late medieval period is marked by repeated invasions by Muslim Central Asian nomadic clans, the rule of the Delhi Sultanate, and the growth of other dynasties and empires built upon the military technology of the Sultanate. The Sultanate evolved from a Turkic monopoly into an Indianised Indo-Muslim polity.
Delhi Sultanate
The Delhi Sultanate was a series of successive Islamic states based in Delhi, ruled by several dynasties of Turkic, Indic, Turko-Indian and Pashtun origins. It ruled large parts of the Indian subcontinent from the 13th to the early 16th century. In the 12th and 13th centuries, Central Asian Turks invaded parts of northern India and established the Delhi Sultanate in the former Hindu holdings. The subsequent Mamluk dynasty of Delhi managed to conquer large areas of northern India, while the Khalji dynasty conquered most of central India while forcing the principal Hindu kingdoms of South India to become vassal states.
The Sultanate ushered in a period of Indian cultural renaissance. The resulting "Indo-Muslim" fusion of cultures left lasting syncretic monuments in architecture, music, literature, religion, and clothing. It is surmised that the language of Urdu was born during the Delhi Sultanate period. The Delhi Sultanate is the only Indo-Islamic empire to enthrone one of the few female rulers in India, Razia Sultana (1236–1240).
While initially disruptive due to the passing of power from native Indian elites to Turkic, Indic and Pashtun Muslim elites, the Delhi Sultanate was responsible for integrating the Indian subcontinent into a growing world system, drawing India into a wider international network, which had a significant impact on Indian culture and society. However, the Delhi Sultanate also caused large-scale destruction and desecration of temples in the Indian subcontinent.
The Mongol invasions of India were successfully repelled by the Delhi Sultanate during the rule of Alauddin Khalji. A major factor in their success was their Turkic Mamluk slave army, who were highly skilled in the same style of nomadic cavalry warfare as the Mongols. It is possible that the Mongol Empire may have expanded into India were it not for the Delhi Sultanate's role in repelling them. By repeatedly repulsing the Mongol raiders, the sultanate saved India from the devastation visited on West and Central Asia. Soldiers from that region and learned men and administrators fleeing Mongol invasions of Iran migrated into the subcontinent, thereby creating a syncretic Indo-Islamic culture in the north.
A Turco-Mongol conqueror in Central Asia, Timur (Tamerlane), attacked the reigning Sultan Nasir-u Din Mehmud of the Tughlaq dynasty in the north Indian city of Delhi. The Sultan's army was defeated on 17 December 1398. Timur entered Delhi, and the city was sacked, destroyed, and left in ruins after Timur's army had killed and plundered for three days and nights. He ordered the whole city to be sacked except for the sayyids, scholars, and the "other Muslims" (artists); 100,000 war prisoners were put to death in one day. The Sultanate suffered significantly from the sacking of Delhi; though revived briefly under the Lodi dynasty, it was but a shadow of its former self.
Vijayanagara Empire
The Vijayanagara Empire was established in 1336 by Harihara I and his brother Bukka Raya I of Sangama Dynasty, which originated as a political heir of the Hoysala Empire, Kakatiya Empire, and the Pandyan Empire. The empire rose to prominence as a culmination of attempts by the south Indian powers to ward off Islamic invasions by the end of the 13th century. It lasted until 1646, although its power declined after a major military defeat in 1565 by the combined armies of the Deccan sultanates. The empire is named after its capital city of Vijayanagara, whose ruins surround present day Hampi, now a World Heritage Site in Karnataka, India.
In the first two decades after the founding of the empire, Harihara I gained control over most of the area south of the Tungabhadra river and earned the title of Purvapaschima Samudradhishavara ("master of the eastern and western seas"). By 1374 Bukka Raya I, successor to Harihara I, had defeated the chiefdom of Arcot, the Reddys of Kondavidu, and the Sultan of Madurai and had gained control over Goa in the west and the Tungabhadra-Krishna doab in the north.
Harihara II, the second son of Bukka Raya I, further consolidated the kingdom beyond the Krishna River and brought the whole of South India under the Vijayanagara umbrella. The next ruler, Deva Raya I, emerged successful against the Gajapatis of Odisha and undertook important works of fortification and irrigation. Italian traveller Niccolo de Conti wrote of him as the most powerful ruler of India. Deva Raya II succeeded to the throne in 1424 and was possibly the most capable of the Sangama Dynasty rulers. He quelled rebelling feudal lords as well as the Zamorin of Calicut and Quilon in the south. He invaded the island of Sri Lanka and became overlord of the kings of Burma at Pegu and Tanasserim.
The Vijayanagara Emperors were tolerant of all religions and sects, as writings by foreign visitors show. The kings used titles such as Gobrahamana Pratipalanacharya (literally, "protector of cows and Brahmins") and Hindurayasuratrana (lit. "upholder of Hindu faith") that testified to their intention of protecting Hinduism, and yet were at the same time staunchly Islamicate in their court ceremonials and dress. The empire's founders, Harihara I and Bukka Raya I, were devout Shaivas (worshippers of Shiva), but made grants to the Vaishnava order of Sringeri with Vidyaranya as their patron saint, and designated Varaha (an avatar of Vishnu) as their emblem. Nobles from Central Asia's Timurid kingdoms also came to Vijayanagara. The later Saluva and Tuluva kings were Vaishnava by faith, but worshipped at the feet of Lord Virupaksha (Shiva) at Hampi as well as Lord Venkateshwara (Vishnu) at Tirupati. A Sanskrit work, Jambavati Kalyanam by King Krishnadevaraya, called Lord Virupaksha Karnata Rajya Raksha Mani ("protective jewel of Karnata Empire"). The kings patronised the saints of the dvaita order (philosophy of dualism) of Madhvacharya at Udupi.
The empire's legacy includes many monuments spread over South India, the best known of which is the group at Hampi. The previous temple-building traditions in South India came together in the Vijayanagara style of architecture. The mingling of all faiths and vernaculars inspired architectural innovation in Hindu temple construction. South Indian mathematics flourished under the protection of the Vijayanagara Empire in Kerala. The south Indian mathematician Madhava of Sangamagrama founded the famous Kerala School of Astronomy and Mathematics in the 14th century, which produced great south Indian mathematicians such as Parameshvara, Nilakantha Somayaji and Jyeṣṭhadeva; one of the school's best-attested results is shown below. Efficient administration and vigorous overseas trade brought new technologies such as water management systems for irrigation. The empire's patronage enabled fine arts and literature to reach new heights in Kannada, Telugu, Tamil, and Sanskrit, while Carnatic music evolved into its current form.
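Madhava is commonly credited with the power series for the arctangent and its special case for π, now often called the Madhava–Leibniz series, derived some two and a half centuries before Leibniz. In modern notation (a restatement for illustration, not the school's original verse form):

\[ \arctan x \;=\; x - \frac{x^{3}}{3} + \frac{x^{5}}{5} - \frac{x^{7}}{7} + \cdots \quad (|x| \le 1), \qquad \frac{\pi}{4} \;=\; 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \]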
Vijayanagara went into decline after the defeat in the Battle of Talikota (1565). After the death of Aliya Rama Raya in the battle, Tirumala Deva Raya started the Aravidu dynasty, founded a new capital at Penukonda to replace the destroyed Hampi, and attempted to reconstitute the remains of the Vijayanagara Empire. Tirumala abdicated in 1572, dividing the remains of his kingdom among his three sons, and pursued a religious life until his death in 1578. The Aravidu dynasty successors ruled the region, but the empire collapsed in 1614, and its final remains ended in 1646, after continued wars with the Bijapur sultanate and others. During this period, more kingdoms in South India became independent and separate from Vijayanagara. These include the Mysore Kingdom, Keladi Nayaka, the Nayaks of Madurai, the Nayaks of Tanjore, the Nayakas of Chitradurga and the Nayak Kingdom of Gingee – all of which declared independence and went on to have a significant impact on the history of South India in the coming centuries.
Other kingdoms
For two and a half centuries from the mid-13th century, politics in northern India was dominated by the Delhi Sultanate, and in southern India by the Vijayanagara Empire. However, there were other regional powers present as well. After the fall of the Pala Empire, the Chero dynasty ruled much of eastern Uttar Pradesh, Bihar and Jharkhand from the 12th to the 18th centuries. The Reddy dynasty successfully defeated the Delhi Sultanate and extended its rule from Cuttack in the north to Kanchi in the south, eventually being absorbed into the expanding Vijayanagara Empire.
In the north, the Rajput kingdoms remained the dominant force in Western and Central India. The Mewar dynasty under Maharana Hammir, with the Bargujars as his main allies, defeated and captured Muhammad Tughlaq. Tughlaq had to pay a huge ransom and relinquish all of Mewar's lands. After this event, the Delhi Sultanate did not attack Chittor for a few hundred years. The Rajputs re-established their independence, and Rajput states were established as far east as Bengal and north into the Punjab. The Tomaras established themselves at Gwalior, and Man Singh Tomar reconstructed the Gwalior Fort. During this period, Mewar emerged as the leading Rajput state, and Rana Kumbha expanded his kingdom at the expense of the Sultanates of Malwa and Gujarat. The next great Rajput ruler, Rana Sanga of Mewar, became the principal player in Northern India. His objectives grew in scope: he planned to conquer Delhi. However, his defeat in the Battle of Khanwa consolidated the new Mughal dynasty in India. The Mewar dynasty under Maharana Udai Singh II faced further defeat by the Mughal emperor Akbar, with their capital Chittor being captured. Because of this, Udai Singh II founded Udaipur, which became the new capital of the Mewar kingdom. His son, Maharana Pratap of Mewar, firmly resisted the Mughals; Akbar sent many missions against him, but he survived to ultimately gain control of all of Mewar, excluding the Chittor Fort.
In the south, the Bahmani Sultanate in the Deccan, born from a rebellion in 1347 against the Tughlaq dynasty, was the chief rival of Vijayanagara and frequently created difficulties for it. Starting in 1490, the Bahmani Sultanate's governors revolted, forming five independent states known as the Deccan sultanates: Ahmadnagar declared independence, followed by Bijapur and Berar in the same year; Golkonda became independent in 1518 and Bidar in 1528. Although generally rivals, they allied against the Vijayanagara Empire in 1565, permanently weakening it in the Battle of Talikota.
In the East, the Gajapati Kingdom remained a strong regional power, associated with a high point in the growth of regional culture and architecture. Under Kapilendradeva, the Gajapatis became an empire stretching from the lower Ganga in the north to the Kaveri in the south. In Northeast India, the Ahom Kingdom was a major power for six centuries; led by Lachit Borphukan, the Ahoms decisively defeated the Mughal army at the Battle of Saraighat during the Ahom-Mughal conflicts. Further east was the Kingdom of Manipur, which ruled from its seat of power at Kangla Fort and developed a sophisticated Hindu Gaudiya Vaishnavite culture.
The Sultanate of Bengal was the dominant power of the Ganges–Brahmaputra Delta, with a network of mint towns spread across the region. It was a Sunni Muslim monarchy with Indo-Turkic, Arab, Abyssinian and Bengali Muslim elites. The sultanate was known for its religious pluralism, with non-Muslim communities co-existing peacefully. It had a circle of vassal states, including Odisha in the southwest, Arakan in the southeast, and Tripura in the east. In the early 16th century, the Bengal Sultanate reached the peak of its territorial growth, with control over Kamrup and Kamata in the northeast and Jaunpur and Bihar in the west. Reputed to be a thriving trading nation and one of Asia's strongest states, it was described by contemporary European and Chinese visitors as a relatively prosperous kingdom and the "richest country to trade with". The Bengal Sultanate left a strong architectural legacy; buildings from the period show foreign influences merged into a distinct Bengali style. It was also the largest and most prestigious authority among the independent medieval Muslim-ruled states in the history of Bengal. Its decline began with an interregnum by the Suri Empire, followed by Mughal conquest and disintegration into petty kingdoms.
Bhakti movement and Sikhism
The Bhakti movement refers to the theistic devotional trend that emerged in medieval Hinduism and was later revolutionised in Sikhism. It originated in seventh-century south India (now parts of Tamil Nadu and Kerala) and spread northwards. It swept over east and north India from the 15th century onwards, reaching its zenith between the 15th and 17th centuries.
The Bhakti movement regionally developed around different gods and goddesses, such as Vaishnavism (Vishnu), Shaivism (Shiva), Shaktism (Shakti goddesses), and Smartism. The movement was inspired by many poet-saints, who championed a wide range of philosophical positions ranging from theistic dualism of Dvaita to absolute monism of Advaita Vedanta.
Sikhism is a monotheistic and panentheistic religion based on the spiritual teachings of Guru Nanak, the first Guru, and the ten successive Sikh gurus. After the death of the tenth Guru, Guru Gobind Singh, the Sikh scripture, Guru Granth Sahib, became the literal embodiment of the eternal, impersonal Guru, where the scripture's word serves as the spiritual guide for Sikhs.
Buddhism in India flourished during the late medieval period in the Himalayan kingdoms of the Namgyal Kingdom in Ladakh, the Sikkim Kingdom in Sikkim, and the Chutia Kingdom in Arunachal Pradesh.
Early modern period (1526–1858)
The early modern period of Indian history is dated from 1526 to 1858, corresponding to the rise and fall of the Mughal Empire, which inherited the legacy of the Timurid Renaissance. During this age India's economy expanded, relative peace was maintained, and the arts were patronised. The period witnessed the further development of Indo-Islamic architecture, while the growth of the Marathas and the Sikhs enabled them to rule significant regions of India in the waning days of the Mughal Empire. With the discovery of the Cape route in the late 15th century, the Portuguese became the first Europeans to arrive by sea and establish themselves, at Goa and Bombay.
Mughal Empire
In 1526, Babur swept across the Khyber Pass and established the Mughal Empire, which at its zenith covered much of South Asia. However, his son Humayun was defeated by the Afghan warrior Sher Shah Suri in 1540 and forced to retreat to Kabul. After Sher Shah's death, his son Islam Shah Suri and the Hindu general Hemu Vikramaditya established secular rule in North India from Delhi until 1556, when Akbar, grandson of Babur, defeated Hemu in the Second Battle of Panipat on 6 November 1556, shortly after Hemu had won the Battle of Delhi. Akbar tried to establish a good relationship with the Hindus: he declared "Amari", the non-killing of animals, on the holy days of Jainism, and rolled back the jizya tax for non-Muslims. The Mughal emperors married local royalty, allied themselves with local maharajas, and attempted to fuse their Turko-Persian culture with ancient Indian styles, creating a unique Indo-Persian culture and Indo-Saracenic architecture.
Akbar married a Rajput princess, Mariam-uz-Zamani, and they had a son, Jahangir. Jahangir followed his father's policy. The Mughal dynasty ruled most of the Indian subcontinent by 1600. The reign of Shah Jahan was the golden age of Mughal architecture. He erected several large monuments, the most famous of which is the Taj Mahal at Agra.
It was one of the largest empires to have existed in the Indian subcontinent and surpassed China to become the world's largest economic power, controlling 24.4% of the world economy and leading the world in manufacturing with 25% of global industrial output. This economic and demographic upsurge was stimulated by Mughal agrarian reforms that intensified agricultural production, and by a relatively high degree of urbanisation.
The Mughal Empire reached the zenith of its territorial expanse during the reign of Aurangzeb, under whose reign India surpassed Qing China as the world's largest economy. Aurangzeb was less tolerant than his predecessors, reintroducing the jizya tax and destroying several historical temples, while at the same time building more Hindu temples than he destroyed, employing significantly more Hindus in his imperial bureaucracy than his predecessors, and advancing administrators based on ability rather than religion. However, he is often blamed for the erosion of the tolerant syncretic tradition of his predecessors, as well as increasing religious controversy and centralisation. The English East India Company suffered a defeat in the Anglo-Mughal War.
The Mughals suffered several blows from invasions by Marathas, Rajputs, Jats and Afghans. In 1737, the Maratha general Bajirao of the Maratha Empire invaded and plundered Delhi. Under the general Amir Khan Umrao Al Udat, the Mughal emperor sent 8,000 troops to drive away the 5,000 Maratha cavalry, but Baji Rao easily routed the novice Mughal general. Later in 1737, in the final defeat of the Mughal Empire, the commander-in-chief of the Mughal army, Nizam-ul-Mulk, was routed at Bhopal by the Maratha army, essentially bringing an end to Mughal power. Meanwhile, Bharatpur State under the Jat ruler Suraj Mal overran the Mughal garrison at Agra and plundered the city. In 1739, Nader Shah, emperor of Iran, defeated the Mughal army at the Battle of Karnal; after this victory, Nader captured and sacked Delhi, carrying away treasures including the Peacock Throne. Mughal rule was further weakened by constant native Indian resistance: Banda Singh Bahadur led the Sikh Khalsa against Mughal religious oppression; the Hindu rajas of Bengal, Pratapaditya and Raja Sitaram Ray, revolted; and Maharaja Chhatrasal of the Bundela Rajputs fought the Mughals and established the Panna State. The Mughal dynasty was reduced to puppet rulers by 1757. The Vadda Ghallughara, an offensive by the Muslim provincial government based at Lahore to wipe out the Sikhs, killed 30,000 Sikhs; the campaign had begun under the Mughals with the Chhota Ghallughara and lasted several decades under their Muslim successor states.
Maratha Empire
The Maratha kingdom was founded and consolidated by Chhatrapati Shivaji. However, the credit for making the Marathas a formidable power nationally goes to Peshwa (chief minister) Bajirao I. Historian K.K. Datta wrote that Bajirao I "may very well be regarded as the second founder of the Maratha Empire".
In the early 18th century, under the Peshwas, the Marathas consolidated and ruled over much of South Asia. The Marathas are credited to a large extent with ending Mughal rule in India. In 1737, the Marathas defeated a Mughal army in the Mughal capital, at the Battle of Delhi. The Marathas continued their military campaigns against the Mughals, the Nizam, the Nawab of Bengal and the Durrani Empire to further extend their boundaries. At its peak, the domain of the Marathas encompassed most of the Indian subcontinent. The Marathas even attempted to capture Delhi and discussed putting Vishwasrao Peshwa on the throne there in place of the Mughal emperor.
The Maratha empire at its peak stretched from Tamil Nadu in the south to Peshawar (in modern-day Khyber Pakhtunkhwa, Pakistan) in the north, and Bengal in the east. The northwestern expansion of the Marathas was stopped after the Third Battle of Panipat (1761). However, Maratha authority in the north was re-established within a decade under Peshwa Madhavrao I.
Under Madhavrao I, the strongest of the Maratha chiefs were granted semi-autonomy, creating a confederacy of united Maratha states under the Gaekwads of Baroda, the Holkars of Indore and Malwa, the Scindias of Gwalior and Ujjain, the Bhonsales of Nagpur and the Puars of Dhar and Dewas. In 1775, the East India Company intervened in a Peshwa family succession struggle in Pune, leading to the First Anglo-Maratha War, which ended in a Maratha victory. The Marathas remained a major power in India until their defeat in the Second and Third Anglo-Maratha Wars (1805–1818).
Sikh Empire
The Sikh Empire was a political entity that governed the Northwestern regions of the Indian subcontinent, based around the Punjab, from 1799 to 1849. It was forged, on the foundations of the Khalsa, under the leadership of Maharaja Ranjit Singh (1780–1839).
Maharaja Ranjit Singh consolidated much of northern India into an empire using his Sikh Khalsa Army, trained in European military techniques and equipped with modern military technologies. Ranjit Singh proved himself to be a master strategist and selected well-qualified generals for his army. He successfully ended the Afghan-Sikh Wars. In stages, he added central Punjab, the provinces of Multan and Kashmir, and the Peshawar Valley to his empire.
At its peak in the 19th century, the empire extended from the Khyber Pass in the west to Kashmir in the north and Sindh in the south, running along the Sutlej River to Himachal in the east. After the death of Ranjit Singh, the empire weakened, leading to conflict with the British East India Company. The First and Second Anglo-Sikh Wars marked the downfall of the Sikh Empire, making it among the last areas of the Indian subcontinent to be conquered by the British.
Other kingdoms
The Kingdom of Mysore in southern India expanded to its greatest extent under Hyder Ali and his son Tipu Sultan in the latter half of the 18th century. Under their rule, Mysore fought a series of wars against the Marathas and the British, or their combined forces. The Maratha–Mysore War ended in April 1787 with the Treaty of Gajendragad, under which Tipu Sultan was obligated to pay tribute to the Marathas. Concurrently, the Anglo-Mysore Wars took place, in which the Mysoreans used their Mysorean rockets. The Fourth Anglo-Mysore War (1798–1799) saw the death of Tipu. Mysore's alliance with the French was seen as a threat to the British East India Company, and Mysore was attacked from all four sides. The Nizam of Hyderabad and the Marathas launched an invasion from the north. The British won a decisive victory at the Siege of Seringapatam (1799).
Hyderabad was founded by the Qutb Shahi dynasty of Golconda in 1591. Following a brief Mughal rule, Asif Jah, a Mughal official, seized control of Hyderabad and declared himself Nizam-al-Mulk of Hyderabad in 1724. The Nizams lost considerable territory and paid tribute to the Maratha Empire after being routed in multiple battles, such as the Battle of Palkhed. However, the Nizams maintained their sovereignty from 1724 until 1948 by paying tribute to the Marathas and, later, by being vassals of the British. Hyderabad State became a princely state in British India in 1798.
The Nawabs of Bengal had become the de facto rulers of Bengal following the decline of the Mughal Empire. However, their rule was interrupted by the Marathas, who carried out six expeditions in Bengal from 1741 to 1748, as a result of which Bengal became a tributary state of the Marathas. On 23 June 1757, Siraj ud-Daulah, the last independent Nawab of Bengal, was betrayed at the Battle of Plassey by Mir Jafar. He lost to the British, who took charge of Bengal in 1757, installed Mir Jafar on the Masnad (throne) and established themselves as a political power in Bengal. In 1765 a system of dual government was established, in which the Nawabs ruled on behalf of the British and were mere puppets. In 1772 the system was abolished and Bengal was brought under the direct control of the British. In 1793, when the Nizamat (governorship) of the Nawab was also taken away, they remained as mere pensioners of the British East India Company.
In the 18th century, the whole of Rajputana was virtually subdued by the Marathas. The Second Anglo-Maratha War distracted the Marathas from 1807 to 1809, but afterward Maratha domination of Rajputana resumed. In 1817, the British went to war with the Pindaris, raiders based in Maratha territory, in what quickly became the Third Anglo-Maratha War, and the British government offered its protection to the Rajput rulers from the Pindaris and the Marathas. By the end of 1818 similar treaties had been executed between the other Rajput states and Britain. The Maratha Sindhia ruler of Gwalior gave up the district of Ajmer-Merwara to the British, and Maratha influence in Rajasthan came to an end. Most of the Rajput princes remained loyal to Britain in the Revolt of 1857, and few political changes were made in Rajputana until Indian independence in 1947. The Rajputana Agency contained more than 20 princely states, the most notable being Udaipur State, Jaipur State, Bikaner State and Jodhpur State.
After the fall of the Maratha Empire, many Maratha dynasties and states became vassals in a subsidiary alliance with the British. With the decline of the Sikh Empire after the First Anglo-Sikh War, in 1846, under the terms of the Treaty of Amritsar, the British government sold Kashmir to Maharaja Gulab Singh, and the princely state of Jammu and Kashmir, the second-largest princely state in British India, was created under the Dogra dynasty. In eastern and north-eastern India, the Hindu and Buddhist states of the Cooch Behar Kingdom, the Twipra Kingdom and the Kingdom of Sikkim were annexed by the British and made vassal princely states.
After the fall of the Vijayanagara Empire, Polygar states emerged in Southern India; they managed to weather invasions and flourished until the Polygar Wars, when they were defeated by British East India Company forces. Around the 18th century, the Kingdom of Nepal was formed by Rajput rulers.
European exploration
In 1498, a Portuguese fleet under Vasco da Gama discovered a new sea route from Europe to India, which paved the way for direct Indo-European commerce. The Portuguese soon set up trading posts in Velha Goa, Daman, Diu and Bombay. The Portuguese instituted the Goa Inquisition, under which new Indian converts were punished for suspected heresy against Christianity and non-Christians were condemned. Goa remained the main Portuguese territory until it was annexed by India in 1961.
The next to arrive were the Dutch, with their main base in Ceylon. They established ports in Malabar. However, their expansion into India was halted after their defeat in the Battle of Colachel by the Kingdom of Travancore during the Travancore-Dutch War. The Dutch never recovered from the defeat and no longer posed a large colonial threat to India.
The internal conflicts among Indian kingdoms gave European traders opportunities to gradually establish political influence and appropriate lands. Following the Dutch, the British, who set up a trading post in the west coast port of Surat in 1619, and the French both established trading outposts in India. Although continental European powers controlled various coastal regions of southern and eastern India during the ensuing century, they eventually lost all their territories in India to the British, with the exception of the French outposts of Pondichéry and Chandernagore and the Portuguese colonies of Goa, Daman and Diu.
East India Company rule in India
The English East India Company was founded in 1600. It gained a foothold in India with the establishment of a factory in Masulipatnam on the eastern coast of India in 1611 and a grant of rights by the Mughal emperor Jahangir to establish a factory in Surat in 1612. In 1640, after receiving similar permission from the Vijayanagara ruler farther south, the company established a second factory in Madras on the southeastern coast. The islet of Bom Bahia in present-day Mumbai (Bombay), a Portuguese outpost not far from Surat, was presented to Charles II of England as dowry in his marriage to Catherine of Braganza; Charles in turn leased Bombay to the company in 1668. Two decades later, the company established a trade post in the River Ganges delta. During this time other companies established by the Portuguese, Dutch, French, and Danish were similarly expanding in the subcontinent.
The company's victory under Robert Clive in the 1757 Battle of Plassey, and another victory in the 1764 Battle of Buxar (in Bihar), consolidated the company's power and forced the emperor Shah Alam II to appoint it the diwan, or revenue collector, of Bengal, Bihar, and Orissa. The company thus became the de facto ruler of large areas of the lower Gangetic plain by 1773. It also proceeded by degrees to expand its dominions around Bombay and Madras. The Anglo-Mysore Wars (1766–99) and the Anglo-Maratha Wars (1772–1818) left it in control of large areas of India south of the Sutlej River. With the defeat of the Marathas, no native power any longer represented a threat to the company.
The expansion of the company's power chiefly took two forms. The first of these was the outright annexation of Indian states and subsequent direct governance of the underlying regions that collectively came to comprise British India. The annexed regions included the North-Western Provinces (comprising Rohilkhand, Gorakhpur, and the Doab; 1801), Delhi (1803), Assam (Ahom Kingdom, 1828) and Sindh (1843). Punjab, the North-West Frontier Province, and Kashmir were annexed after the Anglo-Sikh Wars in 1849–56, during the tenure of the Marquess of Dalhousie as Governor General. However, Kashmir was immediately sold under the Treaty of Amritsar (1846) to the Dogra dynasty of Jammu and thereby became a princely state. Berar was annexed in 1854, and the state of Oudh two years later.
The second form of asserting power involved treaties in which Indian rulers acknowledged the company's hegemony in return for limited internal autonomy. Since the company operated under financial constraints, it had to set up political underpinnings for its rule. The most important such support came from the subsidiary alliances with Indian princes. In the early 19th century, the territories of these princes accounted for two-thirds of India. When an Indian ruler who was able to secure his territory wanted to enter such an alliance, the company welcomed it as an economical method of indirect rule that did not involve the economic costs of direct administration or the political costs of gaining the support of alien subjects.
In return, the company undertook the "defense of these subordinate allies and treated them with traditional respect and marks of honor." Subsidiary alliances created the Princely States of the Hindu maharajas and the Muslim nawabs. Prominent among the princely states were Cochin (1791), Jaipur (1794), Travancore (1795), Hyderabad (1798), Mysore (1799), Cis-Sutlej Hill States (1815), Central India Agency (1819), Cutch and Gujarat Gaikwad territories (1819), Rajputana (1818), and Bahawalpur (1833).
Indian indenture system
The Indian indenture system was a system of indenture, a form of debt bondage, by which 3.5 million Indians were transported to colonies of European powers to provide labour for (mainly sugar) plantations. It began after the end of slavery in 1833 and continued until 1920. This resulted in the development of a large Indian diaspora spreading from the Caribbean to the Pacific Ocean, and in the growth of large Indo-Caribbean and Indo-African populations.
Late modern and contemporary period (1857–1947)
Rebellion of 1857 and its consequences
The Indian rebellion of 1857 was a large-scale rebellion by soldiers employed by the British East India Company in northern and central India against the company's rule. The spark that led to the mutiny was the issue of new gunpowder cartridges for the Enfield rifle, which offended local religious prohibitions: the cartridges were rumoured to be greased with cow and pig fat. The key mutineer was Mangal Pandey. In addition, underlying grievances over British taxation, the ethnic gulf between the British officers and their Indian troops, and land annexations played a significant role in the rebellion. Within weeks of Pandey's mutiny, dozens of units of the Indian army joined peasant armies in widespread rebellion. The rebel soldiers were later joined by the Indian nobility, many of whom had lost titles and domains under the Doctrine of Lapse and felt that the company had interfered with a traditional system of inheritance. Rebel leaders such as Nana Sahib and the Rani of Jhansi belonged to this group.
After the outbreak of the mutiny in Meerut, the rebels quickly reached Delhi. They also captured large tracts of the North-Western Provinces and Awadh (Oudh). Most notably, in Awadh the rebellion took on the attributes of a patriotic revolt against the British presence. The British East India Company mobilised rapidly with the assistance of friendly princely states, but it took the British the better part of 1858 to suppress the rebellion. Because the rebels were poorly equipped and had no outside support or funding, they were brutally subdued.
In the aftermath, all power was transferred from the British East India Company to the British Crown, which began to administer most of India as provinces. The Crown controlled the company's lands directly and had considerable indirect influence over the rest of India, which consisted of the Princely states ruled by local royal families. There were officially 565 princely states in 1947, but only 21 had actual state governments, and only three were large (Mysore, Hyderabad, and Kashmir). They were absorbed into the independent nation in 1947–48.
British Raj (1858–1947)
After 1857, the colonial government strengthened and expanded its infrastructure via the court system, legal procedures, and statutes. The Indian Penal Code came into being. In education, Thomas Babington Macaulay had made schooling a priority for the Raj in 1835 and succeeded in implementing the use of English for instruction. By 1890 some 60,000 Indians had matriculated. The Indian economy grew at about 1% per year from 1880 to 1920, as did the population. From the 1910s, however, Indian private industry began to grow significantly. India built a modern railway system in the late 19th century which was the fourth largest in the world. Historians have been divided on issues of economic history, with the Nationalist school arguing that India became poorer due to British rule.
In 1905, Lord Curzon split the large province of Bengal into a largely Hindu western half and "Eastern Bengal and Assam", a largely Muslim eastern half. The British goal was said to be efficient administration, but the people of Bengal were outraged at the apparent "divide and rule" strategy. It also marked the beginning of the organised anti-colonial movement. When the Liberal Party in Britain came to power in 1906, Curzon was removed, and Bengal was reunified in 1911. The new Viceroy, Gilbert Minto, and the new Secretary of State for India, John Morley, consulted with Congress leaders on political reforms. The Morley-Minto reforms of 1909 provided for Indian membership of the provincial executive councils as well as the Viceroy's executive council. The Imperial Legislative Council was enlarged from 25 to 60 members, and separate communal representation for Muslims was established in a dramatic step towards representative and responsible government. Several socio-religious organisations came into being at that time. Muslims set up the All India Muslim League in 1906 to protect the interests of aristocratic Muslims. The Hindu Mahasabha and the Rashtriya Swayamsevak Sangh (RSS) sought to represent Hindu interests, though the latter always claimed to be a "cultural" organisation. Sikhs founded the Shiromani Akali Dal in 1920. However, the largest and oldest political party, the Indian National Congress, founded in 1885, attempted to keep a distance from the socio-religious movements and identity politics.
Indian Renaissance
The Bengali Renaissance refers to a social reform movement, dominated by Bengali Hindus, in the Bengal region of the Indian subcontinent during the nineteenth and early twentieth centuries, a period of British rule. Historian Nitish Sengupta describes the renaissance as having started with reformer and humanitarian Raja Ram Mohan Roy (1775–1833), and ended with Asia's first Nobel laureate Rabindranath Tagore (1861–1941). This flowering of religious and social reformers, scholars, and writers is described by historian David Kopf as "one of the most creative periods in Indian history."
During this period, Bengal witnessed an intellectual awakening that was in some ways similar to the European Renaissance. The movement questioned existing orthodoxies, particularly with respect to women, marriage, the dowry system, the caste system, and religion. One of the earliest social movements that emerged during this time was the Young Bengal movement, which espoused rationalism and atheism as the common denominators of civil conduct among upper-caste educated Hindus. It played an important role in reawakening Indian minds and intellect across the Indian subcontinent.
Famines
During British East India Company and British Crown rule, India experienced some of the deadliest famines ever recorded. These famines, usually resulting from crop failures and often exacerbated by the policies of the colonial government, included the Great Famine of 1876–1878, in which 6.1 million to 10.3 million people died; the Great Bengal famine of 1770, in which between 1 and 10 million people died; the Indian famine of 1899–1900, in which 1.25 to 10 million people died; and the Bengal famine of 1943, in which between 2.1 and 3.8 million people died. The Third plague pandemic, beginning in the mid-19th century, killed 10 million people in India. Despite persistent disease and famine, the population of the Indian subcontinent, which stood at up to 200 million in 1750, had reached 389 million by 1941.
World War I
During World War I, over 800,000 Indians volunteered for the army, and more than 400,000 volunteered for non-combat roles, compared with pre-war annual recruitment of about 15,000 men. The army saw early action on the Western Front at the First Battle of Ypres. After a year of front-line duty, sickness and casualties had reduced the Indian Corps to the point where it had to be withdrawn. Nearly 700,000 Indians fought the Turks in the Mesopotamian campaign. Indian formations were also sent to East Africa, Egypt, and Gallipoli.
Indian Army and Imperial Service Troops fought during the Sinai and Palestine Campaign's defence of the Suez Canal in 1915, at Romani in 1916 and in the advance to Jerusalem in 1917. Indian units occupied the Jordan Valley, and after the German spring offensive they became the major force in the Egyptian Expeditionary Force during the Battle of Megiddo and in the Desert Mounted Corps' advance to Damascus and on to Aleppo. Other divisions remained in India, guarding the North-West Frontier and fulfilling internal security obligations.
One million Indian troops served abroad during the war. In total, 74,187 died, and another 67,000 were wounded. The roughly 90,000 soldiers who died fighting in World War I and the Afghan Wars are commemorated by the India Gate.
World War II
British India officially declared war on Nazi Germany in September 1939. The British Raj, as part of the Allied Nations, sent over two and a half million volunteer soldiers to fight under British command against the Axis powers. Additionally, several Princely States provided large donations to support the Allied campaign. India also provided the base for American operations in support of China in the China Burma India Theatre.
Indians fought throughout the world, including in the European theatre against Germany, in North Africa against Germany and Italy, against the Italians in East Africa, in the Middle East against the Vichy French, in the South Asian region defending India against the Japanese and fighting the Japanese in Burma. Indians also aided in liberating British colonies such as Singapore and Hong Kong after the Japanese surrender in August 1945. Over 87,000 soldiers from the subcontinent died in World War II.
The Indian National Congress denounced Nazi Germany but would not fight it or anyone else until India was independent. Congress launched the Quit India Movement in August 1942, refusing to co-operate in any way with the government until independence was granted. The government immediately arrested over 60,000 national and local Congress leaders. The Muslim League rejected the Quit India movement and worked closely with the Raj authorities.
Subhas Chandra Bose (also called Netaji) broke with Congress and tried to form a military alliance with Germany or Japan to gain independence. The Germans assisted Bose in the formation of the Indian Legion; however, it was Japan that helped him revamp the Indian National Army (INA), after the First Indian National Army under Mohan Singh was dissolved. The INA fought under Japanese direction, mostly in Burma. Bose also headed the Provisional Government of Free India (or Azad Hind), a government-in-exile based in Singapore.
By 1942, neighbouring Burma had been invaded by Japan, which by then had already captured the Indian territory of the Andaman and Nicobar Islands. Japan gave nominal control of the islands to the Provisional Government of Free India on 21 October 1943, and in the following March the Indian National Army, with the help of Japan, crossed into India and advanced as far as Kohima in Nagaland. This was the farthest point the advance reached on Indian territory; the INA retreated from the Battle of Kohima in June and from that of Imphal on 3 July 1944.
The region of Bengal in British India suffered a devastating famine during 1940–1943. An estimated 2.1–3 million died from the famine, frequently characterised as "man-made", with most sources asserting that wartime colonial policies exacerbated the crisis.
Indian independence movement (1885–1947)
The numbers of British in India were small, yet they were able to rule 52% of the Indian subcontinent directly and exercise considerable leverage over the princely states that accounted for 48% of the area.
One of the most important events of the 19th century was the rise of Indian nationalism, leading Indians to seek first "self-rule" and later "complete independence". However, historians are divided over the causes of its rise. Probable reasons include a "clash of interests of the Indian people with British interests", "racial discriminations", and "the revelation of India's past".
The first step toward Indian self-rule was the appointment of councillors to advise the British viceroy in 1861 and the first Indian was appointed in 1909. Provincial Councils with Indian members were also set up. The councillors' participation was subsequently widened into legislative councils. The British built a large British Indian Army, with the senior officers all British and many of the troops from small minority groups such as Gurkhas from Nepal and Sikhs. The civil service was increasingly filled with natives at the lower levels, with the British holding the more senior positions.
Bal Gangadhar Tilak, an Indian nationalist leader, declared Swaraj (home rule) as the destiny of the nation. His popular slogan "Swaraj is my birthright, and I shall have it" became a source of inspiration. Tilak was backed by rising public leaders like Bipin Chandra Pal and Lala Lajpat Rai, who held the same point of view; notably, they advocated the Swadeshi movement, involving the boycott of imported items and the use of Indian-made goods. The triumvirate were popularly known as Lal Bal Pal. In 1907, the Congress split into two factions: the radicals, led by Tilak, advocated civil agitation and direct revolution to overthrow the British Empire and the abandonment of all things British, while the moderates, led by leaders such as Dadabhai Naoroji and Gopal Krishna Gokhale, wanted reform within the framework of British rule.
The partition of Bengal in 1905 further increased the revolutionary movement for Indian independence, and the resulting disenfranchisement led some to take violent action.
The British themselves adopted a "carrot and stick" approach in response to renewed nationalist demands. Proposed reforms were later enshrined in the Government of India Act 1919, which introduced the principle of a dual mode of administration, or diarchy, in which elected Indian legislators and appointed British officials shared power. In 1919, Colonel Reginald Dyer ordered his troops to fire on peaceful protestors, including unarmed women and children, resulting in the Jallianwala Bagh massacre, which in turn led to the Non-cooperation Movement of 1920–1922. The massacre was a decisive episode towards the end of British rule in India.
From 1920, leaders such as Mahatma Gandhi began highly popular mass movements to campaign against the British Raj using largely peaceful methods. The Gandhi-led independence movement opposed British rule with non-violent methods such as non-co-operation, civil disobedience and economic resistance. However, revolutionary activities against British rule took place throughout the Indian subcontinent, and some adopted a militant approach, such as the Hindustan Republican Association, which sought to overthrow British rule by armed struggle.
The All India Azad Muslim Conference gathered in Delhi in April 1940 to voice its support for an independent and united India. Its members included several Islamic organisations in India, as well as 1,400 nationalist Muslim delegates. The pro-separatist All-India Muslim League worked to try to silence those nationalist Muslims who stood against the partition of India, often using "intimidation and coercion". The murder of the All India Azad Muslim Conference leader Allah Bakhsh Soomro also made it easier for the pro-separatist All-India Muslim League to demand the creation of a Pakistan.
After World War II (c. 1946–1947)
In January 1946, several mutinies broke out in the armed services, starting with RAF servicemen frustrated with their slow repatriation. The mutinies came to a head with the mutiny of the Royal Indian Navy in Bombay in February 1946, followed by others in Calcutta, Madras, and Karachi. The mutinies were rapidly suppressed. In early 1946, new elections were called, and Congress candidates won in eight of the eleven provinces.
Late in 1946, the Labour government decided to end British rule of India, and in early 1947 it announced its intention of transferring power no later than June 1948 and participating in the formation of an interim government.
Along with the desire for independence, tensions between Hindus and Muslims had also been developing over the years. Muslim League leader Muhammad Ali Jinnah proclaimed 16 August 1946 as Direct Action Day, with the stated goal of highlighting, peacefully, the demand for a Muslim homeland in British India, which resulted in the outbreak of the cycle of violence that would be later called the "Great Calcutta Killing of August 1946". The communal violence spread to Bihar, Noakhali in Bengal, Garhmukteshwar in the United Provinces, and on to Rawalpindi in March 1947 in which Sikhs and Hindus were attacked or driven out by Muslims.
Independence and partition (1947–present)
In August 1947, the British Indian Empire was partitioned into the Union of India and Dominion of Pakistan. In particular, the partition of the Punjab and Bengal led to rioting between Hindus, Muslims, and Sikhs in these provinces and spread to other nearby regions, leaving some 500,000 dead. The police and army units were largely ineffective. The British officers were gone, and the units were beginning to tolerate if not actually indulge in violence against their religious enemies. Also, this period saw one of the largest mass migrations anywhere in modern history, with a total of 12 million Hindus, Sikhs and Muslims moving between the newly created nations of India and Pakistan (which gained independence on 15 and 14 August 1947 respectively). In 1971, Bangladesh, formerly East Pakistan and East Bengal, seceded from Pakistan.
See also
Adivasi
Early Indians
List of Indian periods
Economic history of India
Historiography of India
Foreign relations of India
Indian maritime history
Linguistic history of India
Military history of India
Outline of ancient India
Taxation in medieval India
The Cambridge History of India
Timeline of Indian history
Traditional games of South Asia
Further reading
General
Basham, A.L., ed. The Illustrated Cultural History of India (Oxford University Press, 2007)
Buckland, C.E. Dictionary of Indian Biography (1906), 495pp
Chakrabarti, D.K. India, an Archaeological History: Palaeolithic Beginnings to Early Historic Foundations (2009)
Dharma Kumar and Meghnad Desai, eds. The Cambridge Economic History of India: Volume 2, c. 1751–1970 (2nd ed. 2010), 1114pp of scholarly articles
Fisher, Michael. An Environmental History of India: From Earliest Times to the Twenty-First Century (Cambridge UP, 2018)
Guha, Ramachandra. India After Gandhi: The History of the World's Largest Democracy (2007), 890pp; since 1947
James, Lawrence. Raj: The Making and Unmaking of British India (2000)
Khan, Yasmin. The Raj at War: A People's History of India's Second World War (2015); also published as India at War: The Subcontinent and the Second World War
Khan, Yasmin. The Great Partition: The Making of India and Pakistan (2nd ed. Yale UP, 2017)
McLeod, John. The History of India (2002)
Majumdar, R.C. An Advanced History of India (London, 1960)
Majumdar, R.C., ed. The History and Culture of the Indian People (Bombay, 1977), in eleven volumes
Mansingh, Surjit. The A to Z of India (2010), a concise historical encyclopedia
Markovits, Claude, ed. A History of Modern India, 1480–1950 (2002), by a team of French scholars
Metcalf, Barbara D. and Thomas R. Metcalf. A Concise History of Modern India (2006)
Peers, Douglas M. India under Colonial Rule: 1700–1885 (2006), 192pp
Riddick, John F. The History of British India: A Chronology (2006)
Riddick, John F. Who Was Who in British India (1998), 5000 entries
Rothermund, Dietmar. An Economic History of India: From Pre-Colonial Times to 1991 (1993)
Sharma, R.S. India's Ancient Past (Oxford University Press, 2005)
Sarkar, Sumit. Modern India, 1885–1947 (2002)
Singhal, D.P. A History of the Indian People (1983)
Smith, Vincent. The Oxford History of India (3rd ed. 1958), old-fashioned
Spear, Percival. A History of India, Volume 2 (Penguin Books, 1990) [first published 1965]
Stein, Burton. A History of India (1998)
Thapar, Romila. Early India: From the Origins to AD 1300 (2004)
Thompson, Edward, and G.T. Garratt. Rise and Fulfilment of British Rule in India (1934), 690pp; scholarly survey, 1599–1933
Tomlinson, B.R. The Economy of Modern India, 1860–1970 (The New Cambridge History of India) (1996)
Tomlinson, B.R. The Political Economy of the Raj, 1914–1947 (1979)
Wolpert, Stanley. A New History of India (8th ed. 2008)
Historiography
Bose, Mihir. "India's Missing Historians: Mihir Bose Discusses the Paradox That India, a Land of History, Has a Surprisingly Weak Tradition of Historiography", History Today 57#9 (2007), pp. 34 ff.
Primary
The Imperial Gazetteer of India (1908–31), a highly detailed description of all of India in 1901.
Cultural analysis
As a discipline, cultural analysis is based on using qualitative research methods of the arts, humanities, and social sciences, in particular ethnography and anthropology, to collect data on cultural phenomena and to interpret cultural representations and practices, in an effort to gain new knowledge or understanding through the analysis of that data and of cultural processes. This is particularly useful for understanding and mapping trends, influences, effects, and affects within cultures.
There are four themes to sociological cultural analysis:
1. Adaptation and Change
This refers to how well a certain culture adapts to its surroundings through use and development. Examples include foods, tools, homes, surroundings, and art that show how the given culture adapted. This aspect also aims to show how the given culture makes its environment more accommodating.
2. How culture is used to survive
How the given culture helps its members survive the environment.
3. Holism, Specificity
The ability to put the observations into a single collection and to present them in a coherent manner.
4. Expressions
This focuses on studying the expressions and performance of everyday culture.
Cultural analysis in the humanities
This developed at the intersection of cultural studies, history, comparative literature, art history, fine art, philosophy, literary theory, theology, anthropology, and economics. It offers an interdisciplinary approach to the study and analysis of texts, images, films, cultural representations, and all related cultural practices.
Cultural analysis is also a method for rethinking our relation to history, because it makes visible the position of the researcher, writer or student. The social and cultural present from which we look at past cultural practices (history) shapes the interpretations that are made of the past, while cultural analysis also reveals how the past shapes the present, through the role of cultural memory for instance. Cultural analysis therefore understands culture as a constantly changing set of practices that are in dialogue with the past as it has been registered through texts, images, buildings, documents, stories, and myths.
In addition to having a relation to disciplines also interested in cultures as what people do and say, believe and think, such as ethnography and anthropology, cultural analysis as a practice in the humanities considers the texts and images, the codes and behaviours, the beliefs and imaginings that you might study in literature, philosophy, art history. But cultural analysis does not confine the meanings to the disciplinary methods. It allows and requires dialogue across many ways of understanding what people have done and what people are doing through acts, discourses, practices, statements. Cultural analysis crosses the boundaries between disciplines but also between formal and informal cultural activities.
The major purpose of cultural analysis is to develop analytical tools for reading and understanding a wide range of cultural practices and forms, past and present.
See also
Girl Heroes
Semiotics of culture
Tartu–Moscow Semiotic School
Daniel Seddiqui
External links
Amsterdam School for Cultural Analysis
Cultural Analysis: An Interdisciplinary Forum on Folklore and Popular Culture
Institute for Cultural Analysis, Nottingham
Centre for Cultural Analysis, Theory and History, University of Leeds (http://www.centrecath.leeds.ac.uk)
Master of Applied Cultural Analysis Lund University, University of Copenhagen
Cultural anthropology
Viking Age
The Viking Age was the period during the Middle Ages when Norsemen known as Vikings undertook large-scale raiding, colonising, conquest, and trading throughout Europe and reached North America. The term applies not only to their homeland of Scandinavia but also to any place significantly settled by Scandinavians during the period. The Scandinavians of the Viking Age are often referred to as Vikings as well as Norsemen, although few of them were Vikings in the sense of being engaged in piracy.
Voyaging by sea from their homelands in Denmark, Norway, and Sweden, the Norse people settled in the British Isles, Ireland, the Faroe Islands, Iceland, Greenland, Normandy, and the Baltic coast, and along the Dnieper and Volga trade routes in eastern Europe, where they were also known as Varangians. They also briefly settled in Newfoundland, becoming the first Europeans to reach North America. The Norse-Gaels, Normans, Rus' people, Faroese, and Icelanders emerged from these Norse colonies. The Vikings founded several kingdoms and earldoms in Europe: the Kingdom of the Isles (Suðreyjar), Orkney (Norðreyjar), York (Jórvík) and the Danelaw (Danalǫg), Dublin (Dyflin), Normandy, and Kievan Rus' (Garðaríki). The Norse homelands were also unified into larger kingdoms during the Viking Age, and the short-lived North Sea Empire included large swathes of Scandinavia and Britain. In 1021, Vikings were present in North America, though that date was not determined until a millennium later.
Several things drove this expansion. The Vikings were drawn by the growth of wealthy towns and monasteries overseas, and by weak kingdoms. They may also have been pushed to leave their homeland by overpopulation, lack of good farmland, and political strife arising from the unification of Norway. The aggressive expansion of the Carolingian Empire and the forced conversion of the neighbouring Saxons to Christianity may also have been a factor. Innovations in sailing had allowed the Vikings to voyage farther and for longer in the first place.
Information about the Viking Age is drawn largely from primary sources written by those the Vikings encountered, as well as archaeology, supplemented with secondary sources such as the Icelandic Sagas.
Context
In England, the Viking attack of 8 June 793 that destroyed the abbey on Lindisfarne, a centre of learning on an island off the northeast coast of England in Northumberland, is regarded as the beginning of the Viking Age. Judith Jesch has argued that the start of the Viking Age can be pushed back to 700–750, as it is unlikely that the Lindisfarne attack was the first attack, and given archaeological evidence suggesting contacts between Scandinavia and the British Isles earlier in the century. The earliest raids were most likely small in scale, but they expanded during the 9th century.
In the Lindisfarne attack, monks were killed in the abbey, thrown into the sea to drown, or carried away as slaves along with the church treasures, giving rise to the traditional (but unattested) prayer A furore Normannorum libera nos, Domine, "Free us from the fury of the Northmen, Lord." Three Viking ships had beached in Weymouth Bay four years earlier (although due to a scribal error the Anglo-Saxon Chronicle dates this event to 787 rather than 789), but that incursion may have been a trading expedition that went wrong rather than a piratical raid. Lindisfarne was different. The Viking devastation of Northumbria's Holy Island was reported by the Northumbrian scholar Alcuin of York, who wrote: "Never before in Britain has such a terror appeared". Vikings were portrayed as wholly violent and bloodthirsty by their enemies. Robert of Gloucester's Chronicle, c. 1300, mentions Viking attacks on the people of East Anglia, wherein the Vikings are described as "wolves among sheep".
The first challenges to the many negative depictions of Vikings in Britain emerged in the 17th century. Pioneering scholarly works on the Viking Age reached only a small readership there, while linguists traced the Viking Age origins of rural idioms and proverbs. New dictionaries and grammars of the Old Icelandic language appeared, enabling more Victorian scholars to read the primary texts of the Icelandic Sagas.
In Scandinavia, the 17th-century Danish scholars Thomas Bartholin and Ole Worm and Swedish scholar Olaus Rudbeck were the first to use runic inscriptions and Icelandic Sagas as primary historical sources. During the Enlightenment and Nordic Renaissance, historians such as the Icelandic-Norwegian Thormodus Torfæus, Danish-Norwegian Ludvig Holberg, and Swedish Olof von Dalin developed a more "rational" and "pragmatic" approach to historical scholarship.
By the latter half of the 18th century, while the Icelandic sagas were still used as important historical sources, the Viking Age had again come to be regarded as a barbaric and uncivilised period in the history of the Nordic countries. Scholars outside Scandinavia did not begin to extensively reassess the achievements of the Vikings until the 1890s, recognising their artistry, technological skills, and seamanship.
Background
The Vikings who invaded western and eastern Europe were mainly pagans from the same area as present-day Denmark, Norway, and Sweden. They also settled in the Faroe Islands, Ireland, Iceland, peripheral Scotland (Caithness, the Hebrides and the Northern Isles), Greenland, and Canada.
Their North Germanic language, Old Norse, became the precursor to present-day Scandinavian languages. By 801, a strong central authority appears to have been established in Jutland, and the Danes were beginning to look beyond their own territory for land, trade, and plunder.
In Norway, mountainous terrain and fjords formed strong natural boundaries. Communities remained independent of each other, unlike the situation in lowland Denmark. By 800, some 30 small kingdoms existed in Norway.
The sea was the easiest means of communication between the Norwegian kingdoms and the outside world. In the eighth century, Scandinavians began to build ships of war and send them on raiding expeditions, which started the Viking Age. The North Sea rovers were traders, colonisers, explorers, and plunderers, notorious throughout England, Scotland, Ireland, Wales and other parts of Europe for their brutality.
Probable causes
Many theories are posited for the cause of the Viking invasions; the will to explore likely played a major role. At the time, England, Wales, and Ireland were vulnerable to attack, being divided into many different warring kingdoms in a state of internal disarray, while the Franks were well defended. Overpopulation, especially near the Scandes, was a possible reason, although some disagree with this theory. Technological advances like the use of iron, and a shortage of women due to selective female infanticide, also likely had an impact. Tensions caused by Frankish expansion to the south of Scandinavia, and the subsequent Frankish attacks upon the Viking peoples, may have also played a role in Viking pillaging. Harald I of Norway ("Harald Fairhair") had united Norway around this time and displaced many peoples; as a result, these people sought new bases from which to launch counter-raids against Harald.
Debate among scholars is ongoing as to why the Scandinavians began to expand from the eighth through 11th centuries. Various factors have been highlighted: demographic, economic, ideological, political, technological, and environmental models.
Demographic models
Barrett observes that prior scholarship examining the causes of the Viking Age in terms of demographic determinism has generated a "wide variety of possible models". While admitting that Scandinavia did share in the general European population and settlement expansion at the end of the first millennium, he dismisses "population pressure" as a realistic cause of the Viking Age. Bagge alludes to the evidence of demographic growth at the time, manifested in an increase of new settlements, but he declares that a warlike people do not require population pressure to resort to plundering abroad. He grants that although population increase was a factor in this expansion, it was not the incentive for such expeditions. According to Ferguson, the proliferation of the use of iron in Scandinavia at the time increased agricultural yields, allowing for demographic growth that strained the limited capacity of the land. As a result, many Scandinavians found themselves with no property and no status. To remedy this, these landless men took to piracy to obtain material wealth. The population continued to grow, and the pirates looked further and further beyond the borders of the Baltic, and eventually into all of Europe. Historian Anders Winroth has also challenged the "overpopulation" thesis, arguing that scholars are "simply repeating an ancient cliché that has no basis in fact."
Economic model
The economic model states that the Viking Age was the result of growing urbanism and trade throughout mainland Europe. As the Islamic world grew, so did its trade routes, and the wealth which moved along them was pushed further and further north. In Western Europe, proto-urban centres such as those with names ending in wich, the so-called -wich towns of Anglo-Saxon England, began to boom during the prosperous era known as the "Long Eighth Century". The Scandinavians, like many other Europeans, were drawn to these wealthier "urban" centres, which soon became frequent targets of Viking raids. The connection of the Scandinavians to larger and richer trade networks lured the Vikings into Western Europe, and soon the rest of Europe and parts of the Middle East. In England, hoards of Viking silver, such as the Cuerdale Hoard and the Vale of York Hoard, offer insight into this phenomenon. Barrett rejects this model, arguing that the earliest recorded Viking raids were in Western Norway and northern Britain, which were not highly economically integrated areas. He proposes a version of the economic model that points to new economic incentives stemming from a "bulge" in the population of young Scandinavian men, impelling them to engage in maritime activity due to limited economic alternatives.
Ideological model
This era coincided with the Medieval Warm Period (800–1300) and ended with the onset of the Little Ice Age (about 1250–1850). The start of the Viking Age, with the sack of Lindisfarne, also coincided with Charlemagne's Saxon Wars, the Christian wars against pagans in Saxony. Bruno Dumézil theorises that the Viking attacks may have been a response to the spread of Christianity among pagan peoples. Because of the penetration of Christianity into Scandinavia, serious conflict divided Norway for almost a century.
Political model
The first of the two main components of the political model is the external "pull" factor: the weak political bodies of Britain and Western Europe made attractive targets for Viking raiders. The reasons for these weaknesses vary, but they generally amounted to decentralised polities or poorly defended religious sites, which raiders could easily sack and then retreat from, and which were thus frequently raided. The second is the internal "push" factor: in the period just before the Viking Age, Scandinavia was undergoing a mass centralisation of power in the modern-day countries of Denmark, Sweden, and especially Norway. This centralisation forced hundreds of chieftains from their lands, which were slowly being appropriated by the kings and dynasties that began to emerge. As a result, many of these chiefs sought refuge elsewhere and began harrying the coasts of the British Isles and Western Europe. Anders Winroth argues that purposeful choices by warlords "propelled the Viking Age movement of people from Scandinavia."
Technological model
This model suggests that the Viking Age occurred as a result of technological innovations that allowed the Vikings to go on their raids in the first place. There is no doubt that piracy existed in the Baltic before the Viking Age, but developments in sailing technology and practice made it possible for early Viking raiders to attack lands farther away. Among these developments were the use of larger sails, tacking practices, and 24-hour sailing. Anders Winroth writes, "If early medieval Scandinavians had not become exquisite shipwrights, there would have been no Vikings and no Viking Age."
These models constitute much of what is known about the motivations for and the causes of the Viking Age. In all likelihood, the beginning of this age was the result of some combination of the aforementioned hypotheses.
The Viking colonisation of islands in the North Atlantic has in part been attributed to a period of favourable climate (the Medieval Climatic Optimum), as the weather was relatively stable and predictable, with calm seas. Sea ice was rare, harvests were typically strong, and fishing conditions were good.
Overview
The earliest date given for the coming of Vikings to England is 789, during the reign of King Beorhtric of Wessex. According to the Anglo-Saxon Chronicle, three Norwegian boats from Hordaland (Old Norse: Hǫrðalandi) landed at the Isle of Portland off the coast of Dorset. They apparently were mistaken for merchants by a royal official, Beaduhard, a king's reeve who attempted to force them to come to the king's manor, whereupon they killed the reeve and his men. The beginning of the Viking Age in the British Isles is often set at 793, when, as recorded in the Anglo-Saxon Chronicle, the Northmen raided the important island monastery of Lindisfarne; the Chronicle dates the raid to January, but the generally accepted date is 8 June.
In 794, according to the Annals of Ulster, a serious attack was made on Lindisfarne's mother-house of Iona, which was followed in 795 by raids upon the northern coast of Ireland. From bases there, the Norsemen attacked Iona again in 802, causing great slaughter amongst the Céli Dé Brethren, and burning the abbey to the ground.
The Vikings primarily targeted Ireland until 830, as England and the Carolingian Empire were able to fight them off. After 830, however, the Vikings had considerable success against England, the Carolingian Empire, and other parts of Western Europe, exploiting disunity within the Carolingian Empire and pitting the English kingdoms against one another.
The Kingdom of the Franks under Charlemagne was particularly devastated by these raiders, who could sail up the Seine with near impunity. Near the end of Charlemagne's reign (and throughout the reigns of his sons and grandsons), a string of Norse raids began, culminating in a gradual Scandinavian conquest and settlement of the region now known as Normandy in 911. The Frankish king Charles the Simple granted the Duchy of Normandy to the Viking war leader Rollo (a chieftain of disputed Norwegian or Danish origins) in order to stave off attacks by other Vikings. Charles gave Rollo the title of duke. In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups. Several generations later, the Norman descendants of these Viking settlers not only identified themselves as Norman, but also carried the Norman language (either a French dialect or a Romance language which can be classified as one of the Oïl languages along with French, Picard and Walloon), and their Norman culture, into England in 1066. With the Norman Conquest, they became the ruling aristocracy of Anglo-Saxon England.
The clinker-built longships used by the Scandinavians were uniquely suited to both deep and shallow waters. They extended the reach of Norse raiders, traders, and settlers along coastlines and along the major river valleys of north-western Europe. Rurik also expanded to the east, becoming ruler of the city of Novgorod (which means "new city") on the Volkhov River in 859, either by conquest or by invitation of the local people. His successors moved further, founding the early East Slavic state of Kievan Rus' with its capital in Kiev. This persisted until 1240, when the Mongols invaded Kievan Rus'.
Other Norse people continued south to the Black Sea and then on to Constantinople. The eastern connections of these "Varangians" brought Byzantine silk, a cowrie shell from the Red Sea, and even coins from Samarkand to Viking York.
In 884, an army of Danish Vikings was defeated at the Battle of Norditi (also called the Battle of Hilgenried Bay) on the German North Sea coast by a Frisian army under Archbishop Rimbert of Bremen-Hamburg, which precipitated the complete and permanent withdrawal of the Vikings from East Frisia. In the 10th and 11th centuries, Saxons and Slavs began to use trained mobile cavalry successfully against Viking foot soldiers, making it hard for Viking invaders to fight inland.
In Scandinavia, the Viking Age is considered by some scholars to have ended with the establishment of royal authority and of Christianity as the dominant religion. Scholars have proposed various end dates, but many place the end in the 11th century. The year 1000 is sometimes used, as that was the year in which Iceland converted to Christianity, sometimes taken to mark the completion of the Norse world's conversion. The death of Harthacnut, the Danish King of England, in 1042 has also been used as an end date. History does not often allow such clear-cut separation between arbitrary "ages", however, and it is not easy to pin down a single date that applies to all the Viking world. The Viking Age was not a "monolithic chronological period" spanning three or four hundred years, but was characterised by various distinct phases of Viking activity, and it is unlikely that it could be neatly assigned a terminal event. The end of the Viking era in Norway is marked by the Battle of Stiklestad in 1030, in which Óláfr Haraldsson (later known as Olav the Holy), a fervent Christianiser who dealt harshly with those suspected of clinging to pagan cults, was killed. Although Óláfr's army lost the battle, Christianity continued to spread, and after his death he became one of the subjects of the three miracle stories given in the Manx Chronicle. In Sweden, the reign of King Olof Skötkonung is considered the transition from the Viking Age to the Middle Ages, because he was the first Christian king of the Swedes and is associated with a growing influence of the church in what is today southwestern and central Sweden. Norse beliefs nonetheless persisted until the 12th century; of the Scandinavian kings, Olof was the last to adopt Christianity.
The end of the Viking Age is traditionally marked in England by the failed invasion attempted by the Norwegian king Harald III (Haraldr Harðráði), who was defeated by the Saxon king Harold Godwinson in 1066 at the Battle of Stamford Bridge; in Ireland by the capture of Dublin by Strongbow and his Hiberno-Norman forces in 1171; and in Scotland by the defeat of King Hákon Hákonarson at the Battle of Largs in 1263 by troops loyal to Alexander III. Godwinson was himself defeated within a month by another Viking descendant, William, Duke of Normandy. Scotland took its present form when it regained territory from the Norse between the 13th and the 15th centuries; the Western Isles and the Isle of Man remained under Scandinavian authority until 1266, and Orkney and Shetland belonged to the king of Norway as late as 1469. Consequently, a "long Viking Age" may stretch into the 15th century.
Northern Europe
England
According to the Anglo-Saxon Chronicle, Viking raiders struck England in 793 and raided Lindisfarne, the monastery that held Saint Cuthbert's relics, killing the monks and capturing the valuables. The raid marked the beginning of the "Viking Age of Invasion". Great but sporadic violence continued on England's northern and eastern shores, with small-scale raids across coastal England. While the initial raiding groups were small, a great amount of planning is believed to have been involved. The Vikings raided during the winter of 840–841, rather than the usual summer, having waited on an island off Ireland.
In 850, the Vikings overwintered for the first time in England, on the island of Thanet, Kent. In 854, a raiding party overwintered a second time, at the Isle of Sheppey in the Thames estuary. In 864, they returned to Thanet for their winter encampment.
The following year, the Great Heathen Army, led by the brothers Ivar the Boneless, Halfdan, and Ubba, and also by another Viking, Guthrum, arrived in East Anglia. They proceeded to cross England into Northumbria and captured York, establishing a Viking community in Jorvik, where some settled as farmers and craftsmen. Most of the English kingdoms, being in turmoil, could not stand against the Vikings. In 867, Northumbria became the northern kingdom of the coalescing Danelaw, after its conquest by the Ragnarsson brothers, who installed an Englishman, Ecgberht, as a puppet king. By 870, the "Great Summer Army" had arrived in England, led by a Viking leader called Bagsecg and his five earls. Aided by the Great Heathen Army (which had already overrun much of England from its base in Jorvik), Bagsecg's forces, together with Halfdan's forces through an alliance, raided much of England until 871, when they planned an invasion of Wessex. On 8 January 871, Bagsecg was killed at the Battle of Ashdown along with his earls. As a result, many of the Vikings returned to northern England, where Jorvik had become the centre of the Viking kingdom, but Alfred of Wessex managed to keep them out of his country. Alfred and his successors continued to drive back the Viking frontier and took York. A new wave of Vikings appeared in England in 947, when Eric Bloodaxe captured York.
In 1003, the Danish king Sweyn Forkbeard started a series of raids against England to avenge the St. Brice's Day massacre of England's Danish inhabitants, culminating in a full-scale invasion that led to Sweyn being crowned king of England in 1013. Sweyn was also king of Denmark and parts of Norway at this time. The throne of England passed to Edmund Ironside of Wessex after Sweyn's death in 1014. Sweyn's son, Cnut the Great, won the throne of England in 1016 through conquest. When Cnut the Great died in 1035, he was king of Denmark, England, Norway, and parts of Sweden. Harold Harefoot became king of England after Cnut's death, and Viking rule of England ceased.
The Viking presence declined until 1066, when the Vikings lost their final battle with the English at Stamford Bridge. The death in the battle of King Harald Hardrada of Norway ended any hope of reviving Cnut's North Sea Empire, and it is because of this, rather than the Norman conquest, that 1066 is often taken as the end of the Viking Age. Nineteen days later, a large army led by senior Normans, themselves mostly male-line descendants of Norsemen, invaded England and defeated the weakened English army at the Battle of Hastings; the invaders were joined by others from across Norman gentry and ecclesiastical society. There were several unsuccessful attempts by Scandinavian kings to regain control of England, the last of which took place in 1086.
In 1152, Eystein II of Norway led a plundering raid down the east coast of Britain.
Ireland
In 795, small bands of Vikings began plundering monastic settlements along the coast of Gaelic Ireland. The Annals of Ulster state that in 821 the Vikings plundered Howth and "carried off a great number of women into captivity". From 840 the Vikings began building fortified encampments, longphorts, on the coast and overwintering in Ireland. The first were at Dublin and Linn Duachaill. Their attacks became bigger and reached further inland, striking larger monastic settlements such as Armagh, Clonmacnoise, Glendalough, Kells, and Kildare, and also plundering the ancient tombs of Brú na Bóinne. Viking chief Thorgest is said to have raided the whole midlands of Ireland until he was killed by Máel Sechnaill I in 845.
In 853, Viking leader Amlaíb (Olaf) became the first king of Dublin. He ruled along with his brothers Ímar (possibly Ivar the Boneless) and Auisle. Over the following decades, there was regular warfare between the Vikings and the Irish, and between two groups of Vikings: the Dubgaill and Finngaill (dark and fair foreigners). The Vikings also briefly allied with various Irish kings against their rivals. In 866, Áed Findliath burnt all Viking longphorts in the north, and they never managed to establish permanent settlements in that region. The Vikings were driven from Dublin in 902.
They returned in 914, now led by the Uí Ímair (House of Ivar). During the next eight years the Vikings won decisive battles against the Irish, regained control of Dublin, and founded settlements at Waterford, Wexford, Cork, and Limerick, which became Ireland's first large towns. They were important trading hubs, and Viking Dublin was the biggest slave port in western Europe.
These Viking territories became part of the patchwork of kingdoms in Ireland. Vikings intermarried with the Irish and adopted elements of Irish culture, becoming the Norse-Gaels. Some Viking kings of Dublin, such as Sitric Cáech, Gofraid ua Ímair, Olaf Guthfrithson, and Olaf Cuaran, also ruled the kingdom of the Isles and York. Sigtrygg Silkbeard was "a patron of the arts, a benefactor of the church, and an economic innovator" who established Ireland's first mint, in Dublin.
In 980, Máel Sechnaill Mór defeated the Dublin Vikings and forced them into submission. Over the following thirty years, Brian Boru subdued the Viking territories and made himself High King of Ireland. The Dublin Vikings, together with Leinster, twice rebelled against him, but they were defeated in the battles of Glenmama and Clontarf. After the Battle of Clontarf, the Dublin Vikings could no longer "single-handedly threaten the power of the most powerful kings of Ireland". Brian's rise to power and conflict with the Vikings is chronicled in Cogad Gáedel re Gallaib ("The War of the Irish with the Foreigners").
Scotland
While few records are known, the Vikings are thought to have led their first raids in Scotland on the holy island of Iona in 794, the year following the raid on the other holy island of Lindisfarne, Northumbria.
In 839, a large Norse fleet invaded via the rivers Tay and Earn, both highly navigable, and reached into the heart of the Pictish kingdom of Fortriu. They defeated Eogán mac Óengusa, king of the Picts, his brother Bran, and the king of the Scots of Dál Riata, Áed mac Boanta, along with many members of the Pictish aristocracy in battle. The sophisticated kingdom that had been built fell apart, as did the Pictish leadership, which had been stable for more than 100 years since the time of Óengus mac Fergusa; the accession of Cináed mac Ailpín as king of both Picts and Scots has been attributed to the aftermath of this event.
In 870, the Britons of the Old North around the Firth of Clyde came under Viking attack as well. The fortress atop Alt Clut ("Rock of the Clyde", the Brythonic name for Dumbarton Rock, which had become the metonym for their kingdom) was besieged by the Viking kings Amlaíb and Ímar. After four months, its water supply failed, and the fortress fell. The Vikings are recorded to have transported a vast prey of British, Pictish, and English captives back to Ireland. These prisoners may have included the ruling family of Alt Clut including the king Arthgal ap Dyfnwal, who was slain the following year under uncertain circumstances. The fall of Alt Clut marked a watershed in the history of the realm. Afterwards, the capital of the restructured kingdom was relocated about 12 miles (20 km) up the River Clyde to the vicinity of Govan and Partick (within present-day Glasgow), and became known as the Kingdom of Strathclyde, which persisted as a major regional political player for another 150 years.
The land that now comprises most of the Scottish Lowlands had previously been the northernmost part of the Anglo-Saxon kingdom of Northumbria, which fell apart with its Viking conquest; these lands were never regained by the Anglo-Saxons, or England. The upheaval and pressure of Viking raiding, occupation, conquest, and settlement resulted in alliances among the formerly enemy peoples that comprised what would become present-day Scotland. Over the subsequent 300 years, this upheaval and pressure led to the unification of the previously contending Gaelic, Pictish, British, and English kingdoms, first into the Kingdom of Alba and finally into the greater Kingdom of Scotland. The Viking Age in Scotland came to an end in the 13th century, and the last vestiges of Norse power in the Scottish seas and islands were relinquished only in the 15th century.
Earldom of Orkney
By the mid-9th century, the Norsemen had settled in Shetland and Orkney (the Nordreys, Norðreyjar), in the Hebrides and the Isle of Man (the Sudreys, Suðreyjar, a name which survives in the Diocese of Sodor and Man), and in parts of mainland Scotland. The Norse settlers integrated to some extent with the local Gaelic population (see Norse-Gaels) in the Hebrides and Man. These areas were ruled over by local Jarls, originally captains of ships or hersirs. The Jarl of Orkney and Shetland, however, claimed supremacy.
In 875, King Harald Fairhair led a fleet from Norway to Scotland. In his attempt to unite Norway, he had found that many of those opposed to his rise to power had taken refuge in the Isles, from where they were raiding not only foreign lands but also Norway itself. After organising a fleet, Harald was able to subdue the rebels, and in doing so brought the independent Jarls under his control; many of the rebels fled to Iceland. He found himself ruling not only Norway, but also the Isles, Man, and parts of Scotland.
Kings of the Isles
In 876, the Norse-Gaels of Mann and the Hebrides rebelled against Harald. A fleet led by Ketil Flatnose was sent against them to regain control. Following his success, Ketil was to rule the Sudreys as a vassal of King Harald. His grandson, Thorstein the Red, and Sigurd the Mighty, Jarl of Orkney, invaded Scotland and were able to exact tribute from nearly half the kingdom until their deaths in battle. Ketil, however, declared himself King of the Isles; he was eventually outlawed and, fearing the bounty on his head, fled to Iceland.
The Norse-Gaelic Kings of the Isles continued to act semi-independently, forming a defensive pact with the kings of Scotland and Strathclyde in 973. In 1095, the King of Mann and the Isles, Godred Crovan, was killed by Magnus Barelegs, King of Norway. Magnus and King Edgar of Scotland agreed on a treaty: the islands would be controlled by Norway, but the mainland territories would go to Scotland. The King of Norway nominally continued to be king of the Isles and Man. In 1156, however, the kingdom was split in two. The Western Isles and Man continued to be called the "Kingdom of Man and the Isles", but the Inner Hebrides came under the influence of Somerled, a Gaelic speaker, who was styled 'King of the Hebrides'. His kingdom was to develop latterly into the Lordship of the Isles.
In eastern Aberdeenshire, the Danes invaded at least as far north as the area near Cruden Bay.
The Jarls of Orkney continued to rule much of northern Scotland until 1196, when Harald Maddadsson agreed to pay tribute to William the Lion, King of Scots, for his territories on the mainland.
The end of the Viking Age proper in Scotland is generally considered to be in 1266. In 1263, King Haakon IV of Norway, in retaliation for a Scots expedition to Skye, arrived on the west coast with a fleet from Norway and Orkney. His fleet linked up with those of King Magnus of Man and King Dougal of the Hebrides. After peace talks failed, his forces met with the Scots at Largs, in Ayrshire. The battle proved indecisive, but it did ensure that the Norse were not able to mount a further attack that year. Haakon died overwintering in Orkney, and by 1266 his son Magnus the Law-Mender had ceded the Kingdom of Man and the Isles, with all territories on mainland Scotland, to Alexander III through the Treaty of Perth.
Orkney and Shetland continued to be ruled as autonomous Jarldoms under Norway until 1468, when King Christian I pledged them as security on the dowry of his daughter, who was betrothed to James III of Scotland. Attempts were made during the 17th and 18th centuries to redeem Shetland, without success, and Charles II ratified the pawning in the Act for annexation of Orkney and Shetland to the Crown 1669, explicitly exempting them from any "dissolution of His Majesty's lands". They are now officially part of the United Kingdom.
Wales
Incursions in Wales were decisively reversed at the Battle of Buttington in Powys, in 893, when a combined Welsh and Mercian army under Æthelred, Lord of the Mercians, defeated a Danish band.
Wales was not colonised by the Vikings as heavily as eastern England. The Vikings did, however, settle in the south around St. David's, Haverfordwest, and Gower, among other places. Place names such as Skokholm, Skomer, and Swansea remain as evidence of the Norse settlement. The Vikings, however, did not subdue the Welsh mountain kingdoms.
Iceland
According to the Icelandic sagas, Iceland was discovered by Naddodd, a Viking from the Faroe Islands, after which it was settled in the late 9th century, mostly by Norwegians fleeing the oppressive rule of Harald Fairhair. While harsh, the land allowed for a pastoral farming life familiar to the Norse. According to the saga of Erik the Red, when Erik was exiled from Iceland, he sailed west and pioneered Greenland.
Kvenland
Kvenland, known as Cwenland, Kænland, and by similar terms in medieval sources, is an ancient name for an area in Scandinavia and Fennoscandia. A contemporary reference to Kvenland appears in an Old English account written in the 9th century, drawing on information provided by the Norwegian adventurer and traveller Ohthere. Kvenland, in that or a close spelling, is also known from Nordic sources, primarily Icelandic, but including one possibly written in the area of modern-day Norway.
All the remaining Nordic sources discussing Kvenland, using that or a close spelling, date to the 12th and 13th centuries, but some of them are believed to be, at least in part, rewrites of older texts. Other references and possible references to Kvenland under other names or spellings are discussed in the main article on Kvenland.
Estonia
During the Viking Age, Estonia was a Finnic area divided between two major cultural regions, a coastal and an inland one, corresponding to the historical cultural and linguistic division between Northern and Southern Estonian. These two areas were further divided between loosely allied regions. The Viking Age in Estonia is considered part of the local Iron Age. Some 16th-century Swedish chronicles attribute the Pillage of Sigtuna in 1187 to Estonian raiders.
The society, economy, settlement, and culture of the territory of present-day Estonia are studied mainly through archaeological sources. The era is seen to have been a period of rapid change, and the Estonian peasant culture came into existence by the end of the Viking Age. The overall understanding of the Viking Age in Estonia remains fragmentary and superficial because of the limited amount of surviving source material. The main sources for understanding the period are the remains of the farms and fortresses of the era, cemeteries, and a large number of excavated objects.
The landscape of Ancient Estonia featured numerous hillforts, some of the later hillforts on Saaremaa being heavily fortified during the Viking Age and on into the 12th century. There were a number of late prehistoric or medieval harbour sites on the coast of Saaremaa, but none have been found that are large enough to be international trade centres. The Estonian islands also have a number of graves from the Viking Age, both individual and collective, with weapons and jewellery. Weapons found in Estonian Viking Age graves are of types common throughout Northern Europe and Scandinavia.
Curonians
The Curonians were known as fierce warriors, excellent sailors and pirates. They were involved in several wars and alliances with Swedish, Danish, and Icelandic Vikings.
According to the Norna-Gests þáttr saga, Sigurd Hring ("ring"), a legendary king of Denmark and Sweden, fought against invading Curonians and Kvens (Kvænir) in the southern part of what is today Sweden:
"Sigurd Ring (Sigurðr) was not there, since he had to defend his land, Sweden (Svíþjóð), since Curonians (Kúrir) and Kvænir were raiding there."
Curonians are mentioned among other participants of the Battle of Brávellir.
Grobin (Grobiņa) was the main centre of the Curonians during the Vendel Age. From the 10th to the 13th century, Palanga served as an important economic, political, and cultural centre for the Curonians. Chapter 46 of Egils Saga describes one expedition by the Vikings Thorolf and Egill Skallagrímsson in Courland. According to some scholars, the Curonians took part in the 1187 attack on Sigtuna, Sweden's main city at the time. Curonians established temporary settlements near Riga and in overseas regions including eastern Sweden and the islands of Gotland and Bornholm.
Scandinavian settlements existed along the southeastern Baltic coast in Truso and Kaup (Old Prussia), Palanga (Samogitia, Lithuania) as well as Grobin (Courland, Latvia).
Eastern Europe
The Varangians or Varyagi were Scandinavians, often Swedes, who migrated eastwards and southwards through what is now Belarus, Russia, and Ukraine, mainly in the 9th and 10th centuries. Engaging in trade, piracy, and mercenary activities, they roamed the river systems and portages of Gardariki, reaching the Caspian Sea and Constantinople.
Contemporary English publications also use the name "Viking" for early Varangians in some contexts.
The term Varangian remained in usage in the Byzantine Empire until the 13th century, by then largely disconnected from its Scandinavian roots. Having settled Aldeigja (Ladoga) in the 750s, Scandinavian colonists were probably an element in the early ethnogenesis of the Rus' people, and likely played a role in the formation of the Rus' Khaganate. The Varangians are first mentioned by the Primary Chronicle as having exacted tribute from the Slavic and Finnic tribes in 859. It was a time of rapid Viking expansion in Northern Europe; at about the same date, England began to pay Danegeld and the Curonians of Grobin faced an invasion by the Swedes.
The text of the Primary Chronicle says that in 860–862, the Finnic and Slavic tribes rebelled against the Varangian Rus', driving them back to Scandinavia, but soon started to conflict with each other. The disorder prompted the tribes to invite the Varangian Rus' back to "come and rule and reign over us" and bring peace to the region. The relationship was to some extent reciprocal, with the Varangians defending the cities that they ruled. Led by Rurik and his brothers Truvor and Sineus, the Varangians settled around the town of Novgorod (Holmgarðr).
In the 9th century, the Rus' operated the Volga trade route, which connected northern Russia (Gardariki) with the Middle East (Serkland). As the Volga route declined by the end of the century, the trade route from the Varangians to the Greeks rapidly overtook it in popularity. Apart from Ladoga and Novgorod, Gnezdovo and Gotland were major centres for Varangian trade.
The consensus among Western scholars (disputed by those Russian scholars who believe the Rus' to have been a Slavic tribe) is that the Rus' people originated in what is currently coastal eastern Sweden around the 8th century, and that their name has the same origin as that of Roslagen in Sweden. The maritime districts of East Götland and Uppland were known in earlier times as Roþer or Roþin, and later as Roslagen. According to Thorsten Andersson, the Russian folk name Rus' ultimately derives from the noun roþer ('rowing'), a word also used for naval campaigns in the leþunger (Old Norse: leiðangr) system of organising a coastal fleet. The Old Swedish place name Roþrin, in the older iteration Roþer, contains the word roþer and is still used in the form of Roden as a historical name for the coastal areas of Svealand; in modern times the name survives as Roslagen, the name of the coastal area of Uppland province. According to Stefan Brink, the name Rus derives from the words ro ('row') and rodd ('a rowing session').
The term "Varangian" became more common from the 11th century onwards. In these years, Swedish men left to enlist in the Byzantine Varangian Guard in such numbers that a medieval Swedish law, Västgötalagen, used in the province Västergötland, declared that no one could inherit while staying in "Greece"—the then Scandinavian term for the Byzantine Empire—to stop the emigration, especially as two other European courts simultaneously also recruited Scandinavians: Kievan Rus' and London 1018–1066 (the Þingalið).
In contrast to the notable Scandinavian influence in Normandy and the British Isles, Varangian culture did not survive to a great extent in the East. Instead, the Varangian ruling classes of the two powerful city-states of Novgorod and Kiev were thoroughly Slavicised by the beginning of the 11th century. However, some evidence suggests that Old Norse was spoken amongst the Rus' for longer: according to the Nationalencyklopedin (the Swedish national encyclopedia), Old East Norse was probably still spoken in Kievan Rus', at Novgorod, until the 13th century.
Central Europe
Viking Age Scandinavian settlements were set up along the southern coast of the Baltic Sea, primarily for trade purposes. Their emergence appears to coincide with the settlement and consolidation of the coastal Slavic tribes in the respective areas. The archaeological record indicates that substantial cultural exchange between Scandinavian and Slavic traditions and technologies occurred: Slavic and Scandinavian craftsmen are known to have used different techniques and production processes. In the lagoons and deltas of the eastern and southern Baltic there is evidence of Slavic boatbuilding practices somewhat divergent from the Viking tradition, and of a fusion of the two at a Viking Age shipyard site on the island of Falster in Denmark.
Slavic-Scandinavian settlements on the Mecklenburgian coast include the maritime trading centre Reric (Groß Strömkendorf) on the eastern coast of Wismar Bay, and the multi-ethnic trade emporium Dierkow (near Rostock). Reric was set up around the year 700, but following later warfare between the Obodrites and the Danes, the Danish king, whose subjects the inhabitants were, resettled them to Haithabu. Dierkow apparently dates from the late 8th to the early 9th century.
Scandinavian settlements on the Pomeranian coast include Wolin (on the isle of Wolin), Ralswiek (on the isle of Rügen), Altes Lager Menzlin (on the lower Peene river), and Bardy-Świelubie near modern Kołobrzeg. Menzlin was set up in the mid-8th century. Wolin and Ralswiek began to prosper in the course of the 9th century. A merchants' settlement has also been suggested near Arkona, but no archaeological evidence supports this theory. Menzlin and Bardy-Świelubie were vacated in the late 9th century; Ralswiek survived into the new millennium, but by the time written chronicles reported news of the island of Rügen in the 12th century, it had lost all its importance. Wolin, thought to be identical with the legendary Vineta and the semi-legendary Jomsborg, base of the Jomsvikings, was destroyed in 1043 by the Dano-Norwegian king Magnus the Good, according to the Heimskringla. Castle building by the Slavs seems to have reached a high level on the southern Baltic coast in the 8th and 9th centuries, possibly explained by a threat coming from the sea or from the trade emporiums, as Scandinavian arrowheads found in the area indicate advances penetrating as far as the lake chains in the Mecklenburgian and Pomeranian hinterlands.
Western Europe
Frisia
Frisia was a region which spanned from around modern-day Bruges to the islands on the west coast of Jutland, including large parts of the Low Countries. The region was progressively brought under Frankish control (see the Frisian-Frankish wars), but the Christianisation of the local population and its cultural assimilation were slow processes. Several Frisian towns, most notably Dorestad, were raided by Vikings; Rorik of Dorestad was a famous Viking raider in Frisia, and the Vikings most likely had a base of operations on Wieringen. Viking leaders such as Godfrid, Duke of Frisia, as well as Rorik, took an active role in Frisian politics.
France
The French region of Normandy takes its name from the Viking invaders who were called Normanni, which means 'men of the North'.
The first Viking raids began between 790 and 800 along the coasts of western France. They were carried out primarily in the summer, as the Vikings wintered in Scandinavia. Francia lost several coastal areas during the reign of Louis the Pious (814–840), and the Vikings took advantage of the quarrels in the royal family that followed his death to plant their first colony in the south-west (Gascony) of the kingdom of Francia, a region more or less abandoned by the Frankish kings after their two defeats at Roncevaux. Incursions in 841 caused severe damage to Rouen and Jumièges. The Viking attackers sought to capture the treasures stored at monasteries, easy prey given the monks' lack of defensive capacity. In 845, an expedition up the Seine reached Paris. The presence of Carolingian deniers found in 1871 in a hoard at Mullaghboden, County Limerick, where coins were neither minted nor normally used in trade, probably represents booty from the raids of 843–846.
However, from 885 to 886, Odo of Paris (Eudes de Paris) succeeded in defending Paris against Viking raiders. His military success allowed him to replace the Carolingians. In 911, a band of Viking warriors attempted to besiege Chartres but was defeated by Robert I of France. Robert's victory paved the way for the baptism, and settlement in Normandy, of the Viking leader Rollo. Rollo reached an agreement with Charles the Simple to sign the Treaty of Saint-Clair-sur-Epte, under which Charles gave Rouen and the area of present-day Upper Normandy to Rollo, establishing the Duchy of Normandy. In exchange, Rollo pledged vassalage to Charles, agreed to be baptised, and vowed to guard the estuaries of the Seine from further Viking attacks; at his baptism, Robert I of France stood as his godfather. The Duchy of Normandy later annexed further areas in northern France, expanding the territory originally negotiated.
The Scandinavian expansion included Danish and Norwegian as well as Swedish elements, all under the leadership of Rollo. By the end of the reign of Richard I of Normandy (known as Richard the Fearless, or Richard sans Peur) in 996, all descendants of the Vikings had become, according to the Cambridge Medieval History (Volume 5, Chapter XV), "not only Christians but in all essentials Frenchmen". During the Middle Ages, the Normans created one of the most powerful feudal states of Western Europe. They conquered England and southern Italy in the 11th century, and played a key role in the Crusades.
Southern Europe
Italy
In 860, according to an account by the Norman monk Dudo of Saint-Quentin, a Viking fleet, probably under Björn Ironside and Hastein, landed at the Ligurian port of Luni and sacked the city. The Vikings then moved another 60 miles down the Tuscan coast to the mouth of the Arno, sacking Pisa and then, following the river upstream, the hill-town of Fiesole above Florence, among other victories around the Mediterranean, including in Sicily and at Nekor in North Africa (in present-day Morocco).
Many Anglo-Danish and Varangian mercenaries fought in southern Italy, including Harald Hardrada and William de Hauteville, who conquered parts of Sicily between 1038 and 1040, and Edgar the Ætheling, who fought in the Norman conquest of southern Italy. Runestones were raised in Sweden in memory of warriors who died in Langbarðaland (Land of the Lombards), the Old Norse name for southern Italy.
Several Anglo-Danish and Norwegian nobles participated in the Norman conquest of southern Italy, such as Edgar the Ætheling, who left England in 1086, and Jarl Erling Skakke, who won his nickname ("Skakke", meaning bent head) after a battle against Arabs in Sicily. On the other hand, many Anglo-Danish rebels fleeing William the Conqueror joined the Byzantines in their struggle against Robert Guiscard, Duke of Apulia, in southern Italy.
Spain
After 842, the Vikings set up a permanent base at the mouth of the river Loire, from which they could strike as far as northern Spain. Vikings who settled in the Christian kingdoms were Hispanicised, while those in al-Andalus kept their ethnic identity and culture.
The southern coast of the Mediterranean Sea, both sides of the Strait of Gibraltar, and much of the Iberian peninsula were under Muslim rule when Vikings first entered the Mediterranean in the 9th century. The Vikings launched their campaigns from their strongholds in Francia into this realm of Muslim influence; following the coastline of the Kingdom of Asturias they sailed through the Gibraltar strait (known to them as Nǫrvasund, the 'Narrow Sound') into what they called Miðjarðarhaf, literally 'Middle of the earth' sea, with the same meaning as the Late Latin Mare Mediterrāneum.
The first Viking attacks in al-Andalus, in AD 844, greatly affected the region. Medieval texts such as the Chronicon albeldense and the Annales Bertiniani tell of a Viking fleet that left Toulouse and made raids in Asturias and Galicia; according to the Historia silense it had 60 ships. After being repulsed in Galicia (Ghilīsīa), the fleet sailed southward around the peninsula, raiding coastal towns along the way.
In Irene García Losquiño's telling, these Vikings navigated their boats up the river Guadalquivir towards Išbīliya (Seville) and destroyed Qawra (Coria del Río), a small town about 15 km south of the city. They then took Išbīliya itself, from which they controlled the region for several weeks; their attack on the city forced its inhabitants to flee to Qarmūnâ (Carmona), a fortified city. The Emirate of Qurṭuba (Córdoba) made great exertions to recover Išbīliya, and succeeded with the assistance of the Banu Qasi, who ruled over a semi-autonomous state in the Upper March of the Ebro Valley. Consequently, defensive walls were built at Išbīliya, and the emir Abd al-Raḥmān II invested in the construction of a large fleet of ships to protect the entrance of the Guadalquivir and the coast of southern al-Andalus, after which Viking fleets had difficulties battling the Andalusī armada.
Gwyn Jones writes that this Viking raid occurred on 1 October 844, when most of the Iberian peninsula was controlled by the emirate. In his account, a flotilla of about 80 Viking ships, after attacking Asturias, Galicia, and Lisbon, ascended the Guadalquivir to Išbīliya and besieged it for seven days, inflicting many casualties and taking numerous hostages with the intent to ransom them. Another group of Vikings went to Qādis (Cádiz) to plunder, while those at Išbīliya waited on Qubtil (Isla Menor), an island in the river, for the ransom money to arrive. Meanwhile, the emir of Qurṭuba, Abd ar-Rahman II, prepared a military contingent to meet them, and on 11 November a pitched battle ensued on the grounds of Talayata (Tablada). The Vikings held their ground, but the results were catastrophic for the invaders, who suffered a thousand casualties; four hundred were captured and executed, and some thirty ships were destroyed. It was not a total victory for the emir's forces, but the Viking survivors had to negotiate a peace to leave the area, surrendering their plunder and the hostages they had taken to sell as slaves, in exchange for food and clothing. According to the Arabist Lévi-Provençal, over time the few Norse survivors converted to Islam and settled as farmers in the area of Qawra, Qarmūnâ, and Moron, where they engaged in animal husbandry and made dairy products (reputedly the origin of Sevillian cheese). Knutson and Caitlin write that Lévi-Provençal offered no sources for this proposition of conversion to Islam by northern Europeans in al-Andalus, and that it thus "remains unsubstantiated".
In 859, a large Viking force again invaded al-Andalus, beginning a campaign along the coast of the Iberian Peninsula in which smaller groups assaulted various locations. They attacked Išbīliya (Seville) but were driven off, and returned down the Guadalquivir to the Strait of Gibraltar. The Vikings then sailed round Cape Gata and followed the coastline to the Kūra (cora) of Tudmir, raiding various settlements, as mentioned by the 10th-century historian Ibn Hayyān. They finally ventured inland, entering the mouth of the river Segura and sailing towards ḥiṣn Ūriyūla (Orihuela), whose inhabitants had fled. The assailants sacked this important town and, according to the Arab sources, attacked its fortress and burnt it to the ground. There is only brief mention in the historical record of this Viking army's attacks on south-eastern al-Andalus, including at al-Jazīra al-Khadrā (Algeciras), Ūriyūla, and the Juzur al-Balyār (جزُر البليار) (Balearic Islands).
Ibn Hayyān wrote about the Viking campaign of 859–861 in al-Andalus, perhaps relying on the account given by the Muslim historian Aḥmad al-Rāzī, who tells of a Viking fleet of sixty-two ships that sailed up to Išbīliya and occupied al-Jazīra al-Khadrā. The Muslims seized two of their ships, laden with goods and coins, off the coast of Shidūnah (Sidonia); the ships were destroyed and their Viking crews killed. The remaining vessels continued up the Atlantic coast and landed near Pamplona (called Banbalūna in Arabic), where they took its emir Gharsīa ibn Wanaqu (García Iñiquez) prisoner, until in 861 he was ransomed for 70,000 dinars.
According to the Annales Bertiniani, Danish Vikings embarked on a long voyage in 859, sailing eastward through the Strait of Gibraltar then up the river Rhône, where they raided monasteries and towns and established a base in the Camargue. Afterwards they raided Nakūr in what is now Morocco, kidnapped women of the royal family, and returned them when the emir of Córdoba paid their ransoms.
The Vikings made several further incursions, in the years 859, 966, and 971, with intentions more diplomatic than bellicose, although the invasion of 971 was repelled and the Viking fleet totally annihilated. Vikings attacked Talayata again in 889 at the instigation of Kurayb ibn Khaldun of Išbīliya. In 1015, a Viking fleet entered the river Minho and sacked the episcopal city of Tui, Galicia; no new bishop was appointed until 1070.
Portugal
In 844, a fleet of several dozen Viking longships with square brown sails appeared in the Mar da Palha ("Sea of Straw"), i.e., the mouth of the river Tagus. At the time, the city later called Lisbon was under Muslim rule and known in Arabic as al-Us̲h̲būna or al-ʾIšbūnah (الأشبونة). After a thirteen-day siege in which they plundered the surrounding countryside, the Vikings conquered al-Us̲h̲būna, but eventually retreated in the face of continued resistance by the townspeople led by their governor, Wahb Allah ibn Hazm.
The chronicler Ibn Hayyān, who wrote the most reliable early history of al-Andalus in his Kitāb al-muqtabis, quoted the Muslim historian Ahmad ibn Muhammad al-Rāzī on these events.
North America
Greenland
The Viking-Age settlements in Greenland were established in the sheltered fjords of the southern and western coast, in three separate areas along the western coast. While harsh, the microclimates along some fjords allowed for a pastoral lifestyle similar to that of Iceland, until the climate changed for the worse with the Little Ice Age.
The Eastern Settlement: The remains of about 450 farms have been found here. Erik the Red settled at Brattahlid on Ericsfjord.
The Middle Settlement, near modern Ivigtut, consisted of about 20 farms.
The Western Settlement, at modern Godthåbsfjord, was established before the 12th century and has been extensively excavated by archaeologists.
Mainland North America
In about 986, the Norwegian Vikings Bjarni Herjólfsson, Leif Ericson, and Þórfinnr Karlsefni sailed from Greenland and reached mainland North America, over 500 years before Christopher Columbus, and attempted to settle the land they called Vinland. They created a small settlement on the northern peninsula of present-day Newfoundland, near L'Anse aux Meadows. Conflict with indigenous peoples and a lack of support from Greenland brought the Vinland colony to an end within a few years. The archaeological remains are now a UNESCO World Heritage Site.
Technology
The Vikings were among the most advanced of all societies of the time in their naval technology, and were accomplished in other technologies as well. The Vikings were equipped with the technologically superior longships; for purposes of conducting trade, however, another type of ship, the knarr, wider and deeper in draft, was customarily used. The Vikings were competent sailors, adept in land warfare as well as at sea, and they often struck at accessible and poorly defended targets, usually with near impunity. The effectiveness of these tactics earned Vikings a formidable reputation as raiders and pirates.
The Vikings used their longships to travel vast distances and attain certain tactical advantages in battle. They could perform highly efficient hit-and-run attacks, in which they quickly approached a target, then left as rapidly as possible before a counter-offensive could be launched. Because of the ships' negligible draft, the Vikings could sail in shallow waters, allowing them to invade far inland along rivers. The ships were agile, and light enough to be carried over land from one river system to another. "Under sail, the same boats could tackle open water and cross the unexplored wastes of the North Atlantic." The ships' maximum speed was also prodigious for the time. The use of the longships ended when technology changed, and ships began to be constructed using saws instead of axes, resulting in inferior vessels.
While battles at sea were rare, they would occasionally occur when Viking ships attempted to board European merchant vessels in Scandinavian waters. When larger-scale battles ensued, Viking crews would rope together all nearby ships and slowly proceed towards the enemy targets. While advancing, the warriors hurled spears, arrows, and other projectiles at the opponents. When the ships were sufficiently close, melee combat would ensue using axes, swords, and spears until the enemy ship could be easily boarded. The roping technique allowed Viking crews to remain strong in numbers and act as a unit, but this uniformity also created problems: a ship in the line could not retreat or pursue hostiles without breaking the formation and cutting the ropes, which weakened the overall fleet and was a burdensome task to perform in the heat of battle. In general, these tactics enabled Vikings to quickly destroy the meagre opposition they faced during raids.
Together with an increasing centralisation of government in the Scandinavian countries, the old system of leidang (a fleet mobilisation system in which every skipreide, or ship community, had to maintain one ship and a crew) was discontinued as a purely military institution, as the duty to build and man a ship was soon converted into a tax. The Norwegian leidang was called out under Haakon Haakonson for his 1263 expedition to Scotland during the Scottish–Norwegian War, and the last recorded calling of it was in 1603. However, already by the 11th and 12th centuries, perhaps in response to the longships, European fighting ships were built with raised platforms fore and aft, from which archers could shoot down into the relatively low longships. This led to the defeat of longship navies in most subsequent naval engagements, for example against the Hanseatic League.
The Vikings were also said to have fine weapons. Generally, Vikings used axes as weapons because they required less iron to make; swords were typically seen as a mark of wealth. Spears were also a common weapon among Vikings. Great amounts of time and artistry were expended in the creation of Viking weapons, and ornamentation is commonly seen among them. Scandinavian architecture during the Viking Age most often involved wood, owing to the abundance of the material. Longhouses, a form of dwelling often featuring ornamentation, are commonly seen as the defining building of the Viking Age.
Exactly how the Vikings navigated the open seas with such success is unclear. A study published by the Royal Society in its journal Proceedings of the Royal Society A suggests that the Vikings made use of an optical compass as a navigation aid, using the light-splitting and polarisation-filtering properties of Iceland spar to find the location of the sun when it was not directly visible. While some evidence points to such use of calcite "sunstones" to find the sun's location, modern reproductions of Viking "sky-polarimetric" navigation have found these sun compasses to be highly inaccurate, and not usable in cloudy or foggy weather.
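As a rough illustration of the geometry that sky-polarimetric navigation relies on, the following sketch (in Python with NumPy; the function names are hypothetical and the model is a simplified idealisation, not the method of the study cited above) uses the single-scattering Rayleigh assumption: skylight at a sky point is polarised perpendicular to the plane containing the observer, that point, and the sun. The sun's direction is then perpendicular to every measured polarisation E-vector, so measurements at two different sky points determine it up to a sign.

import numpy as np

def sky_vec(az_deg, el_deg):
    """Unit vector for a sky direction: azimuth measured from north, elevation above the horizon."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.sin(az),   # east component
                     np.cos(el) * np.cos(az),   # north component
                     np.sin(el)])               # up component

def sun_from_polarisation(e1, e2):
    """In the ideal Rayleigh model, the E-vector at sky point p is parallel to
    cross(p, sun), hence perpendicular to the sun direction. The cross product
    of two measured E-vectors therefore recovers the sun direction up to sign;
    the sign is resolved here by keeping the sun above the horizon."""
    s = np.cross(e1, e2)
    s /= np.linalg.norm(s)
    return s if s[2] >= 0 else -s

# Synthetic check: place a sun, generate the ideal E-vectors at two sky points,
# and confirm that the sun's direction is recovered.
sun = sky_vec(120.0, 25.0)                         # azimuth 120 deg, elevation 25 deg
p1, p2 = sky_vec(0.0, 60.0), sky_vec(200.0, 45.0)  # two observed sky points
e1 = np.cross(p1, sun); e1 /= np.linalg.norm(e1)   # ideal E-vector at p1
e2 = np.cross(p2, sun); e2 /= np.linalg.norm(e2)   # ideal E-vector at p2
est = sun_from_polarisation(e1, e2)
print(np.degrees(np.arctan2(est[0], est[1])),      # ~120.0 (recovered azimuth)
      np.degrees(np.arcsin(est[2])))               # ~25.0 (recovered elevation)

In the real sky, clouds, haze, and multiple scattering disturb the Rayleigh pattern, which is consistent with the modern reproductions mentioned above finding such compasses inaccurate in overcast or foggy conditions.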
The archaeological find known as the Visby lenses, from the Swedish island of Gotland, may be components of a telescope; they appear to date from long before the invention of the telescope in the 17th century.
Religion
For most of the Viking Age, Scandinavian society generally followed Norse paganism. The traditions of this faith, including Valhalla and the Æsir, are sometimes cited as a factor in the creation of Viking warrior culture. However, Scandinavia was eventually Christianised towards the end of the Viking Age, with early centres of Christianity especially in Denmark.
Trade
Some of the most important trading ports founded by the Norse during the period include both existing and former cities such as Aarhus (Denmark), Ribe (Denmark), Hedeby (Germany), Vineta (Pomerania), Truso (Poland), Bjørgvin (Norway), Kaupang (Norway), Skiringssal (Norway), Birka (Sweden), Bordeaux (France), York (England), Dublin (Ireland) and Aldeigjuborg (Russia).
As Viking ships carried cargo and trade goods throughout the Baltic area and beyond, their active trading centres grew into thriving towns. One important centre of trade was at Hedeby. Close to the border with the Franks, it was effectively a crossroads between the cultures, until its eventual destruction by the Norwegians in an internecine dispute around 1050. York was the centre of the kingdom of Jórvík from 866, and discoveries there (e.g., a silk cap, a counterfeit of a coin from Samarkand, and a cowry shell from the Red Sea or the Persian Gulf) suggest that Scandinavian trade connections in the 10th century reached beyond Byzantium. However, those items could also have been Byzantine imports, and there is no reason to assume that the Varangians travelled significantly beyond Byzantium and the Caspian Sea.
Viking trade routes extended far beyond Scandinavia. As Scandinavian ships penetrated southward on the rivers of Eastern Europe in search of wealth, they encountered the nomad peoples of the steppes, leading to the beginning of a trading system that connected Russia and Scandinavia with the northern routes of the Eurasian Silk Road network. During the Middle Ages, the Volga trade route connected Northern Europe and Northwestern Russia with the Caspian Sea, via the Volga River. The international trade routes that enabled the passage of goods by ship from Scandinavia to the east were mentioned in early medieval literature as the Austrvegr ('eastern way') passing through the eastern Baltic region. Ships headed for the river Volga sailed through the Gulf of Finland, while those destined for Byzantium might take one of several routes through present-day north-eastern Poland or the Baltic lands.
The Vikings catered to the demand for slaves in the southern slave markets of the Orthodox Eastern Roman Empire and the Muslim Umayyad Caliphate, both of which desired slaves of a religion different from their own. The trade route from the Varangians to the Greeks connected Scandinavia, Kievan Rus', and the Eastern Roman Empire. The Rus' were of note as merchants who supplied honey, wax, and slaves to Constantinople. The Varangians served as mercenaries of the Russian princes, themselves descended from the Swedish chieftains who had founded and ruled Norse kingdoms in Eastern Europe, such as those at Kiev and Novgorod.
Culture
The Viking Age saw many of the earliest Scandinavian cultural developments. The traditional Icelandic Sagas, still often read today, are seen as characteristic literary works of Northern Europe. Old English works such as Beowulf, written in the tradition of Germanic heroic legend, show Viking influences; in Beowulf, this influence is seen in the language and setting of the poem. Another example of Viking Age cultural influence is the Old Norse influence in the English language; this influence is primarily a legacy of the various Viking invasions of England.
Women in Viking society
According to the archaeologist Liv Helga Dommasnes, writing in 1998, although archaeological sources pertinent to the study of women's roles in Scandinavia are more plentiful for the Viking Age than for other historical eras, few archaeologists have taken advantage of the opportunities they represent. She notes that the picture commonly presented of Viking society during the Viking Age is one of men engaged in their various occupations or positions, with scant mention of the women and children who were also part of it.
Given this basic flaw in the modern image of Viking society, she argues, we must consider how knowledge of the past is organised. Language is an essential part of this organisation: the concepts of modern languages are the tools used for understanding the realities of the past and for organising that knowledge, even though they are artefacts of our own time and perceived reality. Written sources, although scarce, appear to have been prioritised, even though it is understood that they are biased; almost all of them originate from other cultures, as literature from Viking societies is sparse. Because literature transmits meaning unambiguously in literary terms, it is fairly clear that this meaning is derived not from the ideology of Viking Age people, but rather from that of early northern Christianity. The medieval studies scholar Gro Steinsland argues that the transformation from heathen to Christian religion in Viking society was a "radical break" rather than a gradual transition, and Dommasnes says this should bear on how the transformation of late Viking Age traditions is considered before they were recorded in 12th- or 13th-century literature. By this reasoning, changing cultural values necessarily and greatly affected perceptions of women in particular and of gender roles in general.
Judith Jesch, professor of Viking Age studies at the University of Nottingham, suggests in her Women in the Viking Age that "If historians' emphasis on vikings as warriors made invisible the women in the background, then it is not always clear where the more visible female counterparts of the new urban vikings have come from." She says it is impossible to study the Vikings without a conception of the entire historical period they lived in, of the culture that produced them, and of other cultures they influenced. By her lights, not accounting for the doings of half the population would be ludicrous.
Published in 1991, Women in the Viking Age develops Jesch's thesis that the texts of the Icelandic sagas (Íslendingasögur) are mythological narratives preserved in the forms given them by the 13th-century antiquaries in Iceland who wrote them down. They cannot be read literally as the "authentic voice of Vikings", embodying as they do the preconceptions of those medieval Icelanders. These sagas, formerly believed to rest on genuine historical traditions, are now commonly regarded as imaginative creations. Rooted in oral tradition, they inspire little confidence as historical truth, but they express what they tell more directly than "the dry bones of archaeology" or the brief messages on runestones. The modern view of the Viking Age is thoroughly entwined with knowledge imparted by the sagas, and they are the main source of the widely held belief that women in the Viking Age were independent, assertive, and had agency.
Jesch describes runic inscriptions as connecting people of the present with women of the Viking Age much as archaeological evidence does, though they often tell more about women's lives than the material remains recovered in excavations. She treats these inscriptions as contemporary evidence originating within the culture, rather than from the incomplete or prejudiced viewpoint of the cultural outsider, and sees most of them as narratives in a narrow sense, supplying details that illuminate the overall picture derived from archaeological sources. They allow actual persons to be identified and reveal information about them such as their names, their family relationships, and sometimes facts about their individual lives.
Birgit Sawyer says her book The Viking-age Rune-stones aims to show that the corpus of runestones, considered as a whole, is a fruitful source for the religious, political, social, and economic history of Scandinavia in the 10th and 11th centuries. Drawing on her database, she finds that runestones cast light on settlement patterns, communications, kinship and naming customs, and the evolution of language and poetry. Systematic study of the material leads to her hypothesis that runic inscriptions mirrored inheritance customs covering not only land and goods but also rights, obligations, and rank in society. Although women in Viking society, like men, had tombstones raised over their graves, runestones were erected primarily to memorialise men: few women were commemorated by runestones (Sawyer puts the figure at only 7 per cent), and half of those jointly with men. Because a much larger percentage of richly furnished Iron Age graves belonged to women, the comparatively small number of runestones memorialising women indicates that the trend only partly reflects changes in burial customs and religion. Most of those honoured with runestones were men, and the emphasis fell on those who sponsored the monuments. Typical medieval grave monuments name only the deceased, but Viking Age runestones prioritise the sponsors first and foremost; they "are monuments to the living as much as to the dead".
Language
The 12th-century Icelandic Gray Goose Laws state that Swedes, Norwegians, Icelanders, and Danes spoke the same language, dǫnsk tunga ("Danish tongue"; speakers of Old East Norse would have said dansk tunga). Another term was norrœnt mál ("northern speech"). Old Norse has developed into the modern North Germanic languages: Icelandic, Faroese, Norwegian, Danish, Swedish, and other North Germanic varieties, of which Norwegian, Danish and Swedish retain considerable mutual intelligibility while Icelandic remains the closest to Old Norse. In present-day Iceland, schoolchildren are able to read the medieval Icelandic sagas in the original language (in editions with normalised spelling).
Written sources of Old Norse from the Viking Age are rare: there are rune stones, but the inscriptions are mostly short. A good deal of the vocabulary, morphology, and phonology of the runic inscriptions (little is known definitely about their syntax) "can be shown to develop regularly into Viking-Age, medieval and modern Scandinavian reflexes", says Michael Barnes.
According to David Arter, Old Norse functioned for a time during the Viking Age as a lingua franca, spoken not just in Scandinavia but also at the courts of Scandinavian rulers in Ireland, Scotland, England, France and Russia. The Norse origin of some present-day words is obvious, as in haar, the cold sea mist on the east coast of Scotland and England, which derives from the Old Norse hárr.
Old Norse influence on other languages
The long-term linguistic effects of the Viking settlements in England were threefold: over a thousand Old Norse words eventually became part of Standard English; numerous places in the east and north-east of England bear Danish names; and many English personal names are of Scandinavian origin. Scandinavian words that entered the English language included landing, score, beck, fellow, take, busting, and steersman. The vast majority of loan words did not appear in documents until the early 12th century; these included many modern words beginning with sk- sounds, such as skirt, sky, and skin; other words appearing in written sources at this time included again, awkward, birth, cake, dregs, fog, freckles, gasp, law, moss, neck, ransack, root, scowl, sister, seat, sly, smile, want, weak, and window, from Old Norse vindauga ("wind-eye"). Some of the words that came into use are among the most common in English, such as to go, to come, to sit, to listen, to eat, both, same, get and give. The system of personal pronouns was affected, with they, them and their replacing the earlier forms. Old Norse even influenced the verb to be: the replacement of sindon by are is almost certainly Scandinavian in origin, as is the third-person-singular ending -s in the present tense of verbs.
There are more than 1,500 Scandinavian place names in England, mainly in Yorkshire and Lincolnshire (within the former boundaries of the Danelaw): over 600 end in -by, the Scandinavian word for "village"—for example Grimsby, Naseby, and Whitby; many others end in -thorpe ("farm"), -thwaite ("clearing"), and -toft ("homestead").
According to an analysis of names ending in -son, the distribution of family names showing Scandinavian influence is still concentrated in the north and east, corresponding to areas of former Viking settlement. Early medieval records indicate that over 60% of personal names in Yorkshire and North Lincolnshire showed Scandinavian influence.
Genetics
A genetic study published at bioRxiv in July 2019, and in Nature in September 2020, examined the population genomics of the Viking Age. The remains of 442 ancient humans from across Europe and the North Atlantic were surveyed, spanning the Bronze Age to the early modern period. In terms of Y-DNA composition, Viking individuals were similar to present-day Scandinavians. The most common Y-DNA haplogroups in the study were I1 (95 samples), R1b (84 samples) and R1a, especially (but not exclusively) of the Scandinavian R1a-Z284 subclade (61 samples). The study found notable foreign gene flow into Scandinavia both in the years preceding the Viking Age and during the Viking Age itself. This gene flow entered through Denmark and eastern Sweden, from which it spread into the rest of Scandinavia. The Y-DNA of Viking Age samples suggests that it may partly represent descendants of Migration Period Germanic tribes returning to Scandinavia. The study also found that, despite close cultural similarities, there were distinct genetic differences between regional populations in the Viking Age, differences which have persisted into modern times. Inland areas were found to be more genetically homogeneous than coastal areas and islands such as Öland and Gotland, which were probably important trade settlements. Consistent with historical records, the study found evidence of a major influx of Danish Viking ancestry into England, a Swedish influx into Estonia and Finland, and a Norwegian influx into Ireland, Iceland and Greenland during the Viking Age. The Vikings left a profound genetic imprint in the areas they settled, persisting into modern times: the contemporary population of the United Kingdom, for example, carries up to 6% Viking DNA. The study also showed that some local people of Scotland were buried as Vikings and may have taken on Viking identities.
Margaryan et al. 2020 examined the skeletal remains of 42 individuals from the Salme ship burials in Estonia. The skeletal remains belonged to warriors killed in battle who were later buried together with numerous valuable weapons and armour. DNA testing and isotope analysis revealed that the men came from central Sweden.
Margaryan et al. 2020 also examined an elite warrior burial from Bodzia (Poland) dated to 1010–1020. The cemetery in Bodzia is exceptional for its Scandinavian and Kievan Rus' links. The Bodzia man (sample VK157, or burial E864/I) was not a simple warrior of the princely retinue but apparently belonged to a princely family himself. His burial is the richest in the whole cemetery; moreover, strontium analysis of his tooth enamel shows he was not local. It is assumed that he came to Poland with the Prince of Kiev, Sviatopolk the Accursed, and met a violent death in combat. This would correspond to the events of 1018, when Sviatopolk himself disappeared after retreating from Kiev to Poland. It cannot even be excluded that the Bodzia man was Sviatopolk himself, as the genealogy of the Rurikids in this period is extremely uncertain and the birth dates of many princes of the dynasty are only approximate. The Bodzia man carried haplogroup I1-S2077 and had both Scandinavian ancestry and Russian admixture.
The genetic data from these areas affirmed conclusions previously drawn from historical and archaeological evidence.
Settlements outside Scandinavia
Atlantic
Faroe Islands
Iceland
Greenland
Baltic
Seeburg (Latvia)
Polange (Lithuania)
British Isles
England
Danelaw
Jórvík (York)
Cumbria
Ireland
Arklow
Dyflin (Dublin)
Hlymrekr (Limerick)
Veðrafjǫrðr (Waterford)
Víkingr-ló (Wicklow)
Veisafjǫrðr (Wexford)
Isle of Man
Mann
Scotland
Caithness
Galloway
Kintyre
Norðreyjar (Orkney and Shetland)
Ross
Suðreyjar (Hebrides)
Sutherland
Eastern Europe
Garðaríki
Western Europe
Normandy
North America
Norse colonisation of the Americas
L'Anse aux Meadows (and possibly a larger area called Vinland)
Notes
Cited sources
Further reading
Background
Graham-Campbell, J. (2001), The Viking World, London.
General surveys
Ahola, Joonas & Frog with Clive Tolley (eds.) (2014). Fibula, Fabula, Fact – The Viking Age in Finland. Studia Fennica Historica 18. Helsinki: Finnish Literature Society.
Anker, P. (1970). The Art of Scandinavia, Volume I, London and New York
Fuglesang, S.H. (1996). "Viking Art", in Turner, J. (ed.), The Grove Dictionary of Art, Volume 32, London and New York, pp. 514–527, 531–532.
Graham-Campbell, J. (1980). Viking Artefacts: A Select Catalogue, British Museum Publications: London.
Graham-Campbell, James (2013). Viking Art, Thames & Hudson.
Roesdahl, E. and Wilson, D.M. (eds) (1992). From Viking to Crusader: Scandinavia and Europe 800–1200, Copenhagen and New York. [exhibition catalogue].
Williams, G., Pentz, P. and Wemhoff, M. (eds), Vikings: Life and Legend, British Museum Press: London, 2014. [exhibition catalogue].
Wilson, D.M. & Klindt-Jensen, O. (1980). Viking Art, 2nd ed., George Allen and Unwin.
Carey, Brian Todd. "Technical marvels, Viking longships sailed seas and rivers, or served as floating battlefields", Military History 19, no. 6 (2003): 70–72.
Downham, Clare. Viking Kings of Britain and Ireland: The Dynasty of Ívarr to A.D. 1014. Edinburgh: Dunedin Academic Press, 2007
Hudson, Benjamin. Viking Pirates and Christian Princes: Dynasty, Religion, and Empire in the North Atlantic. Oxford: Oxford University Press, 2005.
Logan, F. Donald. The Vikings in History (London: Hutchison & Co., 1983).
Maier, Bernhard. The Celts: A History from Earliest Times to the Present. Notre Dame, Indiana: University of Notre Dame Press, 2003.
External links
Vikings – BBC History (collection of short articles under the headings Overview, Raiders and Settlers, Viking Culture, Evidence)
Vikings: The North Atlantic Saga – Smithsonian website for travelling exhibition, 2000–2003
The Danish Viking Age
Old Norse literature from «Kulturformidlingen norrøne tekster og kvad» Norway
ScienceNordic's article on "How Vikings navigated the world"
History of Scandinavia
Iron Age cultures of Europe
Archaeological cultures in Sweden
Archaeological cultures in Denmark
Archaeological cultures in Norway
Archaeological cultures in Estonia
Archaeological cultures in England
Archaeological cultures in Scotland
Archaeological cultures in Ireland
Archaeological cultures in France
Middle Ages
Historical eras | 0.770118 | 0.999399 | 0.769655 |
Neo-medievalism | Neo-medievalism (or neomedievalism, new medievalism) is a term with a long history that has acquired specific technical senses in two branches of scholarship. In political theory about modern international relations, where the term is originally associated with Hedley Bull, it sees the political order of a globalized world as analogous to high-medieval Europe, where neither states nor the Church, nor other territorial powers, exercised full sovereignty, but instead participated in complex, overlapping and incomplete sovereignties.
In literary theory regarding the use and abuse of texts and tropes from the Middle Ages in postmodernity, the term neomedieval was popularized by the Italian medievalist Umberto Eco in his 1986 essay "Dreaming of the Middle Ages".
Political theory
The idea of neomedievalism in political theory was first discussed in 1977 by theorist Hedley Bull in The Anarchical Society: A Study of Order in World Politics to describe the erosion of state sovereignty in the contemporary globalized world:

It is also conceivable that sovereign states might disappear and be replaced not by a world government but by a modern and secular equivalent of the kind of universal political organisation that existed in Western Christendom in the Middle Ages. In that system no ruler or state was sovereign in the sense of being supreme over a given territory and a given segment of the Christian population; each had to share authority with vassals beneath, and with the Pope and (in Germany and Italy) the Holy Roman Emperor above. The universal political order of Western Christendom represents an alternative to the system of states which does not yet embody universal government.

Thus Bull suggested society might move towards "a new mediaevalism" or a "neo-mediaeval form of universal political order", in which individual notions of rights and a growing sense of a "world common good" were undermining national sovereignty. He proposed that such a system might help "avoid the classic dangers of the system of sovereign states by a structure of overlapping structures and cross-cutting loyalties that hold all peoples together in a universal society while at the same time avoiding the concentration inherent in a world government", though "if it were anything like the precedent of Western Christendom, it would contain more ubiquitous and continuous violence and insecurity than does the modern states system".
In this reading, globalization has resulted in an international system which resembles the medieval one, where political authority was exercised by a range of non-territorial and overlapping agents, such as religious bodies, principalities, empires and city-states, instead of by a single political authority in the form of a state which has complete sovereignty over its territory. Comparable processes characterising Bull's "new medievalism" include the increasing powers held by regional organisations such as the European Union, as well as the spread of sub-national and devolved governments, such as those of Scotland and Catalonia. These challenge the exclusive authority of the state. Private military companies, multinational corporations and the resurgence of worldwide religious movements (e.g. political Islam) similarly indicate a reduction in the role of the state and a decentralisation of power and authority.
Stephen J. Kobrin in 1998 added the forces of the digital world economy to the picture of neomedievalism. In an article entitled "Back to the Future: Neomedievalism and the Postmodern Digital World Economy" in the Journal of International Affairs, he argued that the sovereign state as we know it, defined within certain territorial borders, is about to change profoundly, if not to wither away, due in part to the digital world economy created by the Internet, suggesting that cyberspace is a trans-territorial domain operating outside the jurisdiction of national law.
Anthony Clark Arend also argued in his 1999 book Legal Rules and International Society that the international system is moving toward a "neo-medieval" system. He claimed that the trends Bull noted in 1977 had become even more pronounced by the end of the twentieth century, and that the emergence of a "neo-medieval" system would have profound implications for the creation and operation of international law.
Although Bull originally envisioned neomedievalism as a positive trend, it has its critics. Bruce Holsinger in Neomedievalism, Neoconservatism, and the War on Terror argues that neoconservatives "have exploited neomedievalism's conceptual slipperiness for their own tactical ends." Similarly, Philip G. Cerny's "Neomedievalism, Civil War and the New Security Dilemma" (1998) also sees neomedievalism as a negative development, claiming that the forces of globalization increasingly undermine nation-states and interstate forms of governance "by cross-cutting linkages among different economic sectors and social bonds", and calling globalization a "durable disorder" which eventually leads to new security dilemmas with analogies in the Middle Ages. Cerny identifies six characteristics of a neomedieval world that contribute to this disorder: multiple competing institutions; a lack of exogenous territorializing pressures at both the sub-national and international levels; uneven consolidation of new spaces, cleavages, conflicts and inequalities; fragmented loyalties and identities; extensive entrenchment of property rights; and the spread of "grey zones" outside the law, as well as the black economy.
Medieval studies
An early use of the term neo-medievalism in a sense like Umberto Eco's was in Isaiah Berlin's 1953 "The Hedgehog and the Fox": There is no kinship between him [Joseph de Maistre] and those who really did believe in the possibility of some kind of return – neo-medievalists from Wackenroder and Görres and Cobbett to G. K. Chesterton, and Slavophils and Distributists and Pre-Raphaelites and other nostalgic romantics; for he believed, as Tolstoy also did, in the exact opposite: in the "inexorable" power of the present moment: in our inability to do away with the sum of conditions which cumulatively determine our basic categories, an order which we can never fully describe or, otherwise than by some immediate awareness of it, come to know. Then, in 1986, Umberto Eco said "we are at present witnessing, both in Europe and America, a period of renewed interest in the Middle Ages, with a curious oscillation between fantastic neomedievalism and responsible philological examination". More recently, the term has been used by medieval historians to describe the intersection of popular fantasy and medieval history in the post-modern study of the Middle Ages.
The widespread interest in medieval themes in popular culture, especially computer games such as MMORPGs, films and television, neo-medieval music, and popular literature, has been called neomedieval. Critics have discussed why medieval themes continue to fascinate audiences in a modern, heavily technological world. A possible explanation is the need for a romanticized historical narrative to clarify the confusing panorama of current political and cultural events.
Intersection of neomedievalism in political theory and medieval studies
Some commentators have used the terminological overlap between Hedley Bull's political theory of 'neomedievalism' and Umberto Eco's postmodernist theory of 'neomedievalism' to discuss how cultural discourses about the Middle Ages are used to political ends in the changing international order of the twenty-first century. A key proponent of this argument was Bruce Holsinger, who studied the use of orientalist and medievalist language in the discourse of the post-9/11 'war on terror', arguing that American neoconservatives had harnessed medievalism to win popular support for foreign policy and military actions that undermined state sovereignty and the international rule of law.
Working in Holsinger's wake, others have argued that neomedievalist popular culture, such as the video game The Elder Scrolls V: Skyrim, represents and so in turn helps to normalise a neomedievalist political order, and that states other than the US, for example Iceland, have also used medievalism as a source of soft power to help secure their place in the shifting post-9/11 world order.
Studies
Defining Neomedievalism(s) I, ed. by K. Fugelso, Studies in Medievalism, 19 (Cambridge: Brewer, 2010)
Defining Neomedievalism(s) II, ed. by K. Fugelso, Studies in Medievalism, 20 (Cambridge: Brewer, 2011)
Neo-Medievalism in the Media: Essays on Film, Television, and Electronic Games, ed. by Carol L. Robinson (Mellen, 2012)
Comparative Neomedievalisms, ed. by Daniel Lukes, special issue of Postmedieval, 5.1 (Spring 2014)
Neomedievalism, Popular Culture, and the Academy: From Tolkien to Game of Thrones, by KellyAnn Fitzpatrick (Cambridge: Brewer, 2019)
U.S.-China Rivalry in a Neomedieval World: Security in an Age of Weakening States, by Timothy R. Heath, Weilong Kong and Alexis Dale-Huang (RAND Corporation, 2023)
See also
Westphalian sovereignty
Neo-feudalism
Neoliberalism
English school of international relations theory
Refeudalization
Leo Strauss
Notes
External links
NeoMedievalism, a collection of links and a general evaluation
Pulling Back from Neo-Medievalism, a discussion of neo-medievalism in relation to the Hungarian Status Law
NeoMedievalism, academic look at the study of medievalism through a literary criticism lens
Why history matters - and why medieval history also matters
Sutch, P and J Elias, International Relations: The Basics, Routledge, New York, 2007, pp. 102–104
Towards a new Middle Ages? by Roberto Rotondo
Legal Rules and International Society by Anthony Clark Arend
Political theories
Philosophical theories
Middle Ages in popular culture
International relations theory
Sovereignty
Power (social and political) concepts | 0.781456 | 0.98489 | 0.769648 |
Matrilineality | Matrilineality is the tracing of kinship through the female line. It may also correlate with a social system in which each person is identified with their matriline, their mother's lineage, and which can involve the inheritance of property and titles. A matriline is a line of descent from a female ancestor to a descendant of either gender in which the individuals in all intervening generations are mothers. In a matrilineal descent system, an individual is considered to belong to the same descent group as their mother. This ancient matrilineal descent pattern is in contrast to the currently more popular pattern of patrilineal descent from which a family name is usually derived. The matriline of historical nobility was also called their enatic or uterine ancestry, corresponding to the patrilineal or "agnatic" ancestry.
Early human kinship
In the late 19th century, almost all prehistorians and anthropologists believed, following Lewis H. Morgan's influential book Ancient Society, that early human kinship everywhere was matrilineal. This idea was taken up by Friedrich Engels in The Origin of the Family, Private Property and the State. The Morgan-Engels thesis that humanity's earliest domestic institution was not the family but the matrilineal clan soon became incorporated into communist orthodoxy. In reaction, most 20th-century social anthropologists considered the theory of matrilineal priority untenable, although during the 1970s and 1980s a range of feminist scholars attempted to revive it.
In recent years, evolutionary biologists, geneticists and palaeoanthropologists have been reassessing the issues, many citing genetic and other evidence that early human kinship may have been matrilineal after all. One crucial piece of indirect evidence is genetic data suggesting that over thousands of years, women among sub-Saharan African hunter-gatherers chose to reside postmaritally not with their husbands' families but with their own mothers and other natal kin. Another line of argument is that when sisters and their mothers help each other with childcare, the descent line tends to be matrilineal rather than patrilineal; biological anthropologists are now widely agreed that cooperative childcare was a development crucial in making possible the evolution of the unusually large human brain and characteristically human psychology. Others, however, dispute claims for the universality of either matrilocality or patrilocality, pointing out that hunter-gatherer societies practice a flexible philopatry or multilocality, which in turn makes for a more egalitarian society, since both men and women have the right to choose with whom to live. Some data suggest that pastoralists and farmers gravitate strongly towards patrilocality, making patrilocality common among non-Pygmy peoples, yet among some hunter-gatherers patrilocality is less common than among farmers. Among the Aka pygmies, for example (including the Biaka and Benzele), a young couple usually settles in the husband's camp after the birth of their first child, but the husband may instead remain in the wife's community, where a brother or sister may join him; this can happen in societies where bride service is practiced, as well as in others. On this basis, some researchers argue that kinship and residence in hunter-gatherer societies are complex and multifaceted: on re-examining earlier data (which were not very reliable), they found that about 40% of the groups were bilocal, 22.9% matrilocal and 25% patrilocal. A number of scholars accordingly advocate multilocality, rejecting models of exclusive matrilocality (matrilineality) or patrilocality (patrilineality).
Matrilineal surname
Matrilineal surnames are names transmitted from mother to daughter, in contrast to the more familiar patrilineal surnames transmitted from father to son, the pattern most common among family names today. For clarity and for brevity, the scientific terms patrilineal surname and matrilineal surname are usually abbreviated as patriname and matriname.
Cultural patterns
There appears to be some evidence for the presence of matrilineality in pre-Islamic Arabia among a very limited number of the Arabian peoples, first of all among the Amorites of Yemen and among some strata of the Nabateans in northern Arabia.
A modern example from South Africa is the order of succession to the position of the Rain Queen in a culture of matrilineal primogeniture: not only is dynastic descent reckoned through the female line, but only females are eligible to inherit.
In some traditional societies and cultures, membership in their groups was – and, in the following list, still is if shown in italics – inherited matrilineally. Examples include the Cherokee, Choctaw, Gitksan, Haida, Hopi, Iroquois, Lenape, Navajo and Tlingit of North America; the Cabécar and Bribri of Costa Rica; the Naso and Kuna people of Panama; the Kogi, Wayuu and Carib of South America; the Minangkabau people of West Sumatra, Indonesia and Negeri Sembilan, Malaysia; the Trobrianders, Dobu and Nagovisi of Melanesia; the Nairs, some Thiyyas & Muslims of Kerala and the Mogaveeras, Billavas & the Bunts of Karnataka in south India; the Khasi, Jaintia and Garo of Meghalaya in northeast India and Bangladesh; the Ngalops and Sharchops of Bhutan; the Mosuo of China; the Kayah of Southeast Asia; the Picts of Scotland; the Basques of Spain and France; the Ainu of Japan; the Akan, including the Ashanti, Bono, Akwamu and Fante of Ghana; most groups across the so-called "matrilineal belt" of south-central Africa; the Nubians of southern Egypt and Sudan and the Tuareg of west and north Africa; and the Serer of Senegal, The Gambia and Mauritania.
Clan names vs. surnames
Most of the example cultures in this article are based on (matrilineal) clans. Any clan might possibly contain from one to several or many descent groups or family groups – i.e., any matrilineal clan might be descended from one or several or many unrelated female ancestors. Also, each such descent group might have its own family name or surname, as one possible cultural pattern. The following two example cultures each follow a different pattern, however:
Example 1. Members of the (matrilineal) Minangkabau clan culture do not even have a surname or family name; see this culture's own section below. In contrast, members do have a clan name, which is important in their lives although not included in their name. Instead, one's name is simply one's given name.
Example 2. Members of the (matrilineal) Akan clan culture (see its own section below) also do not have matrilineal surnames, and likewise their important clan name is not included in their name. However, members' names commonly include second names which are called surnames but which are not routinely passed down from either father or mother to all their children as a family name.
Note well that if a culture did include one's clan name in one's name and routinely handed it down to all children in the descent group, then it would automatically be the family name or surname for one's descent group (as well as for all other descent groups in one's clan).
Care of children
While a mother normally takes care of her own children in all cultures, in some matrilineal cultures an "uncle-father" will take care of his nieces and nephews instead: in other words, social fathers here are uncles. There is no necessary connection between the role of father and that of genitor. In many such matrilineal cultures, especially where residence is also matrilocal, a man will exercise guardianship rights not over the children he fathers but over his sisters' children, who are viewed as "his own flesh". These children's biological father – unlike their uncle, who is their mother's brother and thus their caregiver – is in some sense a "stranger" to them, even when affectionate and emotionally close.
According to Steven Pinker, citing Kristen Hawkes, among foraging groups matrilocal societies are less likely to commit female infanticide than are patrilocal societies.
Matrilineality in specific ethnic groups
Africa
Akan
Some 20 million Akan live in Africa, particularly in Ghana and Ivory Coast. (See as well their subgroups, the Ashanti, also called Asante, Akyem, Bono, Fante, Akwamu.) Many but not all of the Akan still (2001) practice their traditional matrilineal customs, living in their traditional extended family households, as follows. The traditional Akan economic, political and social organization is based on maternal lineages, which are the basis of inheritance and succession. A lineage is defined as all those related by matrilineal descent from a particular ancestress. Several lineages are grouped into a political unit headed by a chief and a council of elders, each of whom is the elected head of a lineage – which itself may include multiple extended-family households. Public offices are thus vested in the lineage, as are land tenure and other lineage property. In other words, lineage property is inherited only by matrilineal kin.
"The principles governing inheritance stress sex, generation and age – that is to say, men come before women and seniors before juniors." When a woman's brothers are available, a consideration of generational seniority stipulates that the line of brothers be exhausted before the right to inherit lineage property passes down to the next senior genealogical generation of sisters' sons. Finally, "it is when all possible male heirs have been exhausted that the females" may inherit.
Each lineage controls the lineage land farmed by its members, functions together in the veneration of its ancestors, supervises marriages of its members, and settles internal disputes among its members.
The political units above are likewise grouped into eight larger groups called abusua (similar to clans), named Aduana, Agona, Asakyiri, Asenie, Asona, Bretuo, Ekuona and Oyoko. The members of each abusua are united by their belief that they are all descended from the same ancient ancestress. Marriage between members of the same abusua is forbidden. One inherits or is a lifelong member of the lineage, the political unit, and the abusua of one's mother, regardless of one's gender and/or marriage. Note that members and their spouses thus belong to different abusuas, mother and children living and working in one household and their husband/father living and working in a different household.
According to this source of further information about the Akan, "A man is strongly related to his mother's brother (wɔfa) but only weakly related to his father's brother. This must be viewed in the context of a polygamous society in which the mother/child bond is likely to be much stronger than the father/child bond. As a result, in inheritance, a man's nephew (sister's son) will have priority over his own son. Uncle-nephew relationships therefore assume a dominant position."
Certain other aspects of the Akan culture are determined patrilineally rather than matrilineally. There are 12 patrilineal Ntoro (which means spirit) groups, and everyone belongs to their father's Ntoro group but not to his (matrilineal) family lineage and abusua. Each patrilineal Ntoro group has its own surnames, taboos, ritual purifications, and etiquette.
A recent (2001) book provides this update on the Akan: Some families are changing from the above abusua structure to the nuclear family. Housing, childcare, education, daily work, and elder care etc. are then handled by that individual family rather than by the abusua or clan, especially in the city. The above taboo on marriage within one's abusua is sometimes ignored, but "clan membership" is still important, with many people still living in the abusua framework presented above.
Guanches
The Berber inhabitants of Gran Canaria island had developed a matrilineal society by the time the Canary Islands and their people, called Guanches, were conquered by the Spanish.
Serer
The Serer people of Senegal, the Gambia and Mauritania are patrilineal (simanGol in Serer language) as well as matrilineal (tim). There are several Serer matriclans and matriarchs. Some of these matriarchs include Fatim Beye (1335) and Ndoye Demba (1367) – matriarchs of the Joos matriclan which also became a dynasty in Waalo (Senegal). Some matriclans or maternal clans form part of Serer medieval and dynastic history, such as the Guelowars. The most revered clans tend to be rather ancient and form part of Serer ancient history. These proto-Serer clans hold great significance in Serer religion and mythology. Some of these proto-Serer matriclans include the Cegandum and Kagaw, whose historical account is enshrined in Serer religion, mythology and traditions.
In Serer culture, inheritance is both matrilineal and patrilineal. It all depends on the asset being inherited – i.e. whether the asset is a paternal asset, requiring paternal inheritance (kucarla), or a maternal asset, requiring maternal inheritance (den yaay or ƭeen yaay). The actual handling of these maternal assets (such as jewelry, land, livestock, equipment or furniture) is discussed in the subsection Role of the Tokoor of one of the above-listed main articles.
Tuareg
The Tuareg (Arabic: طوارق, sometimes spelled Touareg in French or Twareg in English) are a large Berber ethnic confederation found across several nations of north Africa, including Niger, Mali and Algeria. The Tuareg are clan-based and are (still, in 2007) "largely matrilineal". The Tuareg are Muslim, but their faith is mixed with a "heavy dose" of pre-existing beliefs, including matrilineality.
Tuareg women enjoy high status within their society, compared with their Arab counterparts and with other Berber tribes: Tuareg social status is transmitted through women, and residence is often matrilocal. Most women could read and write, while most men were illiterate, concerning themselves mainly with herding livestock and other male activities. Livestock and other movable property were owned by the women, whereas personal property was owned and inherited regardless of gender. In contrast to most other Muslim cultural groups, men wear veils but women do not. This custom is discussed in more detail in the clothing section of the Tuareg article, which suggests the veil may be protection against the blowing sand of the Sahara desert.
Americas
Bororo
The Bororo people of Brazil and Bolivia live in matrilineal clans, with husbands moving to live with their wives' extended families.
Bribri
The clan system of the Bribri people of Costa Rica and Panama is matrilineal; that is, a child's clan is determined by the clan his or her mother belongs to. Only women can inherit land.
Cabécar
The social organization of the Cabécar people of Costa Rica is predicated on matrilineal clans in which the mother is the head of household. Each matrilineal clan controls marriage possibilities, regulates land tenure, and determines property inheritance for its members.
Guna
In the traditional culture of the Guna people of Panama and Colombia, families are matrilineal and matrilocal, with the groom moving to become part of the bride's family. The groom also takes the last name of the bride.
Hopi
The Hopi (in what is now the Hopi Reservation in northeastern Arizona), according to Alice Schlegel, had as its "gender ideology ... one of female superiority, and it operated within a social actuality of sexual equality." According to LeBow (based on Schlegel's work), in the Hopi, "gender roles ... are egalitarian .... [and] [n]either sex is inferior." LeBow concluded that Hopi women "participate fully in ... political decision-making." According to Schlegel, "the Hopi no longer live as they are described here" and "the attitude of female superiority is fading". Schlegel said the Hopi "were and still are matrilineal" and "the household ... was matrilocal".
Schlegel explains this female superiority by the Hopi belief in "life as the highest good ... [with] the female principle ... activated in women and in Mother Earth ... as its source", and by the fact that the Hopi "were not in a state of continual war with equally matched neighbors" and "had no standing army", so that "the Hopi lacked the spur to masculine superiority". Within this, women were central to the institutions of clan and household and predominated "within the economic and social systems (in contrast to male predominance within the political and ceremonial systems)": the Clan Mother, for example, was empowered to overturn land distribution by men if she felt it was unfair, since there was no "countervailing ... strongly centralized, male-centered political structure".
Iroquois
The Iroquois Confederacy or League, combining five to six Native American Haudenosaunee nations or tribes before the U.S. became a nation, operated under The Great Binding Law of Peace, a constitution by which women retained matrilineal rights and participated in the League's political decision-making, including deciding whether to proceed to war, through what may have been a matriarchy or "gyneocracy". The dates of this constitution's operation are unknown: the League was formed in approximately 1000–1450, but the constitution was oral until written down in about 1880. The League still exists.
Other Iroquoian-speaking peoples such as the Wyandot and the Meherrin, that were never part of the Iroquois League, nevertheless have traditionally possessed a matrilineal family structure.
Kogi
The Kogi people of northern Colombia practice bilateral inheritance, with certain rights, names or associations descending matrilineally.
Lenape
Occupied for 10,000 years by Native Americans, the land that is present-day New Jersey was overseen by clans of the Lenape, who farmed, fished, and hunted upon it. The pattern of their culture was that of a matrilineal agricultural and mobile hunting society that was sustained with fixed, but not permanent, settlements in their matrilineal clan territories. Leadership by men was inherited through the maternal line, and the women elders held the power to remove leaders of whom they disapproved.
Villages were established and relocated as the clans farmed new sections of the land when soil fertility lessened and when they moved among their fishing and hunting grounds by seasons. The area was claimed as a part of the Dutch New Netherland province dating from 1614, where active trading in furs took advantage of the natural pass west, but the Lenape prevented permanent settlement beyond what is now Jersey City.
"Early Europeans who first wrote about these Indians found matrilineal social organization to be unfamiliar and perplexing. As a result, the early records are full of 'clues' about early Lenape society, but were usually written by observers who did not fully understand what they were seeing."
Mandan
The Mandan people of the northern Great Plains of the United States historically lived in matrilineal extended family lodges.
Naso
The Naso (Teribe or Térraba) people of Panama and Costa Rica describe themselves as a matriarchal community, although their monarchy has traditionally been inherited in the male line.
Navajo
The Navajo people of the American southwest are a matrilineal society in which kinship, children, livestock and family histories are passed down through the female line. In marriage the groom moved to live with the bride's family, and children belonged to their mother's clan, living in the hogans of the female's family.
Tanana Athabaskan
The Tanana Athabaskan people, the original inhabitants of the Tanana River basin in Alaska and Canada, traditionally lived in matrilineal semi-nomadic bands.
Tsenacommacah (Powhatan Confederacy)
The Powhatan and other tribes of the Tsenacommacah, also known as the Powhatan Confederacy, practiced a version of male-preference matrilineal seniority, favoring brothers over sisters in the current generation (but allowing sisters to inherit if no brothers remained), and passing to the next generation through the eldest female line. In A Map of Virginia, John Smith of Jamestown explains:

His [Chief Powhatan's] kingdome descendeth not to his sonnes nor children: but first to his brethren, whereof he hath 3 namely Opitchapan, Opechancanough, and Catataugh; and after their decease to his sisters. First to the eldest sister, then to the rest: and after them to the heires male and female of the eldest sister; but never to the heires of the males.
Upper Kuskokwim
The Upper Kuskokwim people are the original inhabitants of the Upper Kuskokwim River basin. They speak an Athabaskan language more closely related to Tanana than to the language of the Lower Kuskokwim River basin. They were traditionally hunter-gatherers who lived in matrilineal semi-nomadic bands.
Wayuu
The Wayuu people of Colombia and Venezuela live in matrilineal clans, with paternal relationships in the background.
Asia
China
Originally, Chinese surnames were derived matrilineally, although by the time of the Shang dynasty (1600 to 1046 BCE) they had become patrilineal.
Archaeological data supports the theory that during the Neolithic period (7000 to 2000 BCE) in China, Chinese matrilineal clans evolved into the usual patrilineal families by passing through a transitional patrilineal clan phase. Evidence includes some "richly furnished" tombs for young women in the early Neolithic Yangshao culture, whose multiple other collective burials imply a matrilineal clan culture. Toward the late Neolithic period, when burials were apparently of couples, "a reflection of patriarchy", an increasing elaboration of presumed chiefs' burials is reported.
Relatively isolated ethnic minorities such as the Mosuo (Na) in southwestern China are highly matrilineal.
India
Of communities recognized in the national Constitution as Scheduled Tribes, "some ... [are] matriarchal and matrilineal" "and thus have been known to be more egalitarian." Several Hindu communities in South India practiced matrilineality, especially the Nair (or Nayar) and Tiyyas in the state of Kerala, and the Bunts and Billava in the states of Karnataka. The system of inheritance was known as Marumakkathayam in the Nair community or Aliyasantana in the Bunt and the Billava community, and both communities were subdivided into clans. This system was exceptional in the sense that it was one of the few traditional systems in western historical records of India that gave women some liberty and the right to property.
In the matrilineal system, the family lived together in a tharavadu, which was composed of a mother, her brothers and younger sisters, and her children. The oldest male member, known as the karanavar, was the head of the household and managed the family estate. Lineage was traced through the mother, and the children belonged to the mother's family. In earlier days, surnames were taken from the maternal side. All family property was jointly owned. In the event of a partition, the shares of the children were combined with that of the mother. The karanavar's property was inherited by his sisters' sons rather than his own sons. (For further information see the articles Nair and ambalavasi and Bunts and Billava.) Amitav Ghosh has stated that, although there were numerous other matrilineal succession systems in communities of the south Indian coast, the Nairs "achieved an unparalleled eminence in the anthropological literature on matrilineality".
In the northeast Indian state of Meghalaya, the Khasi, Garo and Jaintia peoples have a long tradition of a largely matrilineal system in which the youngest daughter inherits the wealth of the parents and takes over their care.
Indonesia
In the Minangkabau matrilineal clan culture in Indonesia, a person's clan name is important in their marriage and their other cultural-related events. Two totally unrelated people who share the same clan name can never be married because they are considered to be from the same clan mother (unless they come from distant villages). Likewise, when Minangs meet total strangers who share the same clan name, anywhere in Indonesia, they could theoretically expect to feel that they are distant relatives. Minang people do not have a family name or surname; neither is one's important clan name included in one's name; instead one's given name is the only name one has.
The Minangs are one of the world's largest matrilineal societies/cultures/ethnic groups, with a population of 4 million in their home province West Sumatra in Indonesia and about 4 million elsewhere, mostly in Indonesia. The Minang people are well known within their country for their tradition of matrilineality and for their "dedication to Islam" – despite Islam being "supposedly patrilineal". This well-known accommodation, between their traditional complex of customs, called adat, and their religion, was actually worked out to help end the Minangkabau 1821–37 Padri War.
The Minangkabau are a prime example of a matrilineal culture with female inheritance. Their Islamic religious background of complementarianism places a greater number of men than women in positions of religious and political power, yet inheritance and proprietorship pass from mother to daughter.
Besides the Minangkabau, several other ethnic groups in Indonesia are also matrilineal and have a similar culture: the Suku Melayu Bebilang, the Suku Kubu and the Kerinci people. The Suku Melayu Bebilang live in Kota Teluk Kuantan, Kabupaten Kuantan Singingi (also known as Kuansing), Riau. The Suku Kubu live in Jambi and South Sumatra and number around 200,000 people. The Kerinci people mostly live in Kabupaten Kerinci, Jambi, and number around 300,000 people.
Kurds
Matrilineality was occasionally practiced by mainstream Sorani, Zaza, Feyli, Gorani, and Alevi Kurds, though the practice was much rarer among non-Alevi Kurmanji-speaking Kurds.
The Mangur clan, part of the Mokri tribal confederation culturally and of the Bolbas Federation politically, is an enatic clan, meaning members inherit only their mother's last name and are considered part of the mother's family. The entire Mokri tribe may also have practiced this form of enatic descent before the collapse of their emirate and the imposition of direct rule by the Iranian or Ottoman state; alternatively, the tradition may have begun because of depopulation in the area due to raids.
Malaysia
A culture similar to lareh bodi caniago, practiced by the Minangkabau, is the basis for adat perpatih practices in the state of Negeri Sembilan and parts of Malacca as a product of West Sumatran migration into the Malay Peninsula in the 15th century.
Sri Lanka
Matrilineality among the Muslims and Tamils in the Eastern Province of Sri Lanka arrived from Kerala, India via Muslim traders before 1200 CE. Matrilineality here includes kinship and social organization, inheritance and property rights. For example, "the mother's dowry property and/or house is passed on to the eldest daughter." The Sinhalese people are the third ethnic group in eastern Sri Lanka, and have a kinship system which is "intermediate" between that of matrilineality and that of patrilineality, along with "bilateral inheritance", intermediate between matrilineal and patrilineal inheritance. While the first two groups speak the Tamil language, the third group speaks the Sinhala language. The Tamils largely identify with Hinduism, the Sinhalese being primarily Buddhist. The three groups are about equal in population size.
Patriarchal social structures apply to all of Sri Lanka, but in the Eastern Province are mixed with the matrilineal features summarized in the paragraph above and described more completely in the following subsection:
According to Kanchana N. Ruwanpura, Eastern Sri Lanka "is highly regarded even among" feminist economists "for the relatively favourable position of its women, reflected" in women's equal achievements in Human Development Indices "(HDIs) as well as matrilineal and" bilateral "inheritance patterns and property rights".
She also conversely argues that "feminist economists need to be cautious in applauding Sri Lanka's gender-based achievements and/or matrilineal communities", because these matrilineal communities coexist with "patriarchal structures and ideologies" and the two "can be strange but ultimately compatible bedfellows", as follows:
She "positions Sri Lankan women within gradations of patriarchy by beginning with a brief overview of the main religious traditions," Buddhism, Hinduism, and Islam, "and the ways in which patriarchal interests are promoted through religious practice" in Eastern Sri Lanka (but without being as repressive as classical patriarchy). Thus, "feminists have claimed that Sri Lankan women are relatively well positioned in the" South Asian region, despite "patriarchal institutional laws that ... are likely to work against the interests of women," which is a "co-operative conflict" between women and these laws. (Clearly "female-heads have no legal recourse" from these laws which state "patriarchal interests".) For example, "the economic welfare of female-heads [heads of households] depends upon networks" ("of kin and [matrilineal] community"), "networks that mediate the patriarchal-ideological nexus." She wrote that "some female heads possessed" "feminist consciousness" and, at the same time, that "in many cases female-heads are not vociferous feminists ... but rather 'victims' of patriarchal relations and structures that place them in precarious positions.... [while] they have held their ground ... [and] provided for their children".
On the other hand, she also wrote that feminists including Malathi de Alwis and Kumari Jayawardena have criticized a romanticized view of women's lives in Sri Lanka put forward by Yalman, and mentioned the Sri Lankan case "where young women raped (usually by a man) are married-off/required to cohabit with the rapists!"
Vietnam
Most of Vietnam's ethnic groups classified as Montagnard, Malayo-Polynesian or Austroasiatic are matrilineal.
In North Vietnam, according to Alessandra Chiricosta, the legend of Âu Cơ is said to be evidence of "the presence of an original 'matriarchy' ... and [it] led to the double kinship system, which developed there .... [and which] combined matrilineal and patrilineal patterns of family structure and assigned equal importance to both lines."
Europe
Ancient Greece
While men held positions of religious and political power, the Spartan constitution mandated that inheritance and proprietorship pass from mother to daughter.
Ancient Scotland
In Pictish society, succession in leadership (later kingship) was matrilineal (through the mother's side), with the reigning chief succeeded by either his brother or perhaps a nephew but not through patrilineal succession of father to son.
Oceania
Some oceanic societies, such as the Marshallese and the Trobrianders, the Palauans, the Yapese and the Siuai, are characterized by matrilineal descent. The sister's sons or the brothers of the decedent are commonly the successors in these societies.
Matrilineal identification within Judaism
Matrilineality in Judaism or matrilineal descent in Judaism is the tracing of Jewish descent through the maternal line. Nearly all Jewish communities have followed matrilineal descent from at least early Tannaitic (c. 10–70 CE) times through modern times.
The origins and date-of-origin of matrilineal descent in Judaism are uncertain. Orthodox Judaism maintains that matrilineal descent is an Oral Law from at least the time of the Receiving of the Torah on Mount Sinai (c. 1310 BCE). According to some modern academic opinions, it was likely instituted in either the early Tannaitic period (c. 10–70 CE) or the time of Ezra (c. 460 BCE).
In practice, Jewish denominations define "Who is a Jew?" via descent in different ways. All denominations of Judaism have protocols for conversion for those who are not Jewish by descent.
Orthodox Judaism and Conservative Judaism still practice matrilineal descent. Karaite Judaism, which rejects the Oral Law, generally practices patrilineal descent. Reconstructionist Judaism has recognized Jews of patrilineal descent since 1968.
In 1983, the Central Conference of American Rabbis of Reform Judaism passed a resolution waiving the need for formal conversion for anyone with at least one Jewish parent, provided that either (a) one is raised as a Jew, by Reform standards, or (b) one engages in an appropriate act of public identification, formalizing a practice that had been common in Reform synagogues for at least a generation. This 1983 resolution departed from the Reform Movement's previous position requiring formal conversion to Judaism for children without a Jewish mother. However, the closely associated Israel Movement for Reform and Progressive Judaism has rejected this resolution and requires formal conversion for anyone without a Jewish mother.
Exception for the enslaved in the United States
In the United States, the offspring of enslaved women inherited their mother's status. A significant consequence of this is that children resulting from rape or unions between enslaved women and their owners did not have any of the rights of the father as they would have had under the patrilineal succession that applied to everyone but the enslaved.
In mythology
Certain ancient myths have been argued to expose ancient traces of matrilineal customs that existed before historical records.
The ancient historian Herodotus is cited by Robert Graves in his translations of Greek myths as attesting that the Lycians of his time "still reckoned" by matrilineal descent, or were matrilineal, as were the Carians.
In Greek mythology, while the royal function was a male privilege, power devolution often came through women, and the future king inherited power through marrying the queen heiress. This is illustrated in the Homeric myths where all the noblest men in Greece vie for the hand of Helen (and the throne of Sparta), as well as the Oedipian cycle where Oedipus weds the recently widowed queen at the same time he assumes the Theban kingship.
This trend also is evident in many Celtic myths, such as the (Welsh) Mabinogi stories of Culhwch and Olwen, or the (Irish) Ulster Cycle. Most notable are the key facts of the Cúchulainn cycle: Cúchulainn gets his final secret training with a warrior woman, Scáthach, and becomes the lover of her daughter. Likewise, at the root of the Táin Bó Cuailnge is the fact that while Ailill may wear the crown of Connacht, it is his wife Medb who is the real power, and she needs to affirm her equality to her husband by owning chattels as great as his.
The Picts are widely cited as being matrilineal.
A number of other Breton stories also illustrate the motif. Even the King Arthur legends have been interpreted in this light by some. For example, the Round Table, both as a piece of furniture and as concerns the majority of knights belonging to it, was a gift to Arthur from Guinevere's father Leodegrance.
Arguments also have been made that matrilineality lay behind various fairy tale plots which may contain the vestiges of folk traditions not recorded.
For instance, the widespread motif of a father who wishes to marry his own daughter—appearing in such tales as Allerleirauh, Donkeyskin, The King who Wished to Marry His Daughter, and The She-Bear—has been explained as his wish to prolong his reign, which he would lose after his wife's death to his son-in-law. More mildly, the hostility of kings to their daughter's suitors is explained by hostility to their successors. In such tales as The Three May Peaches, Jesper Who Herded the Hares, or The Griffin, kings set dangerous tasks in an attempt to prevent the marriage.
Fairy tales with hostility between the mother-in-law and the heroine—such as Mary's Child, The Six Swans, and Perrault's Sleeping Beauty—have been held to reflect a transition between a matrilineal society, where a man's loyalty was to his mother, and a patrilineal one, where his wife could claim it, although this interpretation is predicated on such a transition being a normal development in societies.
See also
Ruth Bré, advocate for matrilineality
List of matrilineal or matrilocal societies
Married and maiden names
Mater semper certa est ("the mother is always certain"), a principle that held until the advent of in vitro pregnancies in 1978
Matriarchy
Matrifocal family
Partus sequitur ventrem
Wehali
Notes
References
Further reading
Cameron, Anne (1981) Daughters of Copper Woman. Press Gang Publishers.
Freud, Sigmund: Totem and Taboo, Leipzig, 1913 (and translations in many languages) gives a counter-position, insisting on patrilineality as the "natural" way.
Holden, C. J. & Mace, R. (2003). Spread of cattle led to the loss of matrilineal descent in Africa: a coevolutionary analysis. The Royal Society. Full text
Holden, C. J., Sear, R. & Mace, R. (2003). Matriliny as daughter-biased investment. Evolution & Human Behavior 24: 99–112. Full text
Knight, C. (2008). Early human kinship was matrilineal. In N. J. Allen, H. Callan, R. Dunbar and W. James (eds.), Early Human Kinship. Oxford: Blackwell, pp. 61–82. Full text
Reed, Evelyn (1975) Woman's Evolution, from matriarchal clan to patriarchal family. Pathfinder Press, New York, 1975. ISBN cloth 0-87348-421-5; paper 0-87348-422-3 (also available in Spanish, Farsi, and Indonesian)
Ethnology
Jewish marital law
Kinship and descent
Matriarchy
Order of succession
Human science

Human science (or human sciences in the plural) studies the philosophical, biological, social, justice, and cultural aspects of human life. Human science aims to expand the understanding of the human world through a broad interdisciplinary approach. It encompasses a wide range of fields, including history, philosophy, sociology, psychology, justice studies, evolutionary biology, biochemistry, neurosciences, folkloristics, and anthropology. It is the study and interpretation of the experiences, activities, constructs, and artifacts associated with human beings. The study of human sciences attempts to expand and enlighten the human being's knowledge of its existence, its interrelationship with other species and systems, and the development of artifacts to perpetuate the human expression and thought. It is the study of human phenomena. The study of the human experience is historical and current in nature. It requires the evaluation and interpretation of the historic human experience and the analysis of current human activity to gain an understanding of human phenomena and to project the outlines of human evolution. Human science is an objective, informed critique of human existence and how it relates to reality.

Underlying human science is the relationship between various humanistic modes of inquiry within fields such as history, sociology, folkloristics, anthropology, and economics, and advances in such things as genetics, evolutionary biology, and the social sciences, for the purpose of understanding our lives in a rapidly changing world. Its use of an empirical methodology that encompasses psychological experience contrasts with the purely positivistic approach typical of the natural sciences, which excludes all methods not based solely on sensory observations. Modern approaches in the human sciences integrate an understanding of human structure, function, and adaptation with a broader exploration of what it means to be human. The term is also used to distinguish not only the content of a field of study from that of the natural sciences, but also its methodology.
Meaning of 'science'
Ambiguity and confusion regarding the usage of the terms 'science', 'empirical science', and 'scientific method' have complicated the usage of the term 'human science' with respect to human activities. The term 'science' is derived from the Latin scientia, meaning 'knowledge'. 'Science' may be appropriately used to refer to any branch of knowledge or study dealing with a body of facts or truths systematically arranged to show the operation of general laws.
However, according to positivists, the only authentic knowledge is scientific knowledge, which comes from the positive affirmation of theories through strict scientific method, the application of knowledge, or mathematics. As a result of the positivist influence, the term science is frequently employed as a synonym for empirical science. Empirical science is knowledge based on the scientific method, a systematic approach to verification of knowledge first developed for dealing with natural physical phenomena and emphasizing the importance of experience based on sensory observation. However, even with regard to the natural sciences, significant differences exist among scientists and philosophers of science with regard to what constitutes valid scientific method—for example, evolutionary biology, geology and astronomy, studying events that cannot be repeated, can use the method of historical narratives. More recently, usage of the term has been extended to the study of human social phenomena. Thus, natural and social sciences are commonly classified as science, whereas the study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts are referred to as the humanities. Ambiguity with respect to the meaning of the term science is aggravated by the widespread use of the term formal science with reference to any one of several sciences that is predominantly concerned with abstract form that cannot be validated by physical experience through the senses, such as logic, mathematics, and the theoretical branches of computer science, information theory, and statistics.
History
The phrase 'human science' in English was used during the 17th-century scientific revolution, for example by Theophilus Gale, to draw a distinction between supernatural knowledge (divine science) and study by humans (human science). John Locke also uses 'human science' to mean knowledge produced by people, but without the distinction. By the 20th century, this latter meaning was used at the same time as 'sciences that make human beings the topic of research'.
Early development
The term "moral science" was used by David Hume (1711–1776) in his Enquiry concerning the Principles of Morals to refer to the systematic study of human nature and relationships. Hume wished to establish a "science of human nature" based upon empirical phenomena, and excluding all that does not arise from observation. Rejecting teleological, theological and metaphysical explanations, Hume sought to develop an essentially descriptive methodology; phenomena were to be precisely characterized. He emphasized the necessity of carefully explicating the cognitive content of ideas and vocabulary, relating these to their empirical roots and real-world significance.
A variety of early thinkers in the humanistic sciences took up Hume's direction. Adam Smith, for example, conceived of economics as a moral science in the Humean sense.
Later development
Partly in reaction to the establishment of positivist philosophy and the latter's Comtean intrusions into traditionally humanistic areas such as sociology, non-positivistic researchers in the humanistic sciences began to carefully but emphatically distinguish the methodological approach appropriate to these areas of study, for which the unique and distinguishing characteristics of phenomena are in the forefront (e.g., for the biographer), from that appropriate to the natural sciences, for which the ability to link phenomena into generalized groups is foremost. In this sense, Johann Gustav Droysen contrasted the humanistic sciences' need to comprehend the phenomena under consideration with natural science's need to explain phenomena, while Wilhelm Windelband coined the terms idiographic for a descriptive study of the individual nature of phenomena, and nomothetic for sciences that aim to define generalizing laws.
Wilhelm Dilthey brought nineteenth-century attempts to formulate a methodology appropriate to the humanistic sciences together with Hume's term "moral science", which he translated as Geisteswissenschaft - a term with no exact English equivalent. Dilthey attempted to articulate the entire range of the moral sciences in a comprehensive and systematic way. Meanwhile, his conception of “Geisteswissenschaften” also encompasses the abovementioned study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts. He characterized the scientific nature of a study as depending upon:
The conviction that perception gives access to reality
The self-evident nature of logical reasoning
The principle of sufficient reason
But the specific nature of the Geisteswissenschaften is based on the "inner" experience (Erleben), the "comprehension" (Verstehen) of the meaning of expressions and "understanding" in terms of the relations of the part and the whole – in contrast to the Naturwissenschaften, the "explanation" of phenomena by hypothetical laws in the "natural sciences".
Edmund Husserl, a student of Franz Brentano, articulated his phenomenological philosophy in a way that could be thought of as a basis for Dilthey's attempt. Dilthey appreciated Husserl's Logische Untersuchungen (1900/1901, the first draft of Husserl's Phenomenology) as an "epoch-making" epistemological foundation for his conception of Geisteswissenschaften.
In recent years, 'human science' has been used to refer to "a philosophy and approach to science that seeks to understand human experience in deeply subjective, personal, historical, contextual, cross-cultural, political, and spiritual terms. Human science is the science of qualities rather than of quantities and closes the subject-object split in science. In particular, it addresses the ways in which self-reflection, art, music, poetry, drama, language and imagery reveal the human condition. By being interpretive, reflective, and appreciative, human science re-opens the conversation among science, art, and philosophy."
Objective vs. subjective experiences
Since Auguste Comte, the positivistic social sciences have sought to imitate the approach of the natural sciences by emphasizing the importance of objective external observations and searching for universal laws whose operation is predicated on external initial conditions that do not take into account differences in subjective human perception and attitude. Critics argue that subjective human experience and intention plays such a central role in determining human social behavior that an objective approach to the social sciences is too confining. Rejecting the positivist influence, they argue that the scientific method can rightly be applied to subjective, as well as objective, experience. The term subjective is used in this context to refer to inner psychological experience rather than outer sensory experience. It is not used in the sense of being prejudiced by personal motives or beliefs.
Human science in universities
Since 1878, the University of Cambridge has been home to the Moral Sciences Club, with strong ties to analytic philosophy.
The Human Science degree is relatively young. It has been a degree subject at Oxford since 1969. At University College London, it was proposed in 1973 by Professor J. Z. Young and implemented two years later. His aim was to train general science graduates who would be scientifically literate, numerate and easily able to communicate across a wide range of disciplines, replacing the traditional classical training for higher-level government and management careers. Central topics include the evolution of humans, their behavior, molecular and population genetics, population growth and aging, ethnic and cultural diversity, and human interaction with the environment, including conservation, disease, and nutrition. The study of both biological and social disciplines, integrated within a framework of human diversity and sustainability, should enable the human scientist to develop professional competencies suited to address such multidimensional human problems.
In the United Kingdom, Human Science is offered at the degree level at several institutions which include:
University of Oxford
University College London (as Human Sciences and as Human Sciences and Evolution)
King's College London (as Anatomy, Developmental & Human Biology)
University of Exeter
Durham University (as Health and Human Sciences)
Cardiff University (as Human and Social Sciences)
In other countries:
Osaka University
Waseda University
Tokiwa University
Senshu University
Aoyama Gakuin University (as College of Community Studies)
Kobe University
Kanagawa University
Bunkyo University
Sophia University
Ghent University (in the narrow sense, as Moral sciences, "an integrated empirical and philosophical study of values, norms and world views")
See also
History of the Human Sciences (journal)
Social science
Humanism
Humanities
References
Bibliography
Flew, A. (1986). David Hume: Philosopher of Moral Science, Basil Blackwell, Oxford
Hume, David, An Enquiry Concerning the Principles of Morals
External links
Institute for Comparative Research in Human and Social Sciences (ICR) - Japan
Human Science Lab - London
Human Science(s) across Global Academies
Marxism philosophy
Stratocracy

A stratocracy, also called stratiocracy, is a form of government headed by military chiefs. The branches of government are administered by military forces, the government is legal under the laws of the jurisdiction at issue, and governance is usually carried out by military personnel.
Description of stratocracy
The word stratocracy first appeared in 1652 in the work of the political theorist Robert Filmer, preceded in 1649 by its use by Claudius Salmasius in reference to the newly declared Commonwealth of England. John Bouvier and Daniel Gleason describe a stratocracy as one where citizens with mandatory or voluntary military service, or veterans who have been honorably discharged, have the right to elect or govern. The military's administrative, judicial, and/or legislative powers are supported by law, the constitution, and the society. It does not necessarily need to be autocratic or oligarchic by nature in order to preserve its right to rule. The political scientist Samuel Finer distinguished between stratocracy, which was rule by the army, and military regimes, where the army did not rule but enforced the rule of the civil leaders. Peter Lyon wrote that through history stratocracies have been relatively rare, and that in the latter half of the twentieth century there was a noticeable increase in the number of stratocratic states due to the "rapid collapse of the West European thalassocracies".
Notable examples of stratocracies
Historical stratocracies
Sparta
The Diarchy of Sparta was a stratocratic kingdom. From a young age, male Spartans were put through the agoge, a rigorous education and training program necessary for full citizenship, which prepared them to be warriors. Aristotle describes the kingship at Sparta as "a kind of unlimited and perpetual generalship" (Pol. iii. 1285a), while Isocrates refers to the Spartans as "subject to an oligarchy at home, to a kingship on campaign" (iii. 24).
Rome
One of the most notable and long-lived examples of a stratocratic state is Ancient Rome, though the stratocratic system developed over time. Following the deposition of the last Roman king Lucius Tarquinius Superbus, Rome became an oligarchic Republic. However, with the gradual expansion of the empire and conflicts with its rival Carthage, culminating in the Punic Wars, the Roman political and military system experienced drastic changes. Following the so-called "Marian reforms", de facto political power became concentrated under military leadership, as the loyalty of the legionaries shifted from the Senate to their generals.
Under the First Triumvirate and during the subsequent civil wars, militarism influenced the formation of the Roman Empire, the head of which was acclaimed as "Imperator", previously an honorary title for distinguished military commanders. The Roman Army either approved of or acquiesced in the accession of every Roman emperor, with the Praetorian Guard having a decisive role in Imperial succession until Emperor Constantine abolished it. Militarization of the Empire increased over time and emperors were increasingly beholden to their armies and fleets, yet how active emperors were in actually commanding in the field in military campaigns varied from emperor to emperor, even from dynasty to dynasty. The vital political importance of the army persisted up until the destruction of the Eastern (Byzantine) Empire with the fall of Constantinople in 1453.
Goryeo
From 1170 to 1270, the kingdom of Goryeo was under effective military rule, with puppet kings on the throne serving mainly as figureheads. The majority of this period was spent under the rule of the Choe family, who set up a parallel system of private administration drawn from their military forces.
Cossacks
Cossacks were predominantly East Slavic people who became known as members of democratic, semi-military and semi-naval communities, predominantly located in Ukraine and in Southern Russia. They inhabited sparsely populated areas and islands in the lower Dnieper, Don, Terek, and Ural river basins, and played an important role in the historical and cultural development of both Russia and Ukraine. The Zaporozhian Sich was a Cossack semi-autonomous polity and proto-state that existed between the 16th and 18th centuries, and existed as an independent stratocratic state as the Cossack Hetmanate for over a hundred years.
Military frontier of the Habsburg monarchy
The Military Frontier was a borderland of the Habsburg monarchy (which became the Austrian Empire and later the Austro-Hungarian Empire). The military frontier acted as the cordon sanitaire against incursions from the Ottoman Empire. Located in the southern part of Hungarian crown land, the frontier was separated from local jurisdiction and was under direct Viennese central military administration from the 1500s to 1872. Unlike the rest of the Catholic dominated territory of the empire, the frontier area had relatively freer religious laws in order to attract settlements into the area.
Modern stratocracies
The closest modern equivalent to a stratocracy, the State Peace and Development Council of Myanmar (Burma), which ruled from 1997 to 2011, arguably differed from most other military dictatorships in that it completely abolished the civilian constitution and legislature. A new constitution that came into effect in 2010 cemented the Tatmadaw's hold on power through mechanisms such as reserving 25% of the seats in the legislature for military personnel. The civilian constitutional government was dissolved again in the 2021 Myanmar coup d'état, with power being transferred back to the Tatmadaw through the State Administration Council.
The United Kingdom overseas territory, the Sovereign Base Areas of Akrotiri and Dhekelia on the island of Cyprus, provides another example of a stratocracy: British Forces Cyprus governs the territory, with Air vice-marshal Peter J. M. Squires serving as administrator from 2022. The territory is subject to unique laws different from both those of the United Kingdom and those of Cyprus.
States argued to be stratocratic
USA
The political scientist Harold Lasswell wrote in 1941 of his concerns that the world was moving towards "a world of 'garrison states'", with the United States of America being one of the countries moving in that direction. This was supported by the historian Richard Kohn in 1975, commenting on the US's creation of a military state during its early independence, and by the political scientist Samuel Fitch in 1985. The historian Eric Hobsbawm has used the existence and power of the military-industrial complex in the US as evidence of it being a stratocratic state. The expansion and prioritisation of the military during the administrations of Reagan and H. W. Bush have also been described as signs of stratocracy in the US. The futurist Paul Saffo and the researcher Robert Marzec have argued that the post-9/11 projection of the United States was trending towards stratocracy.
USSR
The philosopher and economist Cornelius Castoriadis wrote in his 1980 text, Facing the War, that Russia had become the primary world military power. To sustain this, in the context of the visible economic inferiority of the Soviet Union in the civilian sector, he proposed that the society may no longer be dominated by the one-party state bureaucracy of the Communist Party but by a "stratocracy", describing it as a separate and dominant military sector with expansionist designs on the world. He further argued that this meant there was no internal class dynamic that could lead to social revolution within Russian society and that change could only occur through foreign intervention. Timothy Luke agreed, arguing that under the secretaryship of Mikhail Gorbachev the USSR was moving towards a stratocratic state.
African states
Various countries in post-colonial Africa have been described as stratocracies. The Republic of Egypt under the leadership of Nasser was described by the political theorist P. J. Vatikiotis as a stratocratic state. The recent Egyptian governments since the Arab Spring, including that of Abdel Fattah el-Sisi, have also been called stratocratic. George commented in a 1988 paper that the military dictatorship of Idi Amin in Uganda and the apartheid regime in South Africa should be considered stratocracies. Various previous Nigerian governments have been described as stratocratic in research, including the government under Olusegun Obasanjo, and the Armed Forces Ruling Council led by Ibrahim Babangida. Under the 1978 constitution of eSwatini, Sobhuza II appointed the Swazi army commander as the country's prime minister, and the second-in-command of the army as the head of the civil service board. This fusing of military and civil power continued in subsequent appointments, with many of the appointees viewing their civil roles as secondary to their military positions. Ghana under Jerry Rawlings has also been described as being stratocratic in nature. Karl Marx's term of barracks socialism was re-termed by the political scientist Michel Martin in his description of socialist stratocracies in the Middle East, Latin America, and Africa, including specifically the People's Republic of Benin. Martin also believes the praetorianism of francophone African republics can be called stratocratic, including Côte d'Ivoire and the Central African Republic.
Other
The French historian François Raguenet wrote in 1691 of the stratocracy of Oliver Cromwell in the Protectorate, and commented that he believed William III of England was seeking to revive the stratocracy in England.
The Prussian military writer Georg Heinrich von Berenhorst wrote in hindsight that ever since the reign of the soldier king, Prussia always remained "not a country with an army, but an army with a country" (a quote often misattributed to Voltaire and Honoré Gabriel Riqueti, comte de Mirabeau). It has been argued that the subsequent dominance of the Kingdom of Prussia in the North German Confederation and the German Empire, and the expansive militarism in their administrations and policies, saw a continuance of the stratocratic Prussian government.
British commentators such as Sir Richard Burton described the pre-Tanzimat Ottoman Empire as a stratocratic state.
The Warlord Era of China is viewed as a period of stratocratic struggles, with the researcher Peng Xiuliang pointing to the actions and policies of Wang Shizhen, a general and politician of the Republic of China, as an example of the stratocratic forces within the Chinese government of the time.
Occupied Poland in World War I was placed under general military governments of Germany and Austria-Hungary. This was a stratocratic system in which the military was responsible for the political administration of Poland.
Various military juntas of central and south America have also been described as stratocracies.
Since the Six-Day War of 1967, the Israeli occupation of the West Bank and East Jerusalem (both taken from Jordan), the Sinai Peninsula and the Gaza Strip (taken from Egypt), and the Golan Heights (taken from Syria) can be argued to have constituted stratocratic rule. The West Bank and Gaza were governed by the Israeli Military Governorate and Civil Administration, whose functions were later transferred to the Palestinian National Authority, which governs the Palestinian territories. Only East Jerusalem and the Golan Heights were annexed into Israeli territory, from 1980, annexations that remain internationally unrecognized; the United Nations once referred to these territories as occupied Arab territories.
Fictional stratocracies
Stratocratic forms of government have been popular in fictional stories.
The country of Amestris in the Fullmetal Alchemist manga and anime series is a nominal parliamentary republic without elections, where parliament has been used as a façade to distract from the authoritarian regime, as the government is almost completely centralized by the military, and the majority of government positions are occupied by military personnel.
Bowser from the Super Mario video game franchise is the supreme leader of a stratocratic empire in which he has many other generals working under his militaristic rules such as Kamek, Private Goomp, Sergeant Guy, Corporal Paraplonk and many others.
The Cardassian Union of the Star Trek universe can be described as a stratocracy, with a constitutionally and socially sanctioned, as well as a politically dominant military that nonetheless has immense totalitarian characteristics.
In Bryan Konietzko and Michael Dante DiMartino's Avatar: The Last Airbender, the Earth Kingdom is deeply divided and, during the Hundred Year War, relies on an unofficial confederal stratocratic rule of small towns to maintain control in the face of the Fire Nation's military, without the Earth Monarch's assistance.
Both Eldia and Marley from the Japanese manga and anime series Attack on Titan are stratocratic nations ruled by military governments. After a coup d'état, the government of Eldia was displaced in favor of a military-led system with a puppet monarchy as its public front.
The Galactic Empire from the original Star Wars trilogy can be described as a stratocracy. Although ruled by the Sith through its Emperor, Sheev Palpatine, known secretly as Darth Sidious, the functioning of the entire government was controlled by the military and explicitly sanctioned by its leaders. All sectors were controlled by a Moff or Grand Moff who were also high-ranking military officers.
The Global Defense Initiative from the Command & Conquer franchise is another example: initially being a United Nations task force to combat the Brotherhood of Nod and research the alien substance Tiberium, later expanding to a worldwide government led by military leaders after the collapse of society due to Tiberium's devastating effects on Earth.
Blizzard Entertainment's World of Warcraft features an antagonistic group of Orcish clans, which joined in the formation of The Iron Horde, a militaristic clan governed by warlords.
In Robert A. Heinlein's Starship Troopers, the Terran Federation was set up by a group of military veterans in Aberdeen, Scotland when governments collapsed following a world war. While national service is voluntary, earning citizenship in the Federation requires civilians to "enroll in the Federal Service of the Terran Federation for a term of not less than two years and as much longer as may be required by the needs of the Service." While Federal Service is not exclusively military service, that appears to be the dominant form. It is believed that only those willing to sacrifice their lives on the state's behalf are fit to govern. While the government is a representative democracy, the franchise is only granted to people who have completed service, mostly in the military, due to this law (active military can neither vote nor serve in political/non-military offices).
The Turian Hierarchy of Mass Effect is another example of a fictional stratocracy, where the civilian and military populations cannot be distinguished, and the government and the military are the same, and strongly meritocratic, with designated responsibilities for everyone.
The five members of Greater Turkiye in the manga and anime Altair: A Record of Battles are called stratocracies, with them being based on the Ottoman Empire.
See also
Junta (governing body)
Militarism
Political strongman
Military government:
Military dictatorship
Military junta
Military occupation
References
Bibliography
Authoritarianism
Forms of government
Militarism
Military sociology
World-systems theory

World-systems theory (also known as world-systems analysis or the world-systems perspective) is a multidisciplinary approach to world history and social change which emphasizes the world-system (and not nation states) as the primary (but not exclusive) unit of social analysis. World-systems theorists argue that their theory explains the rise and fall of states, income inequality, social unrest, and imperialism.
"World-system" refers to the inter-regional and transnational division of labor, which divides the world into core countries, semi-periphery countries, and periphery countries. Core countries have higher-skill, capital-intensive industries, and the rest of the world has low-skill, labor-intensive industries and extraction of raw materials. This constantly reinforces the dominance of the core countries. This structure is unified by the division of labour. It is a world-economy rooted in a capitalist economy. For a time, certain countries have become the world hegemon; during the last few centuries, as the world-system has extended geographically and intensified economically, this status has passed from the Netherlands, to the United Kingdom and (most recently) to the United States.
Immanuel Wallerstein is the main proponent of world systems theory. Components of the world-systems analysis are longue durée by Fernand Braudel, "development of underdevelopment" by Andre Gunder Frank, and the single-society assumption. Longue durée is the concept of the gradual change through the day-to-day activities by which social systems are continually reproduced. "Development of underdevelopment" describes the economic processes in the periphery as the opposite of the development in the core. Poorer countries are impoverished to enable a few countries to get richer. Lastly, the single-society assumption opposes the multiple-society assumption and includes looking at the world as a whole.
Background
Immanuel Wallerstein has developed the best-known version of world-systems analysis, beginning in the 1970s. Wallerstein traces the rise of the capitalist world-economy from the "long" 16th century (c. 1450–1640). The rise of capitalism, in his view, was an accidental outcome of the protracted crisis of feudalism (c. 1290–1450). Europe (the West) used its advantages and gained control over most of the world economy and presided over the development and spread of industrialization and capitalist economy, indirectly resulting in unequal development.
Though other commentators refer to Wallerstein's project as world-systems "theory," he consistently rejects that term. For Wallerstein, world-systems analysis is a mode of analysis that aims to transcend the structures of knowledge inherited from the 19th century, especially the definition of capitalism, the divisions within the social sciences, and those between the social sciences and history. For Wallerstein, then, world-systems analysis is a "knowledge movement" that seeks to discern the "totality of what has been paraded under the labels of the... human sciences and indeed well beyond". "We must invent a new language," Wallerstein insists, to transcend the illusions of the "three supposedly distinctive arenas" of society, economy and politics. The trinitarian structure of knowledge is grounded in another, even grander, modernist architecture, the distinction of biophysical worlds (including those within bodies) from social ones: "One question, therefore, is whether we will be able to justify something called social science in the twenty-first century as a separate sphere of knowledge." Many other scholars have contributed significant work in this "knowledge movement."
Origins
Influences
World-systems theory emerged in the 1970s. Its roots can be found in sociology, but it has developed into a highly interdisciplinary field.
World-systems theory aimed to replace modernization theory, which Wallerstein criticised for three reasons:
its focus on the nation state as the only unit of analysis
its assumption that there is only a single path of evolutionary development for all countries
its disregard of transnational structures that constrain local and national development.
There are three major predecessors of world-systems theory: the Annales school, the Marxist tradition, and dependency theory. The Annales School tradition, represented most notably by Fernand Braudel, influenced Wallerstein to focus on long-term processes and geo-ecological regions as units of analysis. Marxism added a stress on social conflict, a focus on the capital accumulation process and competitive class struggles, a focus on a relevant totality, the transitory nature of social forms and a dialectical sense of motion through conflict and contradiction.
World-systems theory was also significantly influenced by dependency theory, a neo-Marxist explanation of development processes.
Other influences on the world-systems theory come from scholars such as Karl Polanyi, Nikolai Kondratiev and Joseph Schumpeter. These scholars researched business cycles and developed concepts of three basic modes of economic organization: reciprocal, redistributive, and market modes. Wallerstein reframed these concepts into a discussion of mini systems, world empires, and world economies.
Wallerstein sees the development of the capitalist world economy as detrimental to a large proportion of the world's population. Wallerstein views the period since the 1970s as an "age of transition" that will give way to a future world system (or world systems) whose configuration cannot be determined in advance.
Other world-systems thinkers include Oliver Cox, Samir Amin, Giovanni Arrighi, and Andre Gunder Frank, with major contributions by Christopher Chase-Dunn, Beverly Silver, Janet Abu Lughod, Li Minqi, Kunibert Raffer, and others. In sociology, a primary alternative perspective is World Polity Theory, as formulated by John W. Meyer.
Dependency theory
World-systems analysis builds upon but also differs fundamentally from dependency theory. While accepting world inequality, the world market and imperialism as fundamental features of historical capitalism, Wallerstein broke with orthodox dependency theory's central proposition. For Wallerstein, core countries do not exploit poor countries, for three basic reasons.
Firstly, core capitalists exploit workers in all zones of the capitalist world economy (not just the periphery) and therefore, the crucial redistribution between core and periphery is surplus value, not "wealth" or "resources" abstractly conceived. Secondly, core states do not exploit poor states, as dependency theory proposes, because capitalism is organised around an inter-regional and transnational division of labor rather than an international division of labour. Thirdly, economically relevant structures such as metropolitan regions, international unions and bilateral agreements tend to weaken and blur out the economic importance of nation-states and their borders.
During the Industrial Revolution, for example, English capitalists exploited slaves (unfree workers) in the cotton zones of the American South, a peripheral region within a semiperipheral country, United States.
From a largely Weberian perspective, Fernando Henrique Cardoso described the main tenets of dependency theory as follows:
There is a financial and technological penetration of the periphery and semi-periphery countries by the developed capitalist core countries.
That produces an unbalanced economic structure within the peripheral societies and between them and the central countries.
That leads to limitations upon self-sustained growth in the periphery.
That helps the appearance of specific patterns of class relations.
They require modifications in the role of the state to guarantee the functioning of the economy and the political articulation of a society, which contains, within itself, foci of inarticulateness and structural imbalance.
Dependency and world system theory propose that the poverty and backwardness of poor countries are caused by their peripheral position in the international division of labor. Since the capitalist world system evolved, the distinction between the central and the peripheral states has grown and diverged. In recognizing a tripartite pattern in the division of labor, world-systems analysis criticized dependency theory with its bimodal system of only cores and peripheries.
Immanuel Wallerstein
The best-known version of the world-systems approach was developed by Immanuel Wallerstein. Wallerstein notes that world-systems analysis calls for a unidisciplinary historical social science and contends that the modern disciplines, products of the 19th century, are deeply flawed because they are not separate logics, as is manifest for example in the de facto overlap of analysis among scholars of the disciplines. Wallerstein offers several definitions of a world-system, defining it in 1974 briefly:
He also offered a longer definition:
In 1987, Wallerstein again defined it:
Wallerstein characterizes the world system as a set of mechanisms, which redistributes surplus value from the periphery to the core. In his terminology, the core is the developed, industrialized part of the world, and the periphery is the "underdeveloped", typically raw materials-exporting, poor part of the world; the market being the means by which the core exploits the periphery.
In addition to these, Wallerstein defines four temporal features of the world system. Cyclical rhythms represent the short-term fluctuation of the economy, while secular trends are deeper long-run tendencies, such as general economic growth or decline. The term contradiction means a general controversy in the system, usually concerning some short-term versus long-term tradeoff. An example is the problem of underconsumption: driving down wages increases profit for capitalists in the short term, but in the long term the decrease in wages may have a crucially harmful effect by reducing the demand for the product. The last temporal feature is the crisis: a crisis occurs if a constellation of circumstances brings about the end of the system.
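The underconsumption tradeoff can be illustrated with a deliberately simple numerical sketch. All numbers and functional forms below are invented for illustration; the point is only the structure of the contradiction: a wage cut raises margins while demand is still unadjusted, but lowers profit once demand has fallen with workers' purchasing power.

```python
# Toy model of the underconsumption contradiction (hypothetical numbers).
PRICE = 10.0        # fixed sale price per unit
OTHER_COSTS = 2.0   # non-wage cost per unit

def margin(wage: float) -> float:
    """Profit per unit sold at the fixed price."""
    return PRICE - (wage + OTHER_COSTS)

def demand(wage: float) -> float:
    """Units sold, rising with workers' purchasing power (illustrative form)."""
    return 20.0 * wage - 60.0

w_old, w_new = 6.0, 4.0
baseline  = margin(w_old) * demand(w_old)  # 2.0 * 60 = 120
short_run = margin(w_new) * demand(w_old)  # 4.0 * 60 = 240: demand not yet adjusted
long_run  = margin(w_new) * demand(w_new)  # 4.0 * 20 =  80: demand has fallen

print(baseline, short_run, long_run)  # 120.0 240.0 80.0
```

In this toy run the wage cut doubles profit in the short run, but once demand reflects the lower wages, profit ends up below the baseline, which is the tradeoff the term contradiction points to.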
In Wallerstein's view, there have been three kinds of historical systems across human history: "mini-systems" or what anthropologists call bands, tribes, and small chiefdoms, and two types of world-systems, one that is politically unified and the other is not (single state world empires and multi-polity world economies). World-systems are larger, and are ethnically diverse. The modern world-system, a capitalist world-economy, is unique in being the first and only world-system, which emerged around 1450 to 1550, to have geographically expanded across the entire planet, by about 1900. It is defined, as a world-economy, in having many political units tied together as an interstate system and through its division of labor based on capitalist enterprises.
Importance
World-systems theory can be useful in understanding world history and the core countries' motives for imperialism and other involvements, such as US aid following natural disasters in developing Central American countries or the imposition of regimes on other states. With the interstate system as a system constant, the relative economic power of the three tiers points to the internal inequalities that are on the rise in states that appear to be developing. Some argue, though, that this theory ignores local efforts of innovation that have nothing to do with the global economy, such as the labor patterns implemented in Caribbean sugar plantations. Other modern global topics can be easily traced back to world-systems theory.
As global discussion of climate change and the future of industrial corporations continues, world-systems theory can help to explain the creation of the G-77 group, a coalition of 77 peripheral and semi-peripheral states wanting a seat at the global climate discussion table. The group was formed in 1964, but it now has more than 130 members who advocate for multilateral decision making. Since its creation, G-77 members have collaborated with two main aims: 1) decreasing their vulnerability based on the relative size of economic influence, and 2) improving outcomes for national development. World-systems theory has also been utilized to trace the damage to the ozone layer attributed to CO2 emissions. A country's level of entrance into and involvement in the world economy can affect the damage it does to the earth. In general, scientists can make assumptions about a country's CO2 emissions based on GDP. Higher-exporting countries, countries with debt, and countries with social-structure turmoil land in the upper-periphery tier. Though more research must be done in this arena, scientists can treat core, semi-periphery, and periphery labels as indicators of CO2 intensity.
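As a rough illustration of inferring emissions from GDP and world-system position, the following sketch fits a log-log regression with a tier covariate. All figures are invented for illustration and are not real emissions data; the tier coding (0 = core, 1 = semi-periphery, 2 = periphery) is an assumption of this sketch, not a standard scheme.

```python
import numpy as np

# Hypothetical data: GDP in USD, CO2 in tonnes, world-system tier.
gdp  = np.array([2.1e13, 4.0e12, 1.5e12, 9.0e11, 3.0e11, 5.0e10])
co2  = np.array([5.0e9,  1.1e9,  4.5e8,  3.5e8,  1.2e8,  1.5e7])
tier = np.array([0, 0, 1, 1, 2, 2])  # 0 = core, 1 = semi-periphery, 2 = periphery

# Model: log(co2) = a + b*log(gdp) + c*tier, fit by ordinary least squares.
X = np.column_stack([np.ones_like(gdp), np.log(gdp), tier])
y = np.log(co2)
(a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"GDP elasticity of emissions: {b:.2f}")
print(f"tier effect on log emissions: {c:.2f}")
```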
In the health realm, studies have shown the effects of less-industrialized, peripheral countries' acceptance of packaged foods and beverages that are loaded with sugars and preservatives. While core states benefit from dumping large amounts of processed, fatty foods into poorer states, there has been a recorded increase in obesity and related chronic conditions such as diabetes and chronic heart disease. While some aspects of modernization theory have been found to improve the global obesity crisis, a world-systems approach identifies holes in the progress.
Knowledge economy and finance now dominate the industry in core states while manufacturing has shifted to semi-periphery and periphery ones. Technology has become a defining factor in the placement of states into core or semi-periphery versus periphery. Wallerstein's theory leaves room for poor countries to move into better economic development, but he also admits that there will always be a need for periphery countries as long as there are core states who derive resources from them. As a final mark of modernity, Wallerstein admits that advocates are the heart of this world-system: “Exploitation and the refusal to accept exploitation as either inevitable or just constitute the continuing antinomy of the modern era”.
Characteristics
World-systems analysis argues that capitalism, as a historical system, has always integrated a variety of labor forms within a functioning division of labor (world economy). Countries do not have economies but are part of the world economy. Far from being separate societies or worlds, the world economy manifests a tripartite division of labor, with core, semiperipheral and peripheral zones. In the core zones, businesses, with the support of states they operate within, monopolise the most profitable activities of the division of labor.
There are many ways to attribute a specific country to the core, semi-periphery, or periphery. Using an empirically based, sharp, formal definition of "domination" in a two-country relationship, Piana in 2004 defined the "core" as made up of "free countries" dominating others without being dominated, the "semi-periphery" as the countries that are dominated (usually, but not necessarily, by core countries) but at the same time dominating others (usually in the periphery), and the "periphery" as the countries dominated. Based on 1998 data, Piana gives the full list of countries in the three regions, together with a discussion of methodology.
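Piana's formal definition is effectively a classification rule over a directed "domination" relation, which a short sketch can make concrete. The country names and the relation below are hypothetical, invented purely to illustrate the rule.

```python
# Hypothetical directed domination relation: (dominator, dominated) pairs.
dominates = {
    ("A", "C"), ("A", "D"),  # A dominates C and D
    ("B", "C"),              # B dominates C
    ("C", "E"),              # C is dominated yet also dominates E
}

dominators = {a for a, _ in dominates}
dominated  = {b for _, b in dominates}
countries  = dominators | dominated

def classify(country: str) -> str:
    if country in dominators and country not in dominated:
        return "core"            # dominates others without being dominated
    if country in dominators:    # reached only if also dominated
        return "semi-periphery"  # dominated, but dominating others
    return "periphery"           # dominated, dominating no one

for c in sorted(countries):
    print(c, "->", classify(c))
# A -> core, B -> core, C -> semi-periphery, D -> periphery, E -> periphery
```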
The late 18th and early 19th centuries marked a great turning point in the development of capitalism, in that capitalists achieved state power in the key states, which furthered the industrial revolution that marked the rise of capitalism. World-systems analysis contends that capitalism as a historical system formed earlier, and that countries do not "develop" in stages; rather, the system does, and events have a different meaning as phases in the development of historical capitalism. Among these was the emergence of the three ideologies of the national developmental mythology (the idea that countries can develop through stages if they pursue the right set of policies): conservatism, liberalism, and radicalism.
Proponents of world-systems analysis see the world stratification system the same way Karl Marx viewed class (ownership versus nonownership of the means of production) and Max Weber viewed class (which, in addition to ownership, stressed occupational skill level in the production process). The core states primarily own and control the major means of production in the world and perform the higher-level production tasks. The periphery nations own very little of the world's means of production (even when they are located in periphery states) and provide less-skilled labour. Like a class system within a state, class positions in the world economy result in an unequal distribution of rewards or resources. The core states receive the greatest share of surplus production, and periphery states receive the smallest share. Furthermore, core states are usually able to purchase raw materials and other goods from non-core states at low prices and demand higher prices for their exports to non-core states. Chirot (1986) lists the five most important benefits coming to core states from their domination of the periphery:
Access to a large quantity of raw material
Cheap labour
Enormous profits from direct capital investments
A market for exports
Skilled professional labor through migration of these people from the non-core to the core.
According to Wallerstein, the unique qualities of the modern world system include its capitalistic nature, its truly global nature, and the fact that it is a world economy that has not become politically unified into a world empire.
Core states
In general, core states:
Are the most economically diversified, wealthy, and powerful both economically and militarily
Have strong central governments controlling extensive bureaucracies and powerful militaries
Have stronger and more complex state institutions that help manage economic affairs internally and externally
Have a sufficiently large tax base, such that state institutions can provide the infrastructure for a strong economy
Are highly industrialised and produce manufactured goods for export instead of raw materials
Increasingly tend to specialise in the information, finance, and service industries
Are more regularly at the forefront of new technologies and new industries. Contemporary examples include the electronics and biotechnology industries. The use of the assembly line is a historic example of this trend.
Have strong bourgeois and working classes
Have significant means of influence over non-core states
Are relatively independent of outside control
Throughout the history of the modern world system, a group of core states has competed for access to the world's resources, economic dominance, and hegemony over periphery states. Occasionally, one core state possessed clear dominance over the others. According to Immanuel Wallerstein, a core state is dominant over all the others when it has a lead in three forms of economic dominance:
Productivity dominance allows a country to develop higher-quality products at a cheaper price compared to other countries.
Productivity dominance may lead to trade dominance. In this case, there is a favorable balance of trade for the dominant state since other countries are buying more of its products than those of others.
Trade dominance may lead to financial dominance. At this point, more money is flowing into the country than is leaving it. Bankers from the dominant state tend to acquire greater control over the world's financial resources.
Military dominance is also likely once a state has reached this point. However, it has been posited that throughout the modern world system, no state has been able to use its military to gain economic dominance. Each of the past dominant states became dominant with fairly small levels of military spending and began to lose economic dominance with military expansion later on. Historically, cores were located in northwestern Europe (England, France, Netherlands) but later appeared in other parts of the world such as the United States, Canada, and Australia.
Peripheral states
In general, peripheral states:
Are the least economically diversified
Have relatively weak governments
Have relatively weak institutions, with tax bases too small to support infrastructural development
Tend to depend on one type of economic activity, often by extracting and exporting raw materials to core states
Tend to be the least industrialized
Are often targets for investments from multinational (or transnational) corporations from core states that come into the country to exploit cheap unskilled labor in order to export back to core states
Have a small bourgeois and a large peasant classes
Tend to have populations with high percentages of poor and uneducated people
Tend to have very high social inequality because of small upper classes that own most of the land and have profitable ties to multinational corporations
Tend to be extensively influenced by core states and their multinational corporations and often forced to follow economic policies that help core states and harm the long-term economic prospects of peripheral states.
Historically, peripheries were found outside Europe, such as in Latin America and today in sub-Saharan Africa.
Semi-peripheral states
Semi-peripheral states are those that are midway between the core and periphery. Thus, they have to keep themselves from falling into the category of peripheral states and at the same time, they strive to join the category of core states. Therefore, they tend to apply protectionist policies most aggressively among the three categories of states. They tend to be countries moving towards industrialization and more diversified economies. These regions often have relatively developed and diversified economies but are not dominant in international trade. They tend to export more to peripheral states and import more from core states in trade. According to some scholars, such as Chirot, they are not as subject to outside manipulation as peripheral societies; but according to others (Barfield), they have "peripheral-like" relations to the core. While in the sphere of influence of some cores, semi-peripheries also tend to exert their own control over some peripheries. Further, semi-peripheries act as buffers between cores and peripheries and thus "...partially deflect the political pressures which groups primarily located in peripheral areas might otherwise direct against core-states" and stabilise the world system.
Semi-peripheries can come into existence from developing peripheries and declining cores. Historically, two examples of semiperipheral states would be Spain and Portugal, which fell from their early core positions but still managed to retain influence in Latin America. Those countries imported silver and gold from their American colonies but then had to use it to pay for manufactured goods from core countries such as England and France. In the 20th century, states like the "settler colonies" of Australia, Canada and New Zealand had a semiperipheral status. In the 21st century, states like Brazil, Russia, India, China, South Africa (BRICS), and Israel are usually considered semiperipheral.
Interstate system
Between the core, periphery and semi-periphery countries lies a system of interconnected state relationships, or the interstate system. The interstate system arose either as a concomitant process or as a consequence of the development of the capitalist world-system over the course of the “long” 16th century as states began to recognize each other's sovereignty and form agreements and rules between themselves.
Wallerstein wrote that there were no concrete rules about what exactly constitutes an individual state as various indicators of statehood (sovereignty, power, market control etc.) could range from total to nil. There were also no clear rules about which group controlled the state, as various groups located inside, outside, and across the states’ frontiers could seek to increase or decrease state power in order to better profit from a world-economy. Nonetheless, the “relative power continuum of stronger and weaker states has remained relatively unchanged over 400-odd years” implying that while there is no universal state system, an interstate system had developed out of the sum of state actions, which existed to reinforce certain rules and preconditions of statehood. These rules included maintaining consistent relations of production, and regulating the flow of capital, commodities and labor across borders to maintain the price structures of the global market. If weak states attempt to rewrite these rules as they prefer them, strong states will typically intervene to rectify the situation.
The ideology of the interstate system is sovereign equality, and while the system generally presents a set of constraints on the power of individual states, within the system states are “neither sovereign nor equal.” Not only do strong states impose their will on weak states, strong states also impose limitations upon other strong states, and tend to seek strengthened international rules, since enforcing consequences for broken rules can be highly beneficial and confer comparative advantages.
External areas
External areas are those that maintain socially necessary divisions of labor independent of the capitalist world economy.
The interpretation of world history
Wallerstein traces the origin of today's world-system to the "long 16th century" (a period that began with the discovery of the Americas by Western European sailors and ended with the English Revolution of 1640). According to Wallerstein, globalization, or the formation of the world system, is a process coterminous with the spread and development of capitalism over the past 500 years.
Janet Abu-Lughod argues that a pre-modern world system extending across Eurasia existed in the 13th century, prior to the formation of the modern world-system identified by Wallerstein. She contends that the Mongol Empire played an important role in stitching together the Chinese, Indian, Muslim and European regions in the 13th century, before the rise of the modern world system. In debates, Wallerstein contends that Abu-Lughod's system was not a "world-system" because it did not entail integrated production networks; it was instead a vast trading network.
Andre Gunder Frank goes further and claims that a global world system that includes Asia, Europe and Africa has existed since the 4th millennium BCE. The centre of this system was in Asia, specifically China. Andrey Korotayev goes even further than Frank and dates the beginning of the world system formation to the 10th millennium BCE and connects it with the start of the Neolithic Revolution in the Middle East. According to him, the centre of this system was originally in Western Asia.
Before the 16th century, Europe was dominated by feudal economies. European economies grew from the mid-12th to the 14th century, but from the 14th to the mid-15th century they suffered a major crisis. Wallerstein explains this crisis as caused by the following:
stagnation or even decline of agricultural production, increasing the burden of peasants,
decreased agricultural productivity caused by changing climatological conditions (Little Ice Age),
an increase in epidemics (Black Death),
the optimum level of the feudal economy having been reached in its economic cycle, after which the economy moved beyond it and entered a depression period.
As a response to the failure of the feudal system, European society embraced the capitalist system. Europeans were motivated to develop technology to explore and trade around the world, using their superior military to take control of the trade routes. Europeans exploited their initial small advantages, which led to an accelerating process of accumulation of wealth and power in Europe.
Wallerstein notes that never before had an economic system encompassed that much of the world, with trade links crossing so many political boundaries. In the past, geographically large economic systems existed but were mostly limited to spheres of domination of large empires (such as the Roman Empire); development of capitalism enabled the world economy to extend beyond individual states. International division of labor was crucial in deciding what relationships exist between different regions, their labor conditions and political systems. For classification and comparison purposes, Wallerstein introduced the categories of core, semi-periphery, periphery, and external countries. Cores monopolized the capital-intensive production, and the rest of the world could provide only workforce and raw resources. The resulting inequality reinforced existing unequal development.
According to Wallerstein there have only been three periods in which a core state dominated in the modern world-system, with each lasting less than one hundred years. In the initial centuries of the rise of European dominance, Northwestern Europe constituted the core, Mediterranean Europe the semiperiphery, and Eastern Europe and the Western hemisphere (and parts of Asia) the periphery. Around 1450, Spain and Portugal took the early lead when conditions became right for a capitalist world-economy. They led the way in establishing overseas colonies. However, Portugal and Spain lost their lead, primarily by becoming overextended with empire-building. It became too expensive to dominate and protect so many colonial territories around the world.
The first state to gain clear dominance was the Netherlands in the 17th century, after its revolt against Spanish rule led to a new financial system that many historians consider revolutionary. An impressive shipbuilding industry also contributed to their economic dominance through more exports to other countries. Eventually, other countries began to copy the financial methods and efficient production created by the Dutch. After the Dutch gained their dominant status, the standard of living rose, pushing up production costs.
Dutch bankers began to go outside of the country seeking profitable investments, and the flow of capital moved, especially to England. By the end of the 17th century, conflict among core states increased as a result of the economic decline of the Dutch. Dutch financial investment helped England gain productivity and trade dominance, and Dutch military support helped England to defeat France, the other country competing for dominance at the time.
In the 19th century, Britain replaced the Netherlands as the hegemon. As a result of the new British dominance, the world system became relatively stable again during the 19th century. The British began to expand globally, with many colonies in the New World, Africa, and Asia. The colonial system began to place a strain on the British military and, along with other factors, led to an economic decline. Again there was a great deal of core conflict after the British lost their clear dominance. This time it was Germany, and later Italy and Japan that provided the new threat.
Industrialization was another ongoing process during British dominance, resulting in the diminishing importance of the agricultural sector. In the 18th century, Britain was Europe's leading industrial and agricultural producer; by 1900, only 10% of England's population was working in the agricultural sector.
By 1900, the modern world system appeared very different from that of a century earlier in that most of the periphery societies had already been colonised by one of the older core states. In 1800, the old European core claimed 35% of the world's territory, but by 1914, it claimed 85% of the world's territory, with the Scramble for Africa closing out the imperial era. If a core state wanted periphery areas to exploit, as the Dutch and British had done, these periphery areas had to be taken from another core state, which the US did by way of the Spanish–American War, and which Germany, and then Japan and Italy, attempted to do in the lead-up to World War II. The modern world system was thus geographically global, and even the most remote regions of the world had all been integrated into the global economy.
Among the countries vying for core status was the United States. The American Civil War led to more power for the Northern industrial elites, who were now better able to pressure the government for policies helping industrial expansion. Like the Dutch bankers, British bankers were putting more investment toward the United States. The US had a small military budget compared to other industrial states at the time.
The US began to take the place of the British as a new dominant state after World War I. With Japan and Europe in ruins after World War II, the US was able to dominate the modern world system more than any other country in history, while the USSR and, to a lesser extent, China were viewed as primary threats. At its height, the US accounted for over half of the world's industrial production, owned two-thirds of the world's gold reserves and supplied one-third of the world's exports.
However, since the end of the Cold War, the future of US hegemony has been questioned by some scholars, as its hegemonic position has been in decline for a few decades. By the end of the 20th century, the core of the wealthy industrialized countries was composed of Western Europe, the United States, Japan and a rather limited selection of other countries. The semiperiphery was typically composed of independent states that had not achieved Western levels of influence, while poor former colonies of the West formed most of the periphery.
Criticisms
World-systems theory has attracted criticism from its rivals, notably for being too focused on the economy and not enough on culture, and for being too core-centric and state-centric. William I. Robinson has criticized world-systems theory for its nation-state centrism, state-structuralist approach, and its inability to conceptualize the rise of globalization. Robinson suggests that world-systems theory does not account for emerging transnational social forces and the relationships forged between them and global institutions serving their interests. These forces operate on a global, rather than state, system and cannot be understood by Wallerstein's nation-centered approach.
According to Wallerstein himself, critique of the world-systems approach comes from four directions: the positivists, the orthodox Marxists, the state autonomists, and the culturalists. The positivists criticise the approach as too prone to generalization, lacking quantitative data and failing to put forth a falsifiable proposition. Orthodox Marxists find the world-systems approach deviating too far from orthodox Marxist principles, such as by not giving enough weight to the concept of social class. It is worth noting, however, that "[d]ependency theorists argued that [the beneficiaries of class society, the bourgeoisie,] maintained a dependent relationship because their private interests coincided with the interest of the dominant states." The state autonomists criticize the theory for blurring the boundaries between state and businesses. Further, the positivists and the state autonomists argue that the state should be the central unit of analysis. Finally, the culturalists argue that world-systems theory puts too much importance on the economy and not enough on culture. In Wallerstein's own words:
One of the fundamental conceptual problems of world-system theory is that its actual conceptual units are social systems. The assumptions which define them need to be examined, as well as how they are related to each other and how one changes into another. The essential argument of world-system theory is that in the 16th century a capitalist world economy developed, which could be described as a world system. The following is a theoretical critique concerned with the basic claims of world-system theory:
"There are today no socialist systems in the world-economy any more than there are feudal systems because there is only one world system. It is a world-economy and it is by definition capitalist in form."
Robert Brenner has pointed out that the prioritization of the world market means the neglect of local class structures and class struggles:
"They fail to take into account either the way in which these class structures themselves emerge as the outcome of class struggles whose results are incomprehensible in terms merely of market forces."
Another criticism is that of reductionism made by Theda Skocpol: she believes the interstate system is far from being a simple superstructure of the capitalist world economy:
"The international states system as a transnational structure of military competition was not originally created by capitalism. Throughout modern world history, it represents an analytically autonomous level [... of] world capitalism, but [is] not reducible to it."
A concept that can be seen partly as a critique and partly as a renewal of the approach is the concept of coloniality (Anibal Quijano, 2000, Nepantla, "Coloniality of power, eurocentrism and Latin America"). Developed by a school of thought in Latin America, it re-uses the concepts of the world division of labour and the core/periphery system in its account of coloniality. But, criticizing the "core-centric" origin of world-systems analysis and its exclusively economic focus, "coloniality" allows a further conception of how power still operates in a colonial way over worldwide populations (Ramon Grosfoguel, "The epistemic decolonial turn", 2007): "by 'colonial situations' I mean the cultural, political, sexual, spiritual, epistemic and economic oppression/exploitation of subordinate racialized/ethnic groups by dominant racialized/ethnic groups with or without the existence of colonial administration". Coloniality covers, so far, several fields, such as coloniality of gender (Maria Lugones), coloniality of "being" (Maldonado Torres), coloniality of knowledge (Walter Mignolo) and coloniality of power (Anibal Quijano).
Related journals
Annales. Histoire, Sciences sociales
Ecology and Society
Journal of World-Systems Research
See also
Big History
Dependency theory
Structuralist economics
Third Space Theory
Third place
Hybridity
Post-colonial theory
General systems theory
Geography and cartography in medieval Islam
Globalization
International relations theory
List of cycles
Social cycle theory
Sociocybernetics
Systems philosophy
Systems thinking
Systemography
War cycles
Hierarchy theory
References
Further reading
Works of Samir Amin; especially 'Empire of Chaos' (1991) and 'Le developpement inegal. Essai sur les formations sociales du capitalisme peripherique' (1973)
Works of Giovanni Arrighi
József Böröcz
(2005), 'Redistributing Global Inequality: A Thought Experiment', Economic and Political Weekly, February 26: 886–92.
(1992) 'Dual Dependency and Property Vacuum: Social Change in the State Socialist Semiperiphery' Theory & Society, 21:74-104.
Grinin, L., Korotayev, A. and Tausch A. (2016) Economic Cycles, Crises, and the Global Periphery. Springer International Publishing, Heidelberg, New York, Dordrecht, London.
With contributions by Samir Amin, Christopher Chase-Dunn, Andre Gunder Frank, Immanuel Wallerstein. Pre-publication download of Chapter 5, "The European Union: global challenge or global governance? 14 world system hypotheses and two scenarios on the future of the Union", pages 93–196, by Arno Tausch, at http://edoc.vifapol.de/opus/volltexte/2012/3587/pdf/049.pdf.
Korotayev A., Malkov A., Khaltourina D. Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS, 2006.
Lenin, Vladimir, 'Imperialism, the Highest Stage of Capitalism'
Moore, Jason W. (2000). "Environmental Crises and the Metabolic Rift in World-Historical Perspective ," Organization & Environment 13(2), 123–158.
Raffer K. (1993), ‘Trade, transfers, and development: problems and prospects for the twenty-first century’ Aldershot, Hants, England; Brookfield, Vt., USA: E. Elgar Pub. Co.
Raffer K. and Singer H.W. (1996), 'The Foreign Aid Business. Economic Assistance and Development Cooperation' Cheltenham and Brookfield: Edward Elgar.
Tausch A. and Christian Ghymers (2006), 'From the "Washington" towards a "Vienna Consensus"? A quantitative analysis on globalization, development and global governance'. Hauppauge, New York: Nova Science.
External links
Fernand Braudel Center for the Study of Economies, Historical Systems and Civilizations (closed)
Review, A Journal of the Fernand Braudel Center
Institute for Research on World-Systems (IROWS), University of California, Riverside
World-Systems Archive
Working Papers in the World Systems Archive
World-Systems Archive Books
World-Systems Electronic Seminars
Preface to "ReOrient" by Andre Gunder Frank
Life course approach | The life course approach, also known as the life course perspective or life course theory, refers to an approach developed in the 1960s for analyzing people's lives within structural, social, and cultural contexts. It views one's life as a socially sequenced timeline and recognizes the importance of factors such as generational succession and age in shaping behavior and career. Development does not end at childhood, but instead extends through multiple life stages to influence life trajectory.
The origins of this approach can be traced back to pioneering studies of the 1920s such as William I. Thomas and Florian Znaniecki's The Polish Peasant in Europe and America and Karl Mannheim's essay on the "Problem of Generations".
Overview
The life course approach examines an individual's life history and investigates, for example, how early events influenced future decisions and events such as marriage and divorce, engagement in crime, or disease incidence. The primary factor promoting standardization of the life course was improvement in mortality rates brought about by the management of contagious and infectious diseases such as smallpox. A life course is defined as "a sequence of socially defined events and roles that the individual enacts over time". In particular, the approach focuses on the connection between individuals and the historical and socioeconomic context in which these individuals lived.
The method draws on observations from disciplines including history, sociology, demography, developmental psychology, biology, public health and economics. So far, empirical research from a life course perspective has not resulted in the development of a formal theory.
Glen Elder theorized the life course as based on five key principles: life-span development, human agency, historical time and geographic place, timing of decisions, and linked lives. As a concept, a life course is defined as "a sequence of socially defined events and roles that the individual enacts over time" (Giele and Elder 1998, p. 22). These events and roles do not necessarily proceed in a given sequence, but rather constitute the sum total of the person's actual experience. Thus the concept of life course implies age-differentiated social phenomena distinct from uniform life-cycle stages and the life span. Life span refers to duration of life and characteristics that are closely related to age but that vary little across time and place.
In contrast, the life course perspective elaborates the importance of time, context, process, and meaning on human development and family life (Bengtson and Allen 1993). The family is perceived as a micro social group within a macro social context—a "collection of individuals with shared history who interact within ever-changing social contexts across ever increasing time and space" (Bengtson and Allen 1993, p. 470). Aging and developmental change, therefore, are continuous processes that are experienced throughout life. As such, the life course reflects the intersection of social and historical factors with personal biography and development within which the study of family life and social change can ensue (Elder 1985; Hareven 1996).
Life course theory also has moved in a constructionist direction. Rather than taking time, sequence, and linearity for granted, in their book Constructing the Life Course, Jaber F. Gubrium and James A. Holstein (2000) take their point of departure from accounts of experience through time. This shifts the figure and ground of experience and its stories, foregrounding how time, sequence, linearity, and related concepts are used in everyday life. It presents a radical turn in understanding experience through time, moving well beyond the notion of a multidisciplinary paradigm, providing an altogether different paradigm from traditional time-centered approaches. Rather than concepts of time being the principal building blocks of propositions, concepts of time are analytically bracketed and become focal topics of research and constructive understanding.
The life course approach has been applied to topics such as the occupational health of immigrants, and retirement age. It has also become increasingly important in other areas such as in the role of childhood experiences affecting the behaviour of students later in life or physical activity in old age.
References
Further reading
Elder G. H. Jr & Giele J.Z. (2009). Life Course Studies. An Evolving Field. In Elder G. H. Jr & Giele J.Z. (Eds.), The Craft of Life Course Research (pp. 1–28). New York, London: The Guilford Press.
Levy, R., Ghisletta, P., Le Goff, J. M., Spini, D., & Widmer, E. (2005). Towards an Interdisciplinary Perspective on the Life Course. pp. 3–32. Elsevier.
Western Europe | Western Europe is the western region of Europe. The region's extent varies depending on context.
The concept of "the West" appeared in Europe in juxtaposition to "the East" and originally applied to the ancient Mediterranean world, the Roman Empire (both Western and Eastern), and medieval "Christendom". Beginning with the Renaissance and the Age of Discovery, roughly from the 15th century, the concept of Europe as "the West" slowly became distinguished from and eventually replaced the dominant use of "Christendom" as the preferred endonym within the area. By the Age of Enlightenment and the Industrial Revolution, the concepts of "Eastern Europe" and "Western Europe" were more regularly used. The distinctiveness of Western Europe became most apparent during the Cold War, when Europe was divided for 40 years by the Iron Curtain into the Western Bloc and Eastern Bloc, each characterised by distinct political and economical systems.
Historical divisions
Classical antiquity and medieval origins
Prior to the Roman conquest, a large part of Western Europe had adopted the newly developed La Tène culture. As the Roman domain expanded, a cultural and linguistic division appeared between the mainly Greek-speaking eastern provinces, which had formed the highly urbanised Hellenistic civilisation, and the western territories, which in contrast largely adopted the Latin language. This cultural and linguistic division was eventually reinforced by the later political east–west division of the Roman Empire. The Western Roman Empire and the Eastern Roman Empire controlled the two divergent regions between the 3rd and the 5th centuries.
The division between these two was enhanced during late antiquity and the Middle Ages by a number of events. The Western Roman Empire collapsed, starting the Early Middle Ages. By contrast, the Eastern Roman Empire, mostly known as the Greek or Byzantine Empire, survived and even thrived for another 1000 years. The rise of the Carolingian Empire in the west, and in particular the Great Schism between Eastern Orthodoxy and Roman Catholicism, enhanced the cultural and religious distinctiveness between Eastern and Western Europe.
After the conquest of the Byzantine Empire, center of the Eastern Orthodox Church, by the Muslim Ottoman Empire in the 15th century, and the gradual fragmentation of the Holy Roman Empire (which had replaced the Carolingian Empire), the division between Roman Catholic and Protestant became more important in Europe than that with Eastern Orthodoxy.
In East Asia, Western Europe was historically known in both China and Japan by terms literally meaning the "Far West". The term Far West became synonymous with Western Europe in China during the Ming dynasty. The Italian Jesuit priest Matteo Ricci was one of the first writers in China to use the Far West as an Asian counterpart to the European concept of the Far East. In Ricci's writings, Ricci referred to himself as "Matteo of the Far West". The term was still in use in the late 19th and early 20th centuries.
Religion
Christianity is the largest religion in Western Europe. According to a 2018 study by the Pew Research Center, 71.0% of Western Europeans identified as Christians.
In 1054, the East–West Schism divided Christianity into Western Christianity and Eastern Christianity. This split Europe in two, with Western Europe primarily under the Catholic Church, and Eastern Europe primarily under the Eastern Orthodox Church. Ever since the Reformation in the 16th century, Protestantism has also been a major denomination in Europe, with Eastern Protestant and Eastern Catholic denominations also emerging in Central and Eastern Europe.
Cold War
During the four decades of the Cold War, the definition of East and West was simplified by the existence of the Eastern Bloc. A number of historians and social scientists view the Cold War definition of Western and Eastern Europe as outdated or of limited relevance.
During the final stages of World War II, the future of Europe was decided between the Allies in the 1945 Yalta Conference, between the British Prime Minister, Winston Churchill, the U.S. President, Franklin D. Roosevelt, and the Premier of the Soviet Union, Joseph Stalin.
Post-war Europe was divided into two major spheres: the Western Bloc, influenced by the United States, and the Eastern Bloc, influenced by the Soviet Union. With the onset of the Cold War, Europe was divided by the Iron Curtain. This term had been used during World War II by German Propaganda Minister Joseph Goebbels and, later, Count Lutz Schwerin von Krosigk in the last days of the war; however, its use was hugely popularised by Winston Churchill, who used it in his famous "Sinews of Peace" address on 5 March 1946 at Westminster College in Fulton, Missouri:

"From Stettin in the Baltic to Trieste in the Adriatic, an iron curtain has descended across the Continent."
Although some countries were officially neutral, they were classified according to the nature of their political and economic systems. This division largely defines the popular perception and understanding of Western Europe and its borders with Eastern Europe.
The world changed dramatically with the fall of the Iron Curtain in 1989. West Germany peacefully absorbed East Germany, in the German reunification. Comecon and the Warsaw Pact were dissolved, and in 1991, the Soviet Union ceased to exist. Several countries which had been part of the Soviet Union regained full independence.
Western European Union
In 1948 the Treaty of Brussels was signed between Belgium, France, Luxembourg, the Netherlands and the United Kingdom. It was further revisited in 1954 at the Paris Conference, when the Western European Union was established. It was declared defunct in 2011 after the Treaty of Lisbon, and the Treaty of Brussels was terminated. When the Western European Union was dissolved, it had 10 member countries. Additionally, it had 6 associate member countries, 7 associate partner countries and 5 observer countries.
Modern divisions
UN geoscheme classification
The United Nations geoscheme is a system devised by the United Nations Statistics Division (UNSD) which divides the countries of the world into regional and subregional groups, based on the M49 coding classification. The partition is for statistical convenience and does not imply any assumption regarding political or other affiliation of countries or territories.
In the UN geoscheme, the following countries are classified as Western Europe:
Austria
Belgium
France
Germany
Liechtenstein
Luxembourg
Monaco
Netherlands
Switzerland
CIA classification
The CIA classifies seven countries as belonging to "Western Europe":
Belgium
France
Ireland
Luxembourg
Monaco
Netherlands
United Kingdom
The CIA also classifies three countries as belonging to "Southwestern Europe":
Andorra
Portugal
Spain
EuroVoc classification
EuroVoc is a multilingual thesaurus maintained by the Publications Office of the European Union. In this thesaurus, the countries of Europe are grouped into sub-regions. The following countries are included in the sub-group Western Europe:
Andorra
Austria
Belgium
France
Germany
Ireland
Liechtenstein
Luxembourg
Monaco
Netherlands
Switzerland
United Kingdom
UN regional groups: Western European and Others Group
The Western European and Others Group is one of several unofficial Regional Groups in the United Nations that act as voting blocs and negotiation forums. Regional voting blocs were formed in 1961 to encourage the election of members to various UN bodies from different regional groups. The European members of the group are:
Andorra
Austria
Belgium
Cyprus
Denmark
Finland
France
Germany
Greece
Iceland
Ireland
Italy
Liechtenstein
Luxembourg
Malta
Monaco
Netherlands
Norway
Portugal
San Marino
Spain
Sweden
Switzerland
Turkey
United Kingdom
In addition, Australia, Canada, Israel and New Zealand are members of the group, with the United States as observer.
Population
Using the CIA classification strictly would give the following calculation of Western Europe's population. All figures are based on the projections for 2018 by the Population Division of the United Nations Department of Economic and Social Affairs.
Using the CIA classification a little more liberally and including "South-Western Europe" would give the following calculation of Western Europe's population.
Climate
The climate of Western Europe varies from Mediterranean in the coasts of Italy, Portugal and Spain to alpine in the Pyrenees and the Alps. The Mediterranean climate of the south is dry and warm. The western and northwestern parts have a mild, generally humid climate, influenced by the North Atlantic Current. Western Europe is a heatwave hotspot, exhibiting warming trends three to four times faster than the rest of the northern midlatitudes.
Languages
Western European languages mostly fall within two Indo-European language families: the Romance languages, descended from the Latin of the Roman Empire; and the Germanic languages, whose ancestor language (Proto-Germanic) came from southern Scandinavia.
Romance languages are spoken primarily in the southern and central part of Western Europe, Germanic languages in the northern part (the British Isles and the Low Countries), as well as a large part of Northern and Central Europe.
Other Western European languages include the Celtic group (that is, Irish, Scottish Gaelic, Manx, Welsh, Cornish and Breton) and Basque, the only currently living European language isolate.
Multilingualism and the protection of regional and minority languages are recognised political goals in Western Europe today. The Council of Europe Framework Convention for the Protection of National Minorities and the Council of Europe's European Charter for Regional or Minority Languages set up a legal framework for language rights in Europe.
Economy
Western Europe is one of the richest regions of the world. Germany has the highest gross domestic product in Europe, the largest financial surplus of any country, and the highest net national wealth of any European state, while Luxembourg has the world's highest GDP per capita.
Switzerland and Luxembourg have the highest average wage in the world, in nominal and PPP, respectively. Norway ranks highest in the world on the Social Progress Index.
See also
Central Europe
Eastern Europe
Northern Europe
Southern Europe
Far West
Marshall Plan
Western world
References
Citations
Sources
The Making of Europe, by Robert Bartlett
Crescent and Cross, by Hugh Bicheno
The Normans, by Trevor Rowley
1066: The Year of the Three Battles, by Frank McLynn
External links
The European sub-regions according to the UN
Teaching about Western Europe
Tertiary source | A tertiary source is an index or textual consolidation of already published primary and secondary sources that does not provide additional interpretations or analysis of the sources. Some tertiary sources can be used as an aid to find key (seminal) sources, key terms, general common knowledge and established mainstream science on a topic. The exact definition of tertiary varies by academic field.
Academic research standards generally do not accept tertiary sources such as encyclopedias as citations, although survey articles are frequently cited rather than the original publication.
Overlap with secondary sources
Depending on the topic of research, a scholar may use a bibliography, dictionary, or encyclopedia as either a tertiary or a secondary source. This causes some difficulty in defining many sources as either one type or the other.
In some academic disciplines, the differentiation between a secondary and tertiary source is relative.
In the United Nations International Scientific Information System (UNISIST) model, a secondary source is a bibliography, whereas a tertiary source is a synthesis of primary sources.
Types of tertiary sources
As tertiary sources, encyclopedias, dictionaries, some textbooks, and compendia attempt to summarize, collect, and consolidate the source materials into an overview without adding analysis and synthesis of new conclusions.
Indexes, bibliographies, concordances, and databases are aggregates of primary and secondary sources and therefore often considered tertiary sources. They may also serve as a point of access to the full or partial text of primary and secondary sources. Almanacs, travel guides, field guides, and timelines are also examples of tertiary sources.
Wikipedia is a tertiary source.
See also
Source text
Third-party source
References
History of slavery | The history of slavery spans many cultures, nationalities, and religions from ancient times to the present day. Likewise, its victims have come from many different ethnicities and religious groups. The social, economic, and legal positions of slaves have differed vastly in different systems of slavery in different times and places.
Slavery has been found in some hunter-gatherer populations, particularly as hereditary slavery, but the conditions of agriculture with increasing social and economic complexity offer greater opportunity for mass chattel slavery. Slavery was institutionalized by the time the first civilizations emerged (such as Sumer in Mesopotamia, which dates back as far as 3500 BC). Slavery features in the Mesopotamian Code of Hammurabi (c. 1750 BC), which refers to it as an established institution.
Slavery was widespread in the ancient world in Europe, Asia, the Middle East, and Africa.
Slavery became less common throughout Europe during the Early Middle Ages but continued to be practiced in some areas. Both Christians and Muslims captured and enslaved each other during centuries of warfare in the Mediterranean and Europe. Islamic slavery encompassed mainly Western and Central Asia, Northern and Eastern Africa, India, and Europe from the 7th to the 20th century. Islamic law approved of enslavement of non-Muslims, and slaves were trafficked from non-Muslim lands: from the North via the Balkan slave trade and the Crimean slave trade; from the East via the Bukhara slave trade; from the West via Andalusian slave trade; and from the South via the Trans-Saharan slave trade, the Red Sea slave trade and the Indian Ocean slave trade.
Beginning in the 16th century, European merchants, starting mainly with merchants from Portugal, initiated the transatlantic slave trade. Few traders ventured far inland, attempting to avoid tropical diseases and violence. They mostly purchased imprisoned Africans (and exported commodities including gold and ivory) from West African kingdoms, transporting them to Europe's colonies in the Americas. The merchants were sources of desired goods including guns, gunpowder, copper manillas, and cloth, and this demand for imported goods drove local wars and other means of enslavement, delivering Africans into slavery in ever greater numbers. In India and throughout the New World, people were forced into slavery to create the local workforce. The transatlantic slave trade was eventually curtailed after European and American governments passed legislation abolishing their nations' involvement in it. Practical efforts to enforce the abolition of slavery included the British Preventative Squadron and the American African Slave Trade Patrol, the abolition of slavery in the Americas, and the widespread imposition of European political control in Africa.
In modern times, human trafficking remains an international problem. Slavery in the 21st century continues and generates an estimated $150 billion in annual profits. Populations in regions with armed conflict are especially vulnerable, and modern transportation has made human trafficking easier. In 2019, there were an estimated 40.3 million people worldwide subject to some form of slavery, 25% of them children. Of these, 24.9 million were used for forced labor, mostly in the private sector, and 15.4 million lived in forced marriages. Forms of slavery include domestic labour, forced labour in manufacturing, fishing, mining and construction, and sexual slavery.
Prehistoric and ancient slavery
Evidence of slavery predates written records; the practice has existed in many cultures and can be traced back 11,000 years, to the conditions created by the invention of agriculture during the Neolithic Revolution. Economic surpluses and high population densities were conditions that made mass slavery viable.
Slavery occurred in civilizations including ancient Egypt, ancient China, the Akkadian Empire, Assyria, Babylonia, Persia, ancient Israel, ancient Greece, ancient India, the Roman Empire, the Arab Islamic Caliphates and Sultanates, Nubia, the pre-colonial empires of Sub-Saharan Africa, and the pre-Columbian civilizations of the Americas. Ancient slavery consists of a mixture of debt-slavery, punishment for crime, prisoners of war, child abandonment, and children born to slaves.
Africa
Writing in 1984, French historian Fernand Braudel noted that slavery had been endemic in Africa and part of the structure of everyday life throughout the 15th to the 18th century. "Slavery came in different guises in different societies: there were court slaves, slaves incorporated into princely armies, domestic and household slaves, slaves working on the land, in industry, as couriers and intermediaries, even as traders". During the 16th century, Europe began to outpace the Arab world in the export traffic, with its trafficking of slaves from Africa to the Americas. The Dutch imported slaves from Asia into their colony at the Cape of Good Hope (now Cape Town) in the 17th century. In 1807 Britain (which already held a small coastal territory, intended for the resettlement of former slaves, in Freetown, Sierra Leone) made the slave trade within its empire illegal with the Slave Trade Act 1807, and worked to extend the prohibition to other territory, as did the United States in 1808.
In Senegambia, between 1300 and 1900, close to one-third of the population was enslaved. In the early Islamic states of the Western Sudan, including Ghana (750–1076), Mali (1235–1645), Segou (1712–1861), and Songhai (1275–1591), about a third of the population was enslaved, as was the case in Bonoman, the earliest Akan state, in the 17th century. In Sierra Leone in the 19th century about half of the population consisted of slaves. In the 19th century at least half the population was enslaved among the Duala of the Cameroon, the Igbo and other peoples of the lower Niger, the Kongo, and the Kasanje kingdom and Chokwe of Angola. Among the Ashanti, the Yoruba, and the Bono, a third of the population consisted of slaves. The population of Kanem was about one-third enslaved, and perhaps 40% in Bornu (1396–1893). Between 1750 and 1900 from one- to two-thirds of the entire population of the Fulani jihad states consisted of slaves. The population of the Sokoto caliphate, formed by Hausas in northern Nigeria and Cameroon, was half-slave in the 19th century. It is estimated that up to 90% of the population of Arab-Swahili Zanzibar was enslaved. Roughly half the population of Madagascar was enslaved.
Slavery in Ethiopia persisted until 1942. The Anti-Slavery Society estimated that there were 2,000,000 slaves in the early 1930s, out of an estimated population of between 8 and 16 million. It was finally abolished by order of emperor Haile Selassie on 26 August 1942.
When British rule was first imposed on the Sokoto Caliphate and the surrounding areas in northern Nigeria at the turn of the 20th century, approximately 2 million to 2.5 million people living there were enslaved. Slavery in northern Nigeria was finally outlawed in 1936.
Writing in 1998 about the extent of trade coming through and from Africa, the Congolese journalist Elikia M'bokolo wrote "The African continent was bled of its human resources via all possible routes. Across the Sahara, through the Red Sea, from the Indian Ocean ports and across the Atlantic. At least ten centuries of slavery for the benefit of the Muslim countries (from the ninth to the nineteenth)." He continues: "Four million slaves exported via the Red Sea, another four million through the Swahili ports of the Indian Ocean, perhaps as many as nine million along the trans-Saharan caravan route, and eleven to twenty million (depending on the author) across the Atlantic Ocean."
Sub-Saharan Africa
Zanzibar was once East Africa's main slave-trading port, during the Indian Ocean slave trade and under Omani Arabs in the 19th century, with as many as 50,000 slaves passing through the city each year.
Prior to the 16th century, the bulk of slaves exported from Africa were shipped from East Africa to the Arabian peninsula. Zanzibar became a leading port in this trade. Arab traders of slaves differed from European ones in that they would often conduct raiding expeditions themselves, sometimes penetrating deep into the continent. They also differed in that their market greatly preferred the purchase of enslaved females over males.
The increased presence of European rivals along the East coast led Arab traders to concentrate on the overland slave caravan routes across the Sahara from the Sahel to North Africa. The German explorer Gustav Nachtigal reported seeing slave caravans departing from Kukawa in Bornu bound for Tripoli and Egypt in 1870. The trade of slaves represented the major source of revenue for the state of Bornu as late as 1898. The eastern regions of the Central African Republic have never recovered demographically from the impact of 19th-century raids from the Sudan and still have a population density of less than 1 person/km2. During the 1870s, European initiatives against the trade of slaves caused an economic crisis in northern Sudan, precipitating the rise of Mahdist forces. Mahdi's victory created an Islamic state, one that quickly reinstituted slavery.
The Middle Passage, the crossing of the Atlantic to the Americas, endured by slaves laid out in rows in the holds of ships, was only one element of the well-known triangular trade engaged in by Portuguese, American, Dutch, Danish-Norwegians, French, British and others. Ships having landed with slaves in Caribbean ports would take on sugar, indigo, raw cotton, and later coffee, and make for Liverpool, Nantes, Lisbon or Amsterdam. Ships leaving European ports for West Africa would carry printed cotton textiles, some originally from India, copper utensils and bangles, pewter plates and pots, iron bars more valued than gold, hats, trinkets, gunpowder and firearms and alcohol. Tropical shipworms were eliminated in the cold Atlantic waters, and at each unloading, a profit was made.
The Atlantic slave trade peaked in the late 18th century, when the largest number of people were captured and enslaved on raiding expeditions into the interior of West Africa. These expeditions were typically carried out by African states, such as the Bono State, Oyo empire (Yoruba), Kong Empire, Kingdom of Benin, Imamate of Futa Jallon, Imamate of Futa Toro, Kingdom of Koya, Kingdom of Khasso, Kingdom of Kaabu, Fante Confederacy, Ashanti Confederacy, Aro Confederacy and the kingdom of Dahomey. Europeans rarely entered the interior of Africa, due to fear of disease and fierce African resistance. The slaves were brought to coastal outposts where they were traded for goods. The people captured on these expeditions were shipped by European traders to the colonies of the New World. It is estimated that over the centuries, twelve to twenty million slaves were shipped from Africa by European traders, of whom some 15 percent died during the arduous journey through the Middle Passage. The great majority were shipped to the Americas, but some also went to Europe and Southern Africa.
While travelling in the African Great Lakes Region in 1866, David Livingstone described in his journals a trail of slaves in East Africa:
19th June 1866 – We passed a woman tied by the neck to a tree and dead, the people of the country explained that she had been unable to keep up with the other slaves in a gang, and her master had determined that she should not become anyone's property if she recovered.

26th June – ...We passed a slave woman shot or stabbed through the body and lying on the path: a group of men stood about a hundred yards off on one side, and another of the women on the other side, looking on; they said an Arab who passed early that morning had done it in anger at losing the price he had given for her, because she was unable to walk any longer.
27th June 1866 – To-day we came upon a man dead from starvation, as he was very thin. One of our men wandered and found many slaves with slave-sticks on, abandoned by their masters from want of food; they were too weak to be able to speak or say where they had come from; some were quite young.
African participation in the slave trade
African states played a key role in the slave trade, and slavery was a common practice among Sub-Saharan Africans even before the involvement of the Arabs, Berbers and Europeans. There were three types of slaves: those enslaved through conquest, those enslaved for unpaid debts, and those whose parents gave them as property to tribal chiefs. Chieftains would barter their slaves to Arab, Berber, Ottoman or European buyers for rum, spices, cloth or other goods. Selling captives or prisoners was a common practice among Africans, Turks, Berbers and Arabs during that era. However, as the Atlantic slave trade increased demand, local systems which had primarily serviced indentured servitude expanded. European slave trading, as a result, was the most pivotal change in the social, economic, cultural, spiritual, religious and political dynamics of the trade in slaves. It ultimately undermined local economies and political stability as villages' vital labour forces were shipped overseas and slave raids and civil wars became commonplace. Crimes which were previously punishable by some other means became punishable by enslavement.
Slavery already existed in the Kingdom of Kongo prior to the arrival of the Portuguese. Because it had been established within his kingdom, Afonso I of Kongo believed that the slave trade should be subject to Kongo law. When he suspected the Portuguese of illegally receiving enslaved people to sell, he wrote letters to the King João III of Portugal in 1526 imploring him to put a stop to the practice.
The kings of Dahomey sold their war captives, who otherwise may have been killed in a ceremony known as the Annual Customs, into transatlantic slavery. As one of West Africa's principal slave states, Dahomey became extremely unpopular with neighbouring peoples. Like the Bambara Empire to the east, the Khasso kingdoms depended heavily on the slave trade for their economy. A family's status was indicated by the number of slaves it owned, leading to wars for the sole purpose of taking more captives. This trade led the Khasso into increasing contact with the European settlements of Africa's west coast, particularly the French. Benin grew increasingly rich during the 16th and 17th centuries on the trade of slaves with Europe; slaves from enemy states of the interior were sold, and carried to the Americas in Dutch and Portuguese ships. The Bight of Benin's shore soon came to be known as the "Slave Coast".
In the 1840s, King Gezo of Dahomey said:
"The slave trade is the ruling principle of my people. It is the source and the glory of their wealth...the mother lulls the child to sleep with notes of triumph over an enemy reduced to slavery."
In 1807 the United Kingdom made the international trade of slaves illegal with the Slave Trade Act. The Royal Navy was deployed to intercept slavers from the United States, France, Spain, Portugal, Holland, West Africa and Arabia. The King of Bonny (now in Nigeria) allegedly became dissatisfied with the British intervention in stopping the trade of slaves:
"We think this trade must go on. That is the verdict of our oracle and the priests. They say that your country, however great, can never stop a trade ordained by God himself."
Joseph Miller states that African buyers would prefer males, but in reality, women and children were more easily captured, as men fled. Those captured would be sold for various reasons such as food, debts, or servitude. Once captured, the journey to the coast killed many and weakened others. Disease engulfed many, and insufficient food damaged those who made it to the coasts. Scurvy was common, and was often referred to as mal de Luanda ("Luanda sickness", after the port in Angola). It is assumed that most of those who died on the journey died from malnutrition. As food was limited, water supplies may have been just as bad. Dysentery was widespread, and poor sanitary conditions at ports did not help. Since supplies were poor, slaves were not equipped with adequate clothing, leaving them even more exposed to disease.
On top of the fear of disease, people were afraid of why they were being captured. The popular assumption was that Europeans were cannibals. Stories and rumours spread that whites captured Africans to eat them. Olaudah Equiano recounts his experience of the sorrow slaves encountered at the ports. He describes his first moment on a slave ship, when he asked if he was going to be eaten. Yet the worst for slaves had only begun, and the journey on the water proved to be even more harrowing. For every 100 Africans captured, only 64 would reach the coast, and only about 50 would reach the New World.
Others believe that slavers had a vested interest in capturing rather than killing, and in keeping their captives alive; and that this coupled with the disproportionate removal of males and the introduction of new crops from the Americas (cassava, maize) would have limited general population decline to particular regions of western Africa around 1760–1810, and in Mozambique and neighbouring areas half a century later. There has also been speculation that within Africa, females were most often captured as brides, with their male protectors being a "bycatch" who would have been killed if there had not been an export market for them.
British explorer Mungo Park encountered a group of slaves when traveling through Mandinka country.
During the period from the late 19th century and early 20th century, demand for the labour-intensive harvesting of rubber drove frontier expansion and forced labour. The personal monarchy of Belgian King Leopold II in the Congo Free State saw mass killings and slavery to extract rubber.
Africans on ships
Surviving the voyage was the main struggle. Close quarters meant everyone was infected by any diseases that spread, including the crew. Death was so common that ships were called tumbeiros, or floating tombs. What shocked Africans the most was how death was handled in the ships. Smallwood says the traditions for an African death were delicate and community-based. On ships, bodies would be thrown into the sea. Because the sea represented bad omens, bodies in the sea represented a form of purgatory and the ship a form of hell. Any Africans who made the journey would have survived extreme disease and malnutrition, as well as trauma from being on the open ocean and the death of their friends.
North Africa
In Algiers during the time of the Regency of Algiers in North Africa in the 19th century, up to 1.5 million Christians and Europeans were captured and forced into slavery. This eventually led to the Bombardment of Algiers in 1816 by the British and Dutch, forcing the Dey of Algiers to free many slaves.
Modern times
The trading of children has been reported in modern Nigeria and Benin. In parts of Ghana, a family may be punished for an offense by having to turn over a virgin female to serve as a sex slave within the offended family. In this instance, the woman does not gain the title or status of "wife". In parts of Ghana, Togo, and Benin, shrine slavery persists, despite being illegal in Ghana since 1998. In this system of ritual servitude, sometimes called trokosi (in Ghana) or voodoosi in Togo and Benin, young virgin girls are given as slaves to traditional shrines and are used sexually by the priests in addition to providing free labor for the shrine.
An article in the Middle East Quarterly in 1999 reported that slavery is endemic in Sudan. During the Second Sudanese Civil War people were taken into slavery; estimates of abductions range from 14,000 to 200,000. Abduction of Dinka women and children was common. In Mauritania it is estimated that up to 600,000 men, women and children, or 20% of the population, are currently enslaved, many of them used as bonded labor. Slavery in Mauritania was criminalized in August 2007.
During the Darfur conflict that began in 2003, many people were kidnapped by Janjaweed and sold into slavery as agricultural labor, domestic servants and sex slaves.
In Niger, slavery is also a current phenomenon. A Nigerien study has found that more than 800,000 people are enslaved, almost 8% of the population. Niger installed an anti-slavery provision in 2003. In a landmark ruling in 2008, the ECOWAS Community Court of Justice declared that the Republic of Niger failed to protect Hadijatou Mani Koraou from slavery, and awarded Mani CFA 10,000,000 in reparations.
Sexual slavery and forced labor are common in the Democratic Republic of Congo.
Many pygmies in the Republic of Congo and Democratic Republic of Congo belong from birth to Bantus in a system of slavery.
Evidence emerged in the late 1990s of systematic slavery in cacao plantations in West Africa; see the chocolate and slavery article.
According to the U.S. State Department, more than 109,000 children were working on cocoa farms alone in Ivory Coast in "the worst forms of child labour" in 2002.
On the night of 14–15 April 2014, a group of militants attacked the Government Girls Secondary School in Chibok, Nigeria. They broke into the school, pretending to be guards, telling the girls to get out and come with them. A large number of students were taken away in trucks, possibly into the Konduga area of the Sambisa Forest where Boko Haram were known to have fortified camps. Houses in Chibok were also burned down in the incident. According to police, approximately 276 children were taken in the attack, of whom 53 had escaped as of 2 May. Other reports said that 329 girls were kidnapped, 53 had escaped and 276 were still missing. The students have been forced to convert to Islam and into marriage with members of Boko Haram, with a reputed "bride price" of ₦2,000 each ($12.50/£7.50). Many of the students were taken to the neighbouring countries of Chad and Cameroon, with sightings reported of the students crossing borders with the militants, and sightings of the students by villagers living in the Sambisa Forest, which is considered a refuge for Boko Haram.
On 5 May 2014 a video in which Boko Haram leader Abubakar Shekau claimed responsibility for the kidnappings emerged. Shekau claimed that "Allah instructed me to sell them...I will carry out his instructions" and "[s]lavery is allowed in my religion, and I shall capture people and make them slaves." He said the girls should not have been in school and instead should have been married since girls as young as nine are suitable for marriage.
Libyan slave trade
During the Second Libyan Civil War, Libyans started capturing some of the Sub-Saharan African migrants trying to get to Europe through Libya and selling them on slave markets. Slaves are often ransomed to their families; until the ransom can be paid, they may be tortured, forced to work, sometimes worked to death, and may eventually be executed or left to starve if the payment has not been made after a period of time. Women are often raped and used as sex slaves and sold to brothels.
Many child migrants also suffer from abuse and child rape in Libya.
Americas
To participate in the slave trade in Spanish America, bankers and trading companies had to pay the Spanish king for the license, called the Asiento de Negros, but an unknown amount of the trade was illegal. After 1670, when the Spanish Empire declined substantially, the crown outsourced part of the slave trade to the Dutch (1685–1687), the Portuguese, the French (1698–1713) and the English (1713–1750), also providing organized depots in the Caribbean islands to the Dutch, British and French America. As a result of the War of the Spanish Succession (1701–1714), the British government obtained the monopoly (asiento de negros) of selling African slaves in Spanish America, which was granted to the South Sea Company. Meanwhile, slave trading became a core business for privately owned enterprises in the Americas.
Among indigenous peoples
In Pre-Columbian Mesoamerica the most common forms of slavery were those of prisoners of war and debtors. People unable to pay back debts could be sentenced to work as slaves for their creditors until the debts were worked off, a form of indentured servitude. Warfare was important to Maya society, because raids on surrounding areas provided the victims required for human sacrifice, as well as slaves for the construction of temples. Most victims of human sacrifice were prisoners of war or slaves. Slavery was not usually hereditary; children of slaves were born free. In the Inca Empire, workers were subject to a mita instead of taxes, which they paid by working for the government. Each ayllu, or extended family, would decide which family member to send to do the work. It is unclear whether this labor draft or corvée counts as slavery. The Spanish adopted this system, particularly for their silver mines in Bolivia.
Other slave-owning societies and tribes of the New World included the Tehuelche of Patagonia, the Comanche of Texas, the Caribs of Dominica, the Tupinambá of Brazil, the Pawnee, the Klamath, and fishing societies such as the Yurok that lived along the west coast of North America from what is now Alaska to California. Many of the indigenous peoples of the Pacific Northwest Coast, such as the Haida and Tlingit, were traditionally known as fierce warriors and slave-traders, raiding as far as California. Slavery was hereditary, the slaves being prisoners of war and their descendants. Among some Pacific Northwest tribes, about a quarter of the population was enslaved. One slave narrative was composed by an Englishman, John R. Jewitt, who had been taken alive when his ship was captured in 1802; his memoir provides a detailed look at life as a slave, and asserts that a large number were held.
Brazil
Slavery was a mainstay of the Brazilian colonial economy, especially in mining and sugarcane production; 35.3% of all slaves from the Atlantic slave trade went to Colonial Brazil. Some 4 million slaves were brought to Brazil, 1.5 million more than to any other country. Starting around 1550, the Portuguese began to trade enslaved Africans to work the sugar plantations as the enslaved native Tupi population declined. Although Portuguese Prime Minister Sebastião José de Carvalho e Melo, 1st Marquis of Pombal, prohibited the importation of slaves into Continental Portugal on 12 February 1761, slavery continued in its overseas colonies. Slavery was practiced among all classes: slaves were owned by the upper and middle classes, by the poor, and even by other slaves.
From São Paulo, the Bandeirantes, adventurers mostly of mixed Portuguese and native ancestry, penetrated steadily westward in their search for Indians to enslave. Along the Amazon River and its major tributaries, repeated slaving raids and punitive attacks left their mark. One French traveler in the 1740s described hundreds of miles of river banks with no sign of human life and once-thriving villages that were devastated and empty. In some areas of the Amazon Basin, and particularly among the Guarani of southern Brazil and Paraguay, the Jesuits had organized their Jesuit Reductions along military lines to fight the slavers. In the mid-to-late 19th century, many Amerindians were enslaved to work on rubber plantations.
Resistance and abolition
Slaves who escaped formed Maroon communities, which played an important role in the histories of Brazil and other countries such as Suriname, Puerto Rico, Cuba, and Jamaica. In Brazil, the Maroon villages were called quilombos (in Spanish America, palenques). Maroons survived by growing vegetables and hunting. They also raided plantations; in these attacks, the Maroons would burn crops, steal livestock and tools, kill slavemasters, and invite other slaves to join their communities.
Jean-Baptiste Debret, a French painter who was active in Brazil in the first decades of the 19th century, started out painting portraits of members of the Brazilian Imperial family, but soon became concerned with the slavery of both blacks and indigenous inhabitants. His paintings on the subject helped draw attention to the issue in both Europe and Brazil itself.
The Clapham Sect, a group of evangelical reformers, campaigned during much of the 19th century for Britain to use its influence and power to stop the traffic of slaves to Brazil. Besides moral qualms, the low cost of slave-produced Brazilian sugar meant that the British West Indies were unable to match the market prices of Brazilian sugar, and each Briton was consuming 16 pounds (7 kg) of sugar a year by the 19th century. This combination led to intensive pressure from the British government for Brazil to end this practice, which it did by steps over several decades.
First, the foreign slave trade was banned in 1850. Then, in 1871, children born to slaves were declared free. In 1885, slaves aged over 60 years were freed. The Paraguayan War contributed to ending slavery, as many slaves enlisted in exchange for freedom. In Colonial Brazil, slavery was more a social than a racial condition. Some of the greatest figures of the time, such as the writer Machado de Assis and the engineer André Rebouças, had black ancestry.
Brazil's 1877–78 Grande Seca (Great Drought) in the cotton-growing northeast led to major turmoil, starvation, poverty and internal migration. As wealthy plantation holders rushed to sell their slaves south, popular resistance and resentment grew, inspiring numerous emancipation societies. They succeeded in banning slavery altogether in the province of Ceará by 1884. Slavery was legally ended nationwide on 13 May 1888 by the Lei Áurea ("Golden Law"). By then it was an institution in decline, as since the 1880s the country had increasingly turned to European immigrant labor instead. Brazil was the last nation in the Western Hemisphere to abolish slavery.
British and French Caribbean
Slavery was commonly used in the parts of the Caribbean controlled by France and the British Empire. The Lesser Antilles islands of Barbados, St. Kitts, Antigua, Martinique and Guadeloupe, which were the first important slave societies of the Caribbean, began the widespread use of enslaved Africans by the end of the 17th century, as their economies converted to sugar production.
England had multiple sugar colonies in the Caribbean, especially Jamaica, Barbados, Nevis, and Antigua, which provided a steady flow of sugar sales; the sugar was produced by the forced labor of slaves. By the 1700s, there were more slaves in Barbados than in all the English colonies on the mainland combined. Since Barbados did not have many mountains, English planters were able to clear land for sugarcane. Indentured servants were initially sent to Barbados to work in the sugar fields, but they were treated so poorly that prospective indentured servants stopped going, leaving too few people to work the fields. It was at this point that the British began bringing in enslaved Africans. For the English planters in Barbados, reliance on enslaved labor was necessary to profit from cane sugar production for the growing market for sugar in Europe and elsewhere.
In the Treaty of Utrecht, which ended the War of the Spanish Succession (1701–1714), the European powers negotiating the terms of the treaty also discussed colonial issues. Of special importance at Utrecht was the successful negotiation between the British and French delegations for Britain to obtain a thirty-year monopoly on the right to sell slaves in Spanish America, the Asiento de Negros. Queen Anne also allowed her North American colonies such as Virginia to make laws that promoted the importation of slaves. Anne had secretly negotiated with France to get its approval regarding the Asiento. In 1712, she delivered a speech publicly announcing her success in taking the Asiento away from France; many London merchants celebrated her economic coup. Most of the slave trade involved sales to Spanish colonies in the Caribbean and to Mexico, as well as sales to European colonies in the Caribbean and in North America. Historian Vinita Ricks says the agreement allotted Queen Anne "22.5% (and King Philip V of Spain 28%) of all profits collected for the Asiento monopoly". Ricks concludes that the Queen's "connection to slave trade revenue meant that she was no longer a neutral observer. She had a vested interest in what happened on slave ships."
By 1778, the French were importing approximately 13,000 Africans for enslavement yearly to the French West Indies.
To regularise slavery, Louis XIV had enacted the Code Noir in 1685, a slave code that accorded certain human rights to slaves and responsibilities to the master, who was obliged to feed, clothe and provide for the general well-being of his human property. Free people of color owned one-third of the plantation property and one-quarter of the slaves in Saint-Domingue (later Haiti). Slavery in the First Republic was abolished on 4 February 1794. When it became clear that Napoleon intended to re-establish slavery in Saint-Domingue, Jean-Jacques Dessalines and Alexandre Pétion switched sides in October 1802. On 1 January 1804, Dessalines, the new leader under the dictatorial 1801 constitution, declared Haiti a free republic. Thus Haiti became the second independent nation in the Western Hemisphere, after the United States, as a result of the only successful slave rebellion in world history.
Whitehall in England announced in 1833 that slaves in British colonies would be completely freed by 1838. In the meantime, the government told slaves they had to remain on their plantations and would have the status of "apprentices" for the next six years.
In Port-of-Spain, Trinidad, on 1 August 1834, an unarmed group of mainly elderly Negroes being addressed by the Governor at Government House about the new laws, began chanting: "Pas de six ans. Point de six ans" ("Not six years. No six years"), drowning out the voice of the Governor. Peaceful protests continued until a resolution to abolish apprenticeship was passed and de facto freedom was achieved. Full emancipation for all was legally granted ahead of schedule on 1 August 1838, making Trinidad the first British colony with slaves to completely abolish slavery.
After Great Britain abolished slavery, it began to pressure other nations to do the same. France, too, abolished slavery. By then Saint-Domingue had already won its independence and formed the independent Republic of Haiti, though France still controlled Guadeloupe, Martinique and a few smaller islands.
Canada
Slavery in Canada was practised by First Nations and continued during the European colonization of Canada. It is estimated that there were 4,200 slaves in the French colony of Canada and later British North America between 1671 and 1831. Two-thirds of these were of indigenous ancestry (typically called panis), whereas the other third were of African descent. They were house servants and farm workers. The number of slaves of African descent increased during British rule, especially with the arrival of United Empire Loyalists after 1783. A small portion of Black Canadians today are descended from these slaves.
The practice of slavery in the Canadas ended through case law, dying out in the early 19th century through judicial actions litigated on behalf of slaves seeking manumission. The courts, to varying degrees, rendered slavery unenforceable in both Lower Canada and Nova Scotia. In Lower Canada, for example, after court decisions in the late 1790s, the "slave could not be compelled to serve longer than he would, and ... might leave his master at will." Upper Canada passed the Act Against Slavery in 1793, one of the earliest anti-slavery acts in the world. The institution was formally banned throughout most of the British Empire, including the Canadas, in 1834, after the passage of the Slavery Abolition Act 1833 in the British parliament. A number of Black people (free and enslaved) had moved to Canada from the United States after the American Revolution, known as the Black Loyalists, and again after the War of 1812, when a number of Black Refugees settled in Canada. During the mid-19th century, British North America served as a terminus for the Underground Railroad, a network of routes used by enslaved African-Americans to escape the slave states.
Latin America
During the period from the late 19th century and early 20th century, demand for the labor-intensive harvesting of rubber drove frontier expansion and slavery in Latin America and elsewhere. Indigenous peoples were enslaved as part of the rubber boom in Ecuador, Peru, Colombia, and Brazil. In Central America, rubber tappers participated in the enslavement of the indigenous Guatuso-Maleku people for domestic service.
United States
Early events
In late August 1619, the frigate White Lion, a privateer ship owned by Robert Rich, 2nd Earl of Warwick, but flying a Dutch flag, arrived at Point Comfort, Virginia (several miles downstream from the colony of Jamestown) with the first recorded slaves brought from Africa to Virginia. The approximately 20 Africans were from present-day Angola. They had been removed by the White Lion's crew from a Portuguese cargo ship, the São João Bautista.
Historians are undecided whether the legal practice of slavery began in the colony at that point, because at least some of the Africans had the status of indentured servant. Alden T. Vaughan says most agree that both black slaves and indentured servants existed by 1640.
Only a small fraction of the enslaved Africans brought to the New World came to British North America, perhaps as little as 5% of the total. The vast majority of slaves were sent to the Caribbean sugar colonies, Brazil, or Spanish America.
By the 1680s, with the consolidation of England's Royal African Company, enslaved Africans were arriving in English colonies in larger numbers, and the institution continued to be protected by the British government. Colonists now began purchasing slaves in larger numbers.
Slavery in American colonial law
1640: Virginia courts sentence John Punch to lifetime slavery, marking the earliest legal sanctioning of slavery in English colonies.
1641: Massachusetts legalizes slavery.
1650: Connecticut legalizes slavery.
1652: Rhode Island bans the enslavement or forced servitude of any white or negro for more than ten years or beyond the age of 24.
1654: Virginia sanctions "the right of Negros to own slaves of their own race" after African Anthony Johnson, former indentured servant, sued to have fellow African John Casor declared not an indentured servant but "slave for life."
1661: Virginia officially recognizes slavery by statute.
1662: A Virginia statute declares that children born in the colony take the status of their mother.
1663: Maryland legalizes slavery.
1664: Slavery is legalized in New York and New Jersey.
1670: Carolina (later, South Carolina and North Carolina) is founded mainly by planters from the overpopulated British sugar island colony of Barbados, who brought relatively large numbers of African slaves from that island.
1676: Rhode Island bans the enslavement of Native Americans.
Development of slavery
The shift from indentured servants to enslaved Africans was prompted by a dwindling class of former servants who had worked through the terms of their indentures and thus become competitors to their former masters. These newly freed servants were rarely able to support themselves comfortably, and the tobacco industry was increasingly dominated by large planters. This caused domestic unrest culminating in Bacon's Rebellion. Eventually, chattel slavery became the norm in regions dominated by plantations.
The Fundamental Constitutions of Carolina established a model in which a rigid social hierarchy placed slaves under the absolute authority of their master. With the rise of a plantation economy in the Carolina Lowcountry based on rice cultivation, a society of slaves was created that later became the model for the King Cotton economy across the Deep South. The model created by South Carolina was driven by the emergence of a majority enslaved population that required repressive and often brutal force to control. Justification for such an enslaved society developed into a conceptual framework of white supremacy in the American colonies.
Several local slave rebellions took place during the 17th and 18th centuries: Gloucester County, Virginia Revolt (1663); New York Slave Revolt of 1712; Stono Rebellion (1739); and New York Slave Insurrection of 1741.
Early United States law
Within the British Empire, the Massachusetts courts began to follow England, where the 1772 Somerset v Stewart ruling had held that slavery was unsupported by law; it was followed by the Knight v. Wedderburn decision in Scotland in 1778. Between 1764 and 1774, seventeen slaves appeared in Massachusetts courts to sue their owners for freedom. In 1766, John Adams' colleague Benjamin Kent won the first trial in the present-day United States to free a slave (Slew v. Whipple).
The Republic of Vermont's constitution of 1777 declared that adults "ought not" be enslaved, but the provision did not cover children and went unenforced. Vermont entered the United States in 1791 with the same constitutional provisions. Through the Northwest Ordinance of 1787 under the Congress of the Confederation, slavery was prohibited in the territories northwest of the Ohio River. In 1794, Congress banned American vessels from being used in the slave trade and banned the export of slaves from America to other countries. However, little effort was made to enforce this legislation: the slave ship owners of Rhode Island were able to continue the trade, and the USA's slaving fleet in 1806 was estimated to be nearly 75% as large as Britain's, dominating the transportation of slaves into Cuba. By 1804, abolitionists had succeeded in passing legislation that ended legal slavery in every northern state (with slaves above a certain age legally transformed into indentured servants). Congress passed an Act Prohibiting Importation of Slaves effective 1 January 1808, but it did not touch the internal slave trade.
Despite the actions of abolitionists, free blacks were subject to racial segregation in the Northern states. Although the United Kingdom did not ban slavery throughout most of the empire, including British North America, until 1833, free blacks found refuge in the Canadas after the American Revolutionary War and again after the War of 1812. Refugees from slavery fled the South across the Ohio River to the North via the Underground Railroad. Midwestern state governments asserted states' rights arguments to refuse federal jurisdiction over fugitives. Some juries exercised their right of jury nullification and refused to convict those indicted under the Fugitive Slave Act of 1850.
After the passage of the Kansas–Nebraska Act in 1854, armed conflict broke out in Kansas Territory, where the question of whether it would be admitted to the Union as a slave state or a free state had been left to the inhabitants. The radical abolitionist John Brown was active in the mayhem and killing in "Bleeding Kansas," but the true turning point in public opinion is better fixed at the Lecompton Constitution fraud. Pro-slavery elements from Missouri had arrived in Kansas first and quickly organized a territorial government that excluded abolitionists. Through the machinery of the territory and violence, the pro-slavery faction attempted to force the unpopular pro-slavery Lecompton Constitution on the state. This infuriated Northern Democrats, who supported popular sovereignty, and the anger was exacerbated by the Buchanan administration reneging on a promise to submit the constitution to a referendum, which it would surely have failed. Anti-slavery legislators took office under the banner of the newly formed Republican Party. The Supreme Court in the Dred Scott decision of 1857 asserted that one could take one's property anywhere, even if one's property was chattel and one crossed into a free territory. It also asserted that African Americans could not be federal citizens. Outraged critics across the North denounced these episodes as the latest proof that the Slave Power (the politically organized slave owners) was taking more control of the nation.
American Civil War
On the eve of the war, the enslaved population in the United States stood at four million. Ninety-five percent of blacks lived in the South, constituting one-third of the population there, as opposed to 1% of the population of the North. The central issue in politics in the 1850s was the extension of slavery into the western territories, which settlers from the Northern states opposed. The Whig Party split and collapsed on the slavery issue, to be replaced in the North by the new Republican Party, which was dedicated to stopping the expansion of slavery. Republicans gained a majority in every northern state by absorbing a faction of anti-slavery Democrats and warning that slavery was a backward system that undercut liberal democracy and economic modernization. Numerous compromise proposals were put forward, but they all collapsed. A majority of Northern voters were committed to stopping the expansion of slavery, which they believed would ultimately end it. Southern voters were overwhelmingly angry that they were being treated as second-class citizens. In the election of 1860, the Republicans swept Abraham Lincoln into the Presidency, and his party took control of the United States Congress. The states of the Deep South, convinced that the economic power of what they called "King Cotton" would overwhelm the North and win support from Europe, voted to secede from the U.S. (the Union). They formed the Confederate States of America, based on the promise of maintaining slavery. War broke out in April 1861, as both sides raised new regiments and armies amid waves of enthusiasm among young male volunteers. In the North, the main goal was to preserve the union as an expression of American nationalism.
Rebel leaders Jefferson Davis, Robert E. Lee, Nathan Bedford Forrest and others were slaveholders and slave-traders.
By 1862 most northern leaders realized that the mainstay of Southern secession, slavery, had to be attacked head-on. All the border states rejected President Lincoln's proposal for compensated emancipation. However, by 1865 all had begun the abolition of slavery, except Kentucky and Delaware. The Emancipation Proclamation was an executive order issued by Lincoln on 1 January 1863. In a single stroke, it changed the legal status, as recognized by the U.S. government, of 3 million slaves in designated areas of the Confederacy from "slave" to "free." It had the practical effect that as soon as a slave escaped the control of the Confederate government, by running away or through advances of the Union Army, the slave became legally and actually free. Plantation owners, realizing that emancipation would destroy their economic system, sometimes moved their human property as far as possible out of reach of the Union Army. By June 1865, the Union Army controlled all of the Confederacy and liberated all of the designated slaves. The owners were never compensated. About 186,000 free blacks and newly freed people fought for the Union in the Army and Navy, thereby validating their claims to full citizenship.
The dislocations of war and Reconstruction had a severe negative impact on the black population, with much sickness and death. After liberation, many of the Freedmen remained on the same plantation. Others fled or crowded into refugee camps operated by the Freedmen's Bureau. The Bureau provided food, housing, clothing, medical care, church services, some schooling, legal support, and arranged for labor contracts. Fierce debates about the rights of the Freedmen, and of the defeated Confederates, often accompanied by killings of black leaders, marked the Reconstruction Era, 1863–77.
Slavery was never reestablished, but after President Ulysses S. Grant left the White House in 1877, white-supremacist "Redeemer" Southern Democrats took control of all the southern states, and blacks lost nearly all the political power they had achieved during Reconstruction. By 1900 they had also lost the right to vote; they had become second-class citizens. The great majority lived in the rural South in poverty, working as laborers, sharecroppers or tenant farmers; a small proportion owned their own land. The black churches, especially the Baptist churches, were the center of community activity and leadership.
Asia
Slavery has existed throughout Asia, and forms of slavery still exist today. In the ancient Near East and Asia Minor, slavery was common practice, dating back to the very earliest recorded civilisations in the world such as Sumer, Elam, Ancient Egypt, Akkad, Assyria, Ebla and Babylonia, as well as amongst the Hattians, Hittites, Hurrians, Mycenaean Greeks, Luwians, Canaanites, Israelites, Amorites, Phoenicians, Arameans, Ammonites, Edomites, Moabites, Byzantines, Philistines, Medes, Phrygians, Lydians, Mitanni, Kassites, Parthians, Urartians, Colchians, Chaldeans and Armenians.
Slavery in the Middle East first developed out of the slavery practices of the Ancient Near East, and these practices were at times radically different, depending on social-political factors such as the Muslim slave trade. Two rough estimates by scholars of the number of slaves held over twelve centuries in Muslim lands are 11.5 million and 14 million.
Under Sharia (Islamic law), children of slaves or prisoners of war could become slaves, but only if they were non-Muslims, leading the Islamic world to import many slaves from other regions, predominantly Europe. Manumission of a slave was encouraged as a way of expiating sins. Many early converts to Islam, such as Bilal ibn Rabah al-Habashi, were poor and former slaves.
Byzantine Empire
Slavery played a notable role in the economy of the Byzantine Empire. Many slaves were sourced from wars within the Mediterranean and Europe, while others were obtained through trade with Vikings visiting the empire. Slavery's role in the economy and the power of slave owners slowly diminished as laws gradually improved the rights of slaves. Under the influence of Christianity, views of slavery shifted, leading to slaves gaining more rights and independence; although slavery became rare and was seen as evil by many citizens, it remained legal.
During the Arab–Byzantine wars many prisoners of war were ransomed into slavery while others took part in Arab–Byzantine prisoner exchanges. Exchanges of prisoners became a regular feature of the relations between the Byzantine Empire and the Abbasid Caliphate.
After the fall of the Byzantine Empire, thousands of Byzantine citizens were enslaved, with 30,000–50,000 enslaved by the Ottoman Empire after the Fall of Constantinople alone.
Ottoman Empire
Slavery was a legal and important part of the economy of the Ottoman Empire and Ottoman society until the slavery of Caucasians was banned in the early 19th century, although slaves from other groups were allowed. In Constantinople (present-day Istanbul), the administrative and political center of the Empire, about a fifth of the population consisted of slaves in 1609. Even after several measures to ban slavery in the late 19th century, the practice continued largely unaffected into the early 20th century. As late as 1908, female slaves were still sold in the Ottoman Empire. Sexual slavery was a central part of the Ottoman slave system throughout the history of the institution.
A member of the Ottoman slave class, called a kul in Turkish, could achieve high status. Harem guards and janissaries are some of the better-known positions a slave could hold, but slaves were actually often at the forefront of Ottoman politics. The majority of officials in the Ottoman government were bought slaves, raised as slaves of the Sultan, and integral to the success of the Ottoman Empire from the 14th century into the 19th. Many officials themselves owned a large number of slaves, although the Sultan himself owned by far the largest number. By raising and specially training slaves as officials in palace schools such as Enderun, the Ottomans created administrators with intricate knowledge of government and fanatic loyalty.
The Ottomans practiced devşirme, a sort of "blood tax" or "child collection", under which young Christian boys from the Balkans and Anatolia were taken from their homes and families, brought up as Muslims, and enlisted into the most famous branch of the kapıkulu, the Janissaries, a special soldier class of the Ottoman army that became a decisive faction in the Ottoman invasions of Europe.
During the various 18th and 19th century persecution campaigns against Christians as well as during the culminating Assyrian, Armenian and Greek genocides of World War I, many indigenous Armenian, Assyrian and Greek Christian women and children were carried off as slaves by the Ottoman Turks and their Kurdish allies. Henry Morgenthau, Sr., U.S. Ambassador in Constantinople from 1913 to 1916, reports in his Ambassador Morgenthau's Story that there were gangs trading white slaves during his term in Constantinople. He also reports that Armenian girls were sold as slaves during the Armenian Genocide.
According to Ronald Segal, the male:female gender ratio in the Atlantic slave trade was 2:1, whereas in Islamic lands the ratio was 1:2. Another difference between the two was, he argues, that slavery in the west had a racial component, whereas the Qur'an explicitly condemned racism. This, in Segal's view, eased assimilation of freed slaves into society. Men would often take their female slaves as concubines; in fact, most Ottoman sultans were sons of such concubines.
Ancient history
Ancient India
Scholars differ as to whether or not slaves and the institution of slavery existed in ancient India. These English words have no direct, universally accepted equivalent in Sanskrit or other Indian languages, but some scholars translate the word dasa, mentioned in texts like Manu Smriti, as "slave". Ancient historians who visited India offer the closest insights into the nature of Indian society and how it compared with slavery in other ancient civilizations; the Greek historian Arrian, for example, who chronicled India around the time of Alexander the Great, wrote on the subject in his Indika.
Ancient China
Qin dynasty (221–206 BC)
Men sentenced to castration became eunuch slaves of the Qin dynasty state and as a result were made to do forced labor on projects like the Terracotta Army. The Qin government confiscated the property and enslaved the families of those who received castration as a punishment for rape.
Slaves were deprived of their rights and connections to their families.
Han dynasty (206 BC – 220 AD)
One of Emperor Gao's first acts was to free agricultural workers who had been enslaved during the Warring States period, although domestic servants retained their status.
Men punished with castration during the Han dynasty were also used as slave labor.
Deriving from earlier Legalist laws, the Han dynasty set in place rules that the property and families of criminals sentenced to three years of hard labor or to castration were to be seized and kept as property by the government.
During the millennium-long Chinese domination of Vietnam, Vietnam was a great source of slave girls, who were used as sex slaves in China. The slave girls of Viet were even eroticized in Tang dynasty poetry.
The Tang dynasty purchased Western slaves from the Radhanite Jews. Tang Chinese soldiers and pirates enslaved Koreans, Turks, Persians, Indonesians, and people from Inner Mongolia, Central Asia, and northern India. The greatest source of slaves was the southern tribes, including Thais and aboriginals from the southern provinces of Fujian, Guangdong, Guangxi, and Guizhou. Malays, Khmers, Indians, and black Africans were also purchased as slaves in the Tang dynasty. Slavery remained prevalent in China into the late 19th and early 20th centuries. All forms of slavery have been illegal in China since 1910.
Postclassical history
Indian subcontinent
The Islamic invasions, starting in the 8th century, also resulted in hundreds of thousands of Indians being enslaved by the invading armies, one of the earliest being the armies of the Umayyad commander Muhammad bin Qasim. Qutb-ud-din Aybak, a Turkic slave of Muhammad Ghori, rose to power following his master's death. For almost a century, his descendants ruled North-Central India as the Slave Dynasty. A number of slaves were also brought to India by the Indian Ocean trade; for example, the Siddi are descendants of Bantu slaves brought to India by Arab and Portuguese merchants.
The historian Andre Wink has summarized the scale of slave-taking in 8th- and 9th-century India.
In the early 11th-century Tarikh al-Yamini, the Arab historian Al-Utbi recorded that in 1001 the armies of Mahmud of Ghazna conquered Peshawar and Waihand (capital of Gandhara) after the Battle of Peshawar (1001), "in the midst of the land of Hindustan", and captured some 100,000 youths. Later, following his twelfth expedition into India in 1018–19, Mahmud is reported to have returned with such a large number of slaves that their value dropped to only two to ten dirhams each. This unusually low price made, according to Al-Utbi, "merchants [come] from distant cities to purchase them, so that the countries of Central Asia, Iraq and Khurasan were swelled with them, and the fair and the dark, the rich and the poor, mingled in one common slavery". Elliot and Dowson refer to "five hundred thousand slaves, beautiful men and women". Later, during the Delhi Sultanate period (1206–1555), references to the abundant availability of low-priced Indian slaves abound. Levi attributes this primarily to the vast human resources of India, compared to its neighbors to the north and west (India's Mughal-era population being approximately 12 to 20 times that of Turan and Iran at the end of the 16th century).
The Delhi sultanate obtained thousands of slaves and eunuch servants from the villages of Eastern Bengal (a widespread practice which the Mughal emperor Jahangir later tried to stop). Wars, famines and pestilences drove many villagers to sell their children as slaves. The Muslim conquest of Gujarat in Western India had two main objectives: the conquerors demanded, and more often forcibly seized, both land owned by Hindus and Hindu women. Enslavement of women invariably led to their conversion to Islam. In battles waged by Muslims against Hindus in Malwa and the Deccan plateau, a large number of captives were taken. Muslim soldiers were permitted to retain and enslave prisoners of war as plunder.
The first Bahmani sultan, Alauddin Bahman Shah, is noted to have captured 1,000 singing and dancing girls from Hindu temples after he battled the northern Carnatic chieftains. The later Bahmanis also enslaved civilian women and children in wars; many of the captives were converted to Islam in captivity. About the Mughal empire, W.H. Moreland observed, "it became a fashion to raid a village or group of villages without any obvious justification, and carry off the inhabitants as slaves."
During the rule of Shah Jahan, many peasants were compelled to sell their women and children into slavery to meet the land revenue demand. Slavery was officially abolished in British India by the Indian Slavery Act, 1843. However, in modern India, Pakistan and Nepal, there are millions of bonded laborers, who work as slaves to pay off debts.
Modern history
Iran
Reginald Dyer, recalling operations against tribes in Iranian Baluchistan in 1916, stated in a 1921 memoir that the local Balochi tribes would regularly carry out raids against travellers and small towns. During these raids, women and children would often be abducted to become slaves, and would be sold for prices varying based on quality, age and looks. He stated that the average price for a young woman was 300 rupees, and the average price for a small child 25 rupees. The slaves, it was noted, were often half starved.
Japan
Slavery in Japan was, for most of its history, indigenous, since the export and import of slaves was restricted by Japan's position as a group of islands. In late-16th-century Japan, slavery was officially banned, but forms of contract and indentured labor persisted alongside the forced labor prescribed by the period's penal codes. During the Second Sino-Japanese War and the Pacific War, the Imperial Japanese Armed Forces used millions of civilians and prisoners of war from several countries as forced laborers.
Korea
In Korea, slavery was officially abolished with the Gabo Reform of 1894. During the Joseon period, in times of poor harvest and famine, many peasants voluntarily sold themselves into the nobi system in order to survive.
Southeast Asia
In Southeast Asia, there was a large slave class in the Khmer Empire who built the enduring monuments at Angkor Wat and did most of the heavy work. Between the 17th and the early 20th centuries, one-quarter to one-third of the population of some areas of Thailand and Burma were slaves. By the 19th century, Bhutan had developed a slave trade with Sikkim and Tibet, also enslaving British subjects and Brahmins. According to the International Labour Organization (ILO), an estimated 800,000 people were subject to forced labor in Myanmar during the early 21st century.
Slavery in the pre-Spanish Philippines was practiced by the tribal Austronesian peoples who inhabited the culturally diverse islands. The neighboring Muslim states conducted slave raids from the 1600s into the 1800s in coastal areas of the Gulf of Thailand and the Philippine islands. Slaves in Toraja society in Indonesia were family property. People would become slaves when they incurred a debt. Slaves could also be taken during wars, and slave trading was common. Torajan slaves were sold and shipped out to Java and Siam. Slaves could buy their freedom, but their children still inherited slave status. Slavery was abolished in 1863 in all Dutch colonies.
Islamic State slave trade
According to media reports from late 2014, the Islamic State (IS) was selling Yazidi and Christian women as slaves. According to Haleh Esfandiari of the Woodrow Wilson International Center for Scholars, after IS militants have captured an area "[t]hey usually take the older women to a makeshift slave market and try to sell them." In mid-October 2014, the UN estimated that 5,000 to 7,000 Yazidi women and children were abducted by IS and sold into slavery. In the digital magazine Dabiq, IS claimed religious justification for enslaving Yazidi women whom they consider to be from a heretical sect. IS claimed that the Yazidi are idol worshipers and their enslavement is part of the old shariah practice of spoils of war. According to The Wall Street Journal, IS appeals to apocalyptic beliefs and claims "justification by a Hadith that they interpret as portraying the revival of slavery as a precursor to the end of the world".
IS announced the revival of slavery as an institution. In 2015, the official slave prices set by IS were as follows:
Children aged 1 to 9: 200,000 dinars ($169).
Women and children aged 10 to 20: 150,000 dinars ($127).
Women aged 20 to 30: 100,000 dinars ($85).
Women aged 30 to 40: 75,000 dinars ($63).
Women aged 40 to 50: 50,000 dinars ($42).
However, some slaves have been sold for as little as a pack of cigarettes.
Sex slaves were sold to Saudi Arabia, other Persian Gulf states and Turkey.
Europe
Ancient history
Ancient Greece
Records of slavery in Ancient Greece go as far back as Mycenaean Greece. The origins are not known, but it appears that slavery became an important part of the economy and society only after the establishment of cities. Slavery was common practice and an integral component of ancient Greece, as it was in other societies of the time. It is estimated that in Athens, the majority of citizens owned at least one slave. Most ancient writers considered slavery not only natural but necessary, but some isolated debate began to appear, notably in Socratic dialogues. The Stoics produced the first condemnation of slavery recorded in history.
During the 8th and the 7th centuries BC, in the course of the two Messenian Wars, the Spartans reduced an entire population to a pseudo-slavery called helotry. According to Herodotus (IX, 28–29), helots were seven times as numerous as Spartans. Following several helot revolts around the year 600 BC, the Spartans restructured their city-state along authoritarian lines, for the leaders decided that only by turning their society into an armed camp could they hope to maintain control over the numerically dominant helot population. In some Ancient Greek city-states, about 30% of the population consisted of slaves, but paid and slave labor seem to have been equally important.
Rome
Romans inherited the institution of slavery from the Greeks and the Phoenicians. As the Roman Republic expanded outward, it enslaved entire populations, thus ensuring an ample supply of laborers to work in Rome's farms, quarries and households. The people subjected to Roman slavery came from all over Europe and the Mediterranean. Slaves were used for labor, and also for amusement (e.g. gladiators and sex slaves). In the late Republic, the widespread use of recently enslaved groups on plantations and ranches led to slave revolts on a large scale; the Third Servile War led by Spartacus was the most famous and most threatening to Rome.
Other European tribes
Various tribes of Europe are recorded by Roman sources as owning slaves. Strabo records slaves as an export commodity from Britannia. At Llyn Cerrig Bach in Anglesey, an iron gang chain dated to 100 BCE–50 CE was found, over 3 metres long, with neck-rings for five captives.
Postclassical history
The chaos of invasion and frequent warfare also resulted in victorious parties taking slaves throughout Europe in the early Middle Ages. St. Patrick, himself captured and sold as a slave, protested against an attack that enslaved newly baptized Christians in his "Letter to the Soldiers of Coroticus". As a commonly traded commodity, like cattle, slaves could become a form of internal or trans-border currency.
Slavery during the Early Middle Ages had several distinct sources.
The Vikings raided across Europe, but took the most slaves in raids on the British Isles and in Eastern Europe. While the Vikings kept some slaves as servants, known as thralls, they sold most captives to the Byzantine Empire via the Black Sea slave trade or to Islamic markets such as the Khazar, Volga Bulgarian and Bukhara slave trades. In the West, their target populations were primarily English, Irish, and Scottish, while in the East they were mainly Slavs (saqaliba). The Viking slave trade slowly ended in the 11th century, as the Vikings settled in the European territories they had once raided, converted their serfs to Christianity, and merged with the local populace.
In central Europe, specifically the Frankish/German/Holy Roman Empire of Charlemagne, raids and wars to the east generated a steady supply of slaves from the Slavic captives of these regions. Because of high demand for slaves in the wealthy Muslim empires of Northern Africa, Spain, and the Near East, especially for slaves of European descent, a market for these slaves rapidly emerged. So lucrative was this market that it spawned an economic boom in central and western Europe, today known as the Carolingian Renaissance. This boom period for slaves stretched from the early Muslim conquests to the High Middle Ages but declined in the later Middle Ages as the Islamic Golden Age waned.
Medieval Spain and Portugal saw almost constant warfare between Muslims and Christians. Al-Andalus sent periodic raiding expeditions to loot the Iberian Christian kingdoms, bringing back booty and slaves. In a raid against Lisbon, Portugal in 1189, for example, the Almohad caliph Yaqub al-Mansur took 3,000 female and child captives. In a subsequent attack upon Silves, Portugal in 1191, his governor of Córdoba took 3,000 Christian slaves.
Ottoman Empire
The Byzantine–Ottoman wars and the Ottoman wars in Europe resulted in the taking of large numbers of Christian slaves, who were used or sold in the Islamic world. After the Battle of Lepanto, the victors freed approximately 12,000 Christian galley slaves from the Ottoman fleet.
Similarly, Christians sold Muslim slaves captured in war. The Order of the Knights of Malta attacked pirates and Muslim shipping, and their base became a centre for slave trading, selling captured North Africans and Turks. Malta remained a slave market until well into the late 18th century. One thousand slaves were required to man the galleys (ships) of the Order.
Eastern Europe
Poland banned slavery in the 15th century; in Lithuania, slavery was formally abolished in 1588; in both, the institution was replaced by the second serfdom. Slavery remained a minor institution in Russia until 1723, when Peter the Great converted the household slaves into house serfs. Russian agricultural slaves had been formally converted into serfs earlier, in 1679.
British Isles
Capture in war, voluntary servitude and debt slavery became common within the British Isles before 1066. The Bodmin manumissions show both that slavery existed in 9th- and 10th-century Cornwall and that many Cornish slave owners did set their slaves free. Slaves were routinely bought and sold; running away was also common, and slavery was never a major economic factor in the British Isles during the Middle Ages. Ireland and Denmark provided markets for captured Anglo-Saxon and Celtic slaves. Pope Gregory I reputedly made the pun Non Angli, sed Angeli ("Not Angles, but Angels") upon hearing the response to his query regarding the identity of a group of fair-haired Angles, slave children whom he had observed in the marketplace. After the Norman Conquest, the law no longer supported chattel slavery and slaves became part of the larger body of serfs.
France
In the early Middle Ages, the city of Verdun was the centre of a thriving European slave trade in young boys, who were sold to the Islamic emirates of Iberia where they were enslaved as eunuchs. In one 10th-century example, the Italian ambassador Liutprand of Cremona presented a gift of four eunuchs to Emperor Constantine VII.
Barbary pirates and Maltese corsairs
Barbary pirates and Maltese corsairs both raided for slaves and purchased slaves from European merchants, often the Radhanites, one of the few groups who could easily move between the Christian and Islamic worlds.
Genoa and Venice
In the late Middle Ages, from 1100 to 1500, the European slave trade continued, though with a shift in its centre from the Western Mediterranean Islamic nations to the Eastern Christian and Muslim states. The city-states of Venice and Genoa controlled the Eastern Mediterranean from the 12th century and the Black Sea from the 13th century. They sold both Slavic and Baltic slaves, as well as Georgians, Turks, and other ethnic groups of the Black Sea and Caucasus, via the Black Sea slave trade. The sale of European slaves by Europeans slowly ended as the Slavic and Baltic ethnic groups became Christianized by the Late Middle Ages.
From the 1440s into the 18th century, Europeans from Italy, Spain, Portugal, France, and England were sold into slavery by North Africans. In 1575, the Tatars captured over 35,000 Ukrainians; a 1676 raid took almost 40,000. About 60,000 Ukrainians were captured in 1688; some were ransomed, but most were sold into slavery. Some 150,000–200,000 of the Roma people were enslaved over five centuries in Romania until abolition in 1864 (see Slavery in Romania).
Mongols
The Mongol invasions and conquests in the 13th century also resulted in taking numerous captives into slavery. The Mongols enslaved skilled individuals, women and children and marched them to Karakorum or Sarai, whence they were sold throughout Eurasia. Many of these slaves were shipped to the slave market in Novgorod.
Slave commerce during the Late Middle Ages was mainly in the hands of Venetian and Genoese merchants and cartels, who were involved in the slave trade with the Golden Horde. In 1382 the Golden Horde under Khan Tokhtamysh sacked Moscow, burning the city and carrying off thousands of inhabitants as slaves. Between 1414 and 1423, some 10,000 eastern European slaves were sold in Venice. Genoese merchants organized the slave trade from the Crimea to Mamluk Egypt. For years, the Khanates of Kazan and Astrakhan routinely made raids on Russian principalities for slaves and to plunder towns. Russian chronicles record about 40 raids by Kazan Khans on the Russian territories in the first half of the 16th century.
In 1441 Haci I Giray declared independence from the Golden Horde and established the Crimean Khanate. For a long time, until the early 18th century, the khanate maintained an extensive slave-trade with the Ottoman Empire and the Middle East. In a process called the "harvesting of the steppe" they enslaved many Slavic peasants. Muscovy recorded about 30 major Tatar raids into Muscovite territories between 1558 and 1596.
Moscow was repeatedly a target. In 1521, the combined forces of Crimean Khan Mehmed Giray and his Kazan allies attacked the city and captured thousands of slaves. In 1571, the Crimean Tatars attacked and sacked Moscow, burning everything but the Kremlin and taking thousands of captives as slaves. In Crimea, about 75% of the population consisted of slaves.
The Vikings and Scandinavia
In the Viking era beginning circa 793, the Norse raiders often captured and enslaved militarily weaker peoples they encountered. The Nordic countries called their slaves thralls (Old Norse: Þræll). The thralls were mostly from Western Europe, among them many Franks, Frisians, Anglo-Saxons, and both Irish and Brittonic Celts. Many Irish slaves travelled on expeditions for the colonization of Iceland. The Norse also took German, Baltic, Slavic and Latin slaves. The slave trade was one of the pillars of Norse commerce during the 9th through 11th centuries. The 10th-century Persian traveller Ibn Rustah described how Swedish Vikings, the Varangians or Rus, terrorized and enslaved the Slavs taken in their raids along the Volga River and sold them into slavery in the Abbasid Caliphate via the Volga Bulgarian slave trade and the Samanid slave trade. The thrall system was finally abolished in Scandinavia in the mid-14th century.
Early Modern history
Mediterranean powers frequently sentenced convicted criminals to row in the war-galleys of the state (initially only in time of war). After the revocation of the Edict of Nantes in 1685 and the Camisard rebellion, the French Crown filled its galleys with French Huguenots, Protestants condemned for resisting the state. Galley-slaves lived and worked in such harsh conditions that many did not survive their terms of sentence, even if they survived shipwreck and slaughter or torture at the hands of enemies or pirates. Naval forces often turned "infidel" prisoners of war into galley-slaves. Several well-known historical figures served time as galley slaves after being captured by the enemy, among them the Ottoman corsair and admiral Turgut Reis and the Knights Hospitaller Grand Master Jean Parisot de la Valette.
Denmark-Norway was the first European country to ban the slave trade, with a decree issued by King Christian VII of Denmark in 1792 that became fully effective by 1803. Slavery as an institution was not banned until 1848. At this time Iceland was part of Denmark-Norway, but slave trading had been abolished in Iceland in 1117 and was never reestablished.
Slavery in the French Republic was abolished on 4 February 1794, including in its colonies. The lengthy Haitian Revolution by its slaves and free people of color established Haiti as a free republic in 1804, ruled by blacks, the first of its kind. At the time of the revolution, Haiti was known as Saint-Domingue and was a colony of France. Napoleon Bonaparte reestablished slavery in Guadeloupe and Martinique in 1802, at the request of planters of the Caribbean colonies, but gave up on Haiti in 1803. Slavery was permanently abolished in the French empire during the French Revolution of 1848.
Portugal
The 15th-century Portuguese exploration of the African coast is commonly regarded as the harbinger of European colonialism. In 1452, Pope Nicholas V issued the papal bull Dum Diversas, granting Afonso V of Portugal the right to reduce any "Saracens, pagans and any other unbelievers" to hereditary slavery, which legitimized the slave trade under Catholic beliefs of that time. This approval of slavery was reaffirmed and extended in his Romanus Pontifex bull of 1455. These papal bulls came to serve as a justification for the subsequent era of the slave trade and European colonialism, although for a short period in 1462 Pius II declared slavery to be "a great crime". Unlike Portugal, Protestant nations did not use the papal bulls as a justification for their involvement in the slave trade. The position of the church was to condemn the slavery of Christians, but slavery was regarded as an old, established and necessary institution which supplied Europe with a needed workforce. By the 16th century, African slaves had replaced almost all other ethnic and religious groups in enslavement in Europe. Within the Portuguese territory of Brazil, and even beyond its original borders, the enslavement of Native Americans was carried out by the Bandeirantes.
Genoa and Venice were among the best-known of the many European slave markets, their importance and the demand for slaves growing after the great plague of the 14th century decimated much of the European workforce.
In 1441, the first slaves were brought to Portugal from northern Mauritania. The maritime town of Lagos, Portugal, became the site of the first market created in Portugal for the sale of imported African slaves, the Mercado de Escravos, which opened in 1444. Prince Henry the Navigator, major sponsor of the Portuguese African expeditions, taxed one-fifth of the selling price of the slaves imported to Portugal, as he did any other merchandise. By the year 1552, African slaves made up 10 percent of the population of Lisbon.
In the second half of the 16th century, the Crown gave up its monopoly on the slave trade, and the focus of European trade in African slaves shifted from imports to Europe to slave transports directly to the tropical colonies in the Americas, in the case of Portugal especially Brazil. In the 15th century, one-third of the slaves had been resold to the African market in exchange for gold.
Importation of black slaves was prohibited in mainland Portugal and Portuguese India in 1761, but slavery continued in the Portuguese overseas colonies. At the same time, the trade in black slaves ("the pieces", in the terms of that time) to Brazil was stimulated, and two companies were founded with the support and direct involvement of the Marquis of Pombal, the Company of Grão-Pará and Maranhão and the General Company of Pernambuco and Paraíba, whose main activity was precisely the trafficking of slaves, mostly black Africans, to Brazilian lands.
Slavery was finally abolished in all Portuguese colonies in 1869.
Spain
The Spaniards were the first Europeans to use African slaves in the New World, on islands such as Cuba and Hispaniola, owing to a shortage of labor caused by the spread of diseases; thus the Spanish colonists gradually became involved in the Atlantic slave trade. The first African slaves arrived in Hispaniola in 1501; by 1517, the natives had been "virtually annihilated", mostly by disease.
The problem of the justness of the enslavement of Native Americans was a key issue for the Spanish Crown. It was Charles V who gave a definite answer to this complicated and delicate matter. To that end, on 25 November 1542, the Emperor abolished such slavery by decree in his Leyes Nuevas. This bill was based on the arguments of the leading Spanish theologians and jurists, who were unanimous in condemning such slavery as unjust; they declared it illegitimate and outlawed it in America, not only the enslavement of Natives by Spaniards but also the type of slavery practiced among the Natives themselves. Thus, Spain became the first country to officially abolish slavery.
However, in the Spanish colonies of Cuba and Puerto Rico, where sugarcane production was highly profitable based on slave labor, African slavery persisted until 1873 in Puerto Rico "with provisions for periods of apprenticeship", and 1886 in Cuba.
Netherlands
Although slavery was illegal inside the Netherlands, it flourished throughout the Dutch Empire in the Americas, Africa, Ceylon and Indonesia. The Dutch Slave Coast (Dutch: Slavenkust) referred to the trading posts of the Dutch West India Company on the Slave Coast, in what are now Ghana, Benin, Togo and Nigeria. Initially the Dutch shipped slaves to Dutch Brazil, and during the second half of the 17th century they had a controlling interest in the trade to the Spanish colonies. Today's Suriname and Guyana became prominent markets in the 18th century. Between 1612 and 1872, the Dutch operated from some 10 fortresses along the Gold Coast (now Ghana), from which slaves were shipped across the Atlantic. Dutch involvement on the Slave Coast increased with the establishment of a trading post in Offra in 1660. Willem Bosman writes in his Nauwkeurige beschrijving van de Guinese Goud- Tand- en Slavekust (1703) that Allada was also called Grand Ardra, being the larger cousin of Little Ardra, also known as Offra. From 1660 onward, the Dutch presence in Allada and especially Offra became more permanent. A report from that year mentions Dutch trading posts in Benin City, Grand-Popo, and Savi, in addition to those at Allada and Offra.
The Offra trading post soon became the most important Dutch office on the Slave Coast. According to a 1670 report, 2,500 to 3,000 slaves were transported annually from Offra to the Americas. These numbers were only feasible in times of peace, however, and dwindled in times of conflict. From 1688 onward, the struggle between the Aja king of Allada and the peoples of the coastal regions impeded the supply of slaves. The Dutch West India Company chose the side of the Aja king, causing the Offra office to be destroyed by opposing forces in 1692. By 1650 the Dutch had the pre-eminent slave trade in Europe and South East Asia. Later, trade shifted to Ouidah.

On the instigation of Willem de la Palma, Governor-General of the Dutch Gold Coast, Jacob van den Broucke was sent in 1703 as "opperkommies" (head merchant) to the Dutch trading post at Ouidah, which according to sources was established around 1670. Political unrest caused the Dutch to abandon their trading post at Ouidah in 1725, and they then moved to Jaquim, where they built Fort Zeelandia. The head of the post, Hendrik Hertog, had a reputation as a successful slave trader. In an attempt to extend his trading area, Hertog negotiated with local tribes and involved himself in local political struggles. He sided with the wrong party, however, leading to a conflict with Director-General Jan Pranger and to his exile to the island of Appa in 1732. The Dutch trading post on this island was extended as the new centre of the slave trade. In 1733, Hertog returned to Jaquim, this time extending the trading post into Fort Zeelandia. The revival of the slave trade at Jaquim was only temporary, however, as his superiors at the Dutch West India Company noticed that Hertog's slaves were more expensive than those at the Gold Coast. From 1735, Elmina became the preferred spot to trade slaves.

As of 1778, it was estimated that the Dutch were shipping approximately 6,000 Africans for enslavement in the Dutch West Indies each year. Slavery also characterised the Dutch possessions in Indonesia, Ceylon, and South Africa, where Indonesians have made a significant contribution to the Cape Coloured population. The Dutch share in the Atlantic slave trade is estimated at 5–7 percent: they shipped about 550,000–600,000 African slaves across the Atlantic, about 75,000 of whom died on board before reaching their destinations. From 1596 to 1829, Dutch traders sold 250,000 slaves in the Dutch Guianas, 142,000 in the Dutch Caribbean, and 28,000 in Dutch Brazil. In addition, tens of thousands of slaves, mostly from India and some from Africa, were carried to the Dutch East Indies. The Netherlands abolished slavery in 1863; although the decision was made in 1848, it took many years for the law to be implemented, and slaves in Suriname would be fully free only in 1873, since the law stipulated a mandatory 10-year transition.
Barbary corsairs
Barbary corsairs continued to trade in European slaves into the modern period. Muslim pirates, primarily Algerians with the support of the Ottoman Empire, raided European coasts and shipping from the 16th to the 19th centuries, taking thousands of captives whom they sold or enslaved. Many were held for ransom, and European communities raised funds, such as Malta's Monte della Redenzione degli Schiavi, to buy back their citizens. The raids gradually declined with the waning of Ottoman naval power in the late 16th and 17th centuries and ended with the European conquest of North Africa over the course of the 19th century.
From 1609 to 1616, England lost 466 merchant ships to Barbary pirates, and 160 English ships were captured by Algerians between 1677 and 1680. Many of the captured sailors were made into slaves and held for ransom. The corsairs were no strangers to the south-west of England, where raids on coastal communities were a known occurrence. In 1627, Barbary pirates under the command of the Dutch renegade Jan Janszoon (Murat Reis), operating from the Moroccan port of Salé, occupied the island of Lundy. During this time there were reports of captured slaves being sent to Algiers.
Ireland, despite its northern position, was not immune from attacks by the corsairs. In June 1631 Janszoon, with pirates from Algiers and armed troops of the Ottoman Empire, stormed ashore at the little harbor village of Baltimore, County Cork. They captured almost all the villagers and took them away to a life of slavery in North Africa. The prisoners were destined for a variety of fates—some lived out their days chained to the oars as galley slaves, while others would spend long years in the scented seclusion of the harem or within the walls of the sultan's palace. Only two of them ever saw Ireland again.
The Congress of Vienna (1814–15), which ended the Napoleonic Wars, led to increased European consensus on the need to end Barbary raiding. The sacking of Palma on the island of Sardinia by a Tunisian squadron, which carried off 158 inhabitants, roused widespread indignation. Britain had by this time banned the slave trade and was seeking to induce other countries to do likewise. States that were more vulnerable to the corsairs complained that Britain cared more for ending the trade in African slaves than stopping the enslavement of Europeans and Americans by the Barbary States.
In order to neutralise this objection and further the anti-slavery campaign, in 1816 Britain sent Lord Exmouth to secure new concessions from Tripoli, Tunis, and Algiers, including a pledge to treat Christian captives in any future conflict as prisoners of war rather than slaves. He imposed peace between Algiers and the kingdoms of Sardinia and Sicily. On his first visit, Lord Exmouth negotiated satisfactory treaties and sailed for home. While he was negotiating, a number of Sardinian fishermen who had settled at Bona on the Tunisian coast were brutally treated without his knowledge. As Sardinians they were technically under British protection, and the government sent Exmouth back to secure reparation. On 17 August, in combination with a Dutch squadron under Admiral Van de Capellen, Exmouth bombarded Algiers. Both Algiers and Tunis made fresh concessions as a result.
The Barbary states had difficulty securing uniform compliance with a total prohibition of slave-raiding, as this had been traditionally of central importance to the North African economy. Slavers continued to take captives by preying on less well-protected peoples. Algiers subsequently renewed its slave-raiding, though on a smaller scale. Europeans at the Congress of Aix-la-Chapelle in 1818 discussed possible retaliation. In 1820 a British fleet under Admiral Sir Harry Neal bombarded Algiers. Corsair activity based in Algiers did not entirely cease until France conquered the state in 1830.
Crimean Khanate
The Crimeans frequently mounted raids into the Danubian principalities, Poland-Lithuania, and Muscovy to enslave people whom they could capture; for each captive, the khan received a fixed share (savğa) of 10% or 20%. These campaigns by Crimean forces were either sefers ("sojourns" – officially declared military operations led by the khans themselves), or çapuls ("despoiling" – raids undertaken by groups of noblemen, sometimes illegally because they contravened treaties concluded by the khans with neighbouring rulers).
For a long time, until the early 18th century, the Crimean Khanate maintained a massive slave trade with the Ottoman Empire and the Middle East, exporting about 2 million slaves from Russia and Poland-Lithuania over the period 1500–1700. Caffa (modern Feodosia) became one of the best-known and significant trading ports and slave markets. In 1769 the last major Tatar raid saw the capture of 20,000 Russian and Ruthenian slaves.
Author and historian Brian Glyn Williams notes that early modern sources are full of descriptions of the sufferings of Christian slaves captured by the Crimean Tatars in the course of their raids.
British slave trade
Britain played a prominent role in the Atlantic slave trade, especially after 1640, when sugar cane was introduced to the West Indies. At first, most of those transported were white Britons or Irish, bound as indentured labour for a fixed period. These people may have been criminals, political rebels, the poor with no prospects, or others who were simply tricked or kidnapped. Slavery was a legal institution in all of the 13 American colonies and Canada (acquired by Britain in 1763). The profits of the slave trade and of West Indian plantations amounted to under 5% of the British economy at the time of the Industrial Revolution.
A little-known incident in the career of Judge Jeffreys refers to an assize in Bristol in 1685 when he made the mayor of the city, then sitting fully robed beside him on the bench, go into the dock and be fined £1000 for being a "kidnapping knave"; some Bristol traders at the time were known to kidnap their own countrymen and ship them away as slaves.
Somersett's case in 1772 was generally taken at the time to have decided that the condition of slavery did not exist under English law in England. In 1785, the English poet William Cowper wrote: "We have no slaves at home – Then why abroad? Slaves cannot breathe in England; if their lungs receive our air, that moment they are free. They touch our country, and their shackles fall. That's noble, and bespeaks a nation proud. And jealous of the blessing. Spread it then, And let it circulate through every vein." The decision proved to be a milestone in the British abolitionist movement, though slavery was not abolished in the British Empire until the passage of the Slavery Abolition Act 1833.

In 1807, following many years of lobbying by the abolitionist movement, led primarily by William Wilberforce, the British Parliament voted to make the slave trade illegal anywhere in the Empire with the Slave Trade Act 1807. Thereafter Britain took a prominent role in combating the trade, and slavery itself was abolished in the British Empire (except for India) with the Slavery Abolition Act 1833. Between 1808 and 1860, the West Africa Squadron seized approximately 1,600 slave ships and freed 150,000 Africans who were aboard. Action was also taken against African leaders who refused to agree to British treaties outlawing the trade. Akitoye, the 11th Oba of Lagos, is famous for having used British involvement to regain his rule in return for suppressing slavery among the Yoruba people of Lagos in 1851. Anti-slavery treaties were signed with over 50 African rulers. In 1839, the world's oldest international human rights organization, the British and Foreign Anti-Slavery Society (now Anti-Slavery International), was formed in Britain by Joseph Sturge; it worked to outlaw slavery in other countries.
After 1833, the freed African slaves declined employment in the cane fields. This led, once again, to the importation of indentured labour – mainly from India, and also from China.
In 1811, Arthur William Hodge was executed for the murder of a slave in the British West Indies. He was not, however, as some have claimed, the first white person to have been lawfully executed for the murder of a slave.
Late Modern history
Germany
During World War II, Nazi Germany operated several categories of Arbeitslager (labor camps) for different groups of inmates. The largest number held Polish gentiles and Jewish civilians forcibly abducted in the occupied countries (see Łapanka) to provide labor in the German war industry, repair bombed railroads and bridges, or work on farms. By 1944, 20% of all workers in Germany were foreigners, either civilians or prisoners of war.
Allied powers
As agreed by the Allies at the Yalta conference, Germans were used as forced labor as part of the reparations to be extracted. By 1947, it is estimated that 400,000 Germans (both civilians and POWs) were being used as forced labor by the U.S., France, the UK and the Soviet Union. German prisoners were, for example, forced to clear minefields in France and the Low Countries. By December 1945, French authorities estimated that 2,000 German prisoners were being killed or injured each month in accidents. In Norway, the last available casualty record, from 29 August 1945, shows that by that time a total of 275 German soldiers had died while clearing mines, while 392 had been injured.
Soviet Union
The Soviet Union took over the already extensive katorga system and expanded it immensely, eventually organizing the Gulag to run the camps. In 1954, a year after Stalin's death, the new Soviet government of Nikita Khrushchev began to release political prisoners and close down the camps. By the end of the 1950s, virtually all "corrective labor camps" were reorganized, mostly into the system of corrective labor colonies. Officially, the Gulag was terminated by MVD Order No. 20 of 25 January 1960.
During the period of Stalinism, the Gulag labor camps in the Soviet Union were officially called "corrective labor camps". The term "labor colony", more exactly "corrective labor colony" (abbreviated ИТК), was also in use, most notably for camps holding underage (16 years or younger) convicts and captured besprizorniki (street children, literally "children without family care"). After the reorganization of the camps into the Gulag system, the term "corrective labor colony" essentially encompassed labor camps.
A total of around 14 million prisoners passed through the Gulag labor camps.
Oceania
In the first half of the 19th century, small-scale slave raids took place across Polynesia to supply labor and sex workers for the whaling and sealing trades, with examples from both the westerly and easterly extremes of the Polynesian triangle.
By the 1860s this had grown to a larger scale operation with Peruvian slave raids in the South Sea Islands to collect labor for the guano industry.
Hawaii
Ancient Hawaii was a caste society. People were born into specific social classes. Kauwa were those of the outcast or slave class. They are believed to have been war captives or their descendants. Marriage between higher castes and the kauwa was strictly forbidden. The kauwa worked for the chiefs and were often used as human sacrifices at the luakini heiau. (They were not the only sacrifices; law-breakers of all castes or defeated political opponents were also acceptable as victims.)
The kapu system was abolished during the ʻAi Noa in 1819, and with it the distinction between the kauwā slave class and the makaʻāinana (commoners). The 1852 Constitution of the Kingdom of Hawaii officially made slavery illegal.
New Zealand
Before the arrival of European settlers, New Zealand comprised many individual polities, with each Māori tribe (iwi) a separate entity equivalent to a nation. In the traditional Māori society of Aotearoa, prisoners of war became taurekareka, slaves – unless released, ransomed or eaten. With some exceptions, the child of a slave remained a slave.
As far as it is possible to tell, slavery seems to have increased in the early 19th century, as Māori military leaders such as Hongi Hika and Te Rauparaha took growing numbers of prisoners, both to satisfy the need for labor during the Musket Wars and to supply whalers and traders with food, flax and timber in return for Western goods. The intertribal Musket Wars lasted from 1807 to 1843; northern tribes who had acquired muskets captured large numbers of slaves. About 20,000 Māori died in the wars; an unknown number of slaves were captured. Northern tribes used slaves (called mokai) to grow large areas of potatoes for trade with visiting ships. Chiefs started an extensive sex trade in the Bay of Islands in the 1830s, using mainly slave girls; by 1835 about 70 to 80 ships per year called into the port. One French captain described the impossibility of getting rid of the girls who swarmed over his ship, outnumbering his crew of 70 by three to one. All payments to the girls were stolen by the chief. By 1833 Christianity had become established in the north of New Zealand, and large numbers of slaves were freed.
Slavery was outlawed in 1840 via the Treaty of Waitangi, although it did not end completely until government was effectively extended over the whole of the country with the defeat of the King movement in the Wars of the mid-1860s.
Chatham Islands
One group of Polynesians who migrated to the Chatham Islands became the Moriori, who developed a largely pacifist culture. It was originally speculated that they settled the Chathams directly from Polynesia, but it is now widely believed they were disaffected Māori who emigrated from the South Island of New Zealand. Their pacifism left the Moriori unable to defend themselves when the islands were invaded by mainland Māori in the 1830s.
Two Taranaki tribes, Ngati Tama and Ngati Mutunga, displaced by the Musket Wars, carried out a carefully planned invasion of the Chatham Islands, 800 km east of Christchurch, in 1835. About 15% of the Moriori, the Polynesian natives who had migrated to the islands in about 1500 CE, were killed, with many women being tortured to death. The remaining population was enslaved for the purpose of growing food, especially potatoes. The Moriori were treated in an inhumane and degrading manner for many years. Their culture was banned and they were forbidden to marry.
Some 300 Moriori men, women and children were massacred and the remaining 1,200 to 1,300 survivors were enslaved.
Some Māori took Moriori partners. The state of enslavement of Moriori lasted until the 1860s although it had been discouraged by CMS missionaries in northern New Zealand from the late 1820s. In 1870 Ngati Mutunga, one of the invading tribes, argued before the Native Land Court in New Zealand that their gross mistreatment of the Moriori was standard Māori practice or tikanga.
Rapa Nui / Easter Island
The isolated island of Rapa Nui/Easter Island was inhabited by the Rapanui, who suffered a series of slave raids from 1805 or earlier, culminating in a near-genocidal experience in the 1860s. The 1805 raid, carried out by American sealers, was one of a series that changed the attitude of the islanders toward outside visitors, with reports in the 1820s and 1830s that all visitors received a hostile reception. In December 1862, Peruvian slave raiders took between 1,400 and 2,000 islanders back to Peru to work in the guano industry; this was about a third of the island's population and included much of the island's leadership, among them the last ariki-mau and possibly the last people who could read Rongorongo. After intervention by the French ambassador in Lima, the last 15 survivors were returned to the island, but they brought with them smallpox, which further devastated the island.
Abolitionist movements
Slavery has existed, in one form or another, throughout the whole of human history. So, too, have movements to free large or distinct groups of slaves. However, abolitionism should be distinguished from efforts to help a particular group of slaves, or to restrict one practice, such as the slave trade.
Drescher (2009) provides a model for the history of the abolition of slavery, emphasizing its origins in Western Europe. Around the year 1500, slavery had virtually died out in Western Europe, but it remained a normal phenomenon practically everywhere else. The imperial powers – the British, French, Spanish, Portuguese and Dutch empires, and a few others – built worldwide empires based primarily on plantation agriculture using slaves imported from Africa. However, these powers took care to minimize the presence of slavery in their homelands. In 1807 Britain and, soon after, the United States both criminalized the international slave trade. The Royal Navy was increasingly effective in intercepting slave ships, freeing the captives and bringing the crews to trial in court.
Although there were numerous slave revolts in the Caribbean, the only successful uprising came in the French colony of Haiti in the 1790s, where the slaves rose up, killed the mulattoes and whites, and established the independent Republic of Haiti.
The continuing profitability of slave-based plantations and the threats of race war slowed the development of abolition movements during the first half of the 19th century. These movements were strongest in Britain, and after 1840 in the United States. The Northern states of the United States abolished slavery, partly in response to the United States Declaration of Independence, between 1777 and 1804. Britain ended slavery in its empire in the 1830s. However, the plantation economies of the southern United States, based on cotton, and those in Brazil and Cuba, based on sugar, expanded and grew even more profitable. The bloody American Civil War ended slavery in the United States in 1865. The system ended in Cuba and Brazil in the 1880s because it was no longer profitable for the owners. Slavery continued to exist in Africa, where Arab slave traders raided black areas for new captives to be sold in the system. European colonial rule and diplomatic pressure slowly put an end to the trade, and eventually to the practice of slavery itself.
Britain
In 1772, the Somersett Case (R. v. Knowles, ex parte Somersett) of the English Court of King's Bench ruled that it was unlawful for a slave to be forcibly taken abroad. The case has since been misrepresented as finding that slavery was unlawful in England (although not elsewhere in the British Empire). A similar case, that of Joseph Knight, took place in Scotland five years later and ruled slavery to be contrary to the law of Scotland.
Following the work of campaigners in the United Kingdom, such as William Wilberforce, Henry Dundas, 1st Viscount Melville and Thomas Clarkson, who founded the Society for Effecting the Abolition of the Slave Trade (Abolition Society) in May 1787, the Act for the Abolition of the Slave Trade was passed by Parliament on 25 March 1807, coming into effect the following year. The act imposed a fine of £100 for every slave found aboard a British ship. The intention was to outlaw entirely the Atlantic slave trade within the whole British Empire.
The significance of the abolition of the British slave trade lay in the number of people hitherto sold and carried by British slave vessels. Britain had shipped 2,532,300 Africans across the Atlantic, equalling 41% of the total transport of 6,132,900 individuals. This made the British Empire the biggest slave-trade contributor in the world, and its withdrawal was correspondingly damaging to the global trade in slaves. Britain used its diplomatic influence to press other nations into treaties to ban their slave trade and to give the Royal Navy the right to interdict slave ships sailing under their national flag.
The Slavery Abolition Act, passed on 1 August 1833, outlawed slavery itself throughout the British Empire, with the exception of India. On 1 August 1834 slaves became indentured to their former owners in an apprenticeship system for six years. Full emancipation was granted ahead of schedule on 1 August 1838. Britain abolished slavery in both Hindu and Muslim India with the Indian Slavery Act, 1843.
The Society for the Mitigation and Gradual Abolition of Slavery Throughout the British Dominions (later the London Anti-Slavery Society) was founded in 1823 and existed until 1838.
Domestic slavery practised by the educated African coastal elites (as well as interior traditional rulers) in Sierra Leone was abolished in 1928. A study found practices of domestic slavery still widespread in rural areas in the 1970s.
The British and Foreign Anti-Slavery Society, founded in 1839 and having undergone several name changes since, still exists as Anti-Slavery International.
France
There were slaves in Metropolitan France (especially in trade ports such as Nantes or Bordeaux), but the institution was never officially authorized there. The legal case of Jean Boucaux in 1739 clarified the unclear legal position of possible slaves in France, and was followed by laws that established registers for slaves in mainland France, who were limited to a three-year stay, for visits or learning a trade. Unregistered "slaves" in France were regarded as free. However, slavery was of vital importance to the economy of France's Caribbean possessions, especially Saint-Domingue.
Abolition
In 1793, influenced by the French Declaration of the Rights of Man and of the Citizen of August 1789, and alarmed that the massive slave revolt of August 1791 that had become the Haitian Revolution might ally itself with the British, the Revolutionary French commissioners Léger-Félicité Sonthonax and Étienne Polverel declared general emancipation in order to reconcile the rebels with France. In Paris, on 4 February 1794, Abbé Grégoire and the Convention ratified this action by officially abolishing slavery in all French territories outside mainland France, freeing all the slaves for both moral and security reasons.
Napoleon restores slavery
Napoleon came to power in 1799 and soon had grandiose plans for the French sugar colonies; to achieve them he reintroduced slavery. Napoleon's major adventure in the Caribbean was the dispatch of 30,000 troops in 1802 to retake Saint-Domingue (Haiti) from the ex-slaves under Toussaint L'Ouverture who had revolted. Napoleon wanted to preserve France's financial benefits from the colony's sugar and coffee crops; he then planned to establish a major base at New Orleans. He therefore re-established slavery in Haiti and Guadeloupe, where it had been abolished after rebellions. Slaves and black freedmen fought the French for their freedom and independence. Revolutionary ideals played a central role in the fighting, for it was the slaves and their allies who were fighting for the revolutionary ideals of freedom and equality, while the French troops under General Charles Leclerc fought to restore the order of the ancien régime. The goal of re-establishing slavery explicitly contradicted the ideals of the French Revolution. The French soldiers were unable to cope with tropical diseases, and most died of yellow fever. Slavery was reimposed in Guadeloupe but not in Haiti, which became an independent black republic. Napoleon's vast colonial dreams for Egypt, India, the Caribbean, Louisiana and even Australia were all doomed by the lack of a fleet capable of matching Britain's Royal Navy. Realizing the fiasco, Napoleon liquidated the Haiti project, brought home the survivors and sold the huge Louisiana territory to the US in 1803.
Napoleon and slavery
In 1794 slavery was abolished in the French Empire. After seizing Lower Egypt in 1798, Napoleon Bonaparte issued a proclamation in Arabic declaring all men to be free and equal. However, the French bought males as soldiers and females as concubines. Napoleon personally opposed abolition and restored colonial slavery in 1802, a year after the capitulation of his troops in Egypt.
Napoleon decreed the abolition of the slave trade upon his return from Elba, in an attempt to appease Britain. His decision was confirmed by the Treaty of Paris on 20 November 1815 and by an order of Louis XVIII on 8 January 1817. However, trafficking continued despite sanctions.
Victor Schœlcher and the 1848 abolition
Slavery in the French colonies was finally abolished in 1848, three months after the beginning of the revolution against the July Monarchy. It was in large part the result of the tireless 18-year campaign of Victor Schœlcher. On 3 March 1848, he had been appointed under-secretary of the navy, and he caused a decree to be issued by the provisional government which acknowledged the principle of the enfranchisement of the slaves throughout the French possessions. He also wrote the decree of 27 April 1848 in which the French government announced that slavery was abolished in all of its colonies.
United States
In 1688, four German Quakers in Germantown presented a protest against the institution of slavery to their local Quaker Meeting. It was ignored for over 150 years, but in 1844 it was rediscovered and popularized by the abolitionist movement. The 1688 Petition was the first American public document of its kind to protest slavery, and in addition was one of the first public documents to define universal human rights.
The American Colonization Society, the primary vehicle for proposals to return black Americans to Africa, established the colony of Liberia in 1821–23, on the premise that former American slaves would have greater freedom and equality there.
Various state colonization societies also had African colonies, which were later merged with Liberia, including the Republic of Maryland, Mississippi-in-Africa, and Kentucky in Africa. These societies assisted in the movement of thousands of African Americans to Liberia, with ACS founder Henry Clay stating that because of "unconquerable prejudice resulting from their color, they never could amalgamate with the free whites of this country. It was desirable, therefore, as it respected them, and the residue of the population of the country, to drain them off". Abraham Lincoln, an enthusiastic supporter of Clay, adopted his position on returning the blacks to their own land.
Slaves in the United States who escaped ownership would often make their way to the Northern United States and Canada via the "Underground Railroad". The more famous of the African American abolitionists include former slaves Harriet Tubman, Sojourner Truth and Frederick Douglass. Many more people who opposed slavery and worked for abolition were northern whites, such as William Lloyd Garrison and John Brown. Slavery was legally abolished in 1865 by the Thirteenth Amendment to the United States Constitution.
While abolitionists agreed on the evils of slavery, there were differing opinions on what should happen after African Americans were freed. By the time of emancipation, African Americans were native to the United States and did not want to leave. Most believed that their labor had made the land theirs as well as that of the whites.
Congress of Vienna
The Declaration of the Powers on the Abolition of the Slave Trade of 8 February 1815 (which also formed Act No. XV of the Final Act of the Congress of Vienna of the same year) included in its first sentence the concept of the "principles of humanity and universal morality" as justification for ending a trade that was "odious in its continuance".
Twentieth century
During the 20th century the issue of slavery was addressed by the League of Nations, which founded commissions to investigate and eradicate the institution of slavery and the slave trade worldwide. These efforts continued the work of the first international attempt to address the issue, the Brussels Anti-Slavery Conference of 1889–90, which had concluded with the Brussels Conference Act of 1890. The 1890 Act was revised by the Convention of Saint-Germain-en-Laye in 1919, and when the League of Nations was founded in 1920, a need was felt to revise and continue the struggle against slavery.
The Temporary Slavery Commission (TSC), founded by the League in 1924, conducted a global investigation and filed a report; on this basis a convention was drawn up with a view to hastening the total abolition of slavery and the slave trade.
The 1926 Slavery Convention, which was founded upon the investigation of the TSC of the League of Nations, was a turning point in banning global slavery.
In 1932, the League formed the Committee of Experts on Slavery (CES) to review the result and enforcement of the 1926 Slavery Convention, which resulted in a new international investigation under the first permanent slavery committee, the Advisory Committee of Experts on Slavery (ACE).
The ACE conducted a major international investigation on slavery and slave trade, inspecting all the colonial empires and the territories under their control between 1934 and 1939.
Article 4 of the Universal Declaration of Human Rights, adopted in 1948 by the UN General Assembly, explicitly banned slavery.
After World War II, legal chattel slavery was formally abolished by law in almost the entire world, with the exception of the Arabian Peninsula and some parts of Africa. Chattel slavery was still legal in Saudi Arabia, in Yemen, in the Trucial States and in Oman, and slaves were supplied to the Arabian Peninsula via the Red Sea slave trade.
When the League of Nations was succeeded by the United Nations (UN) after the end of World War II, Charles Wilton Wood Greenidge of Anti-Slavery International worked for the UN to continue the investigation of global slavery conducted by the League's ACE, and in February 1950 the Ad Hoc Committee on Slavery of the United Nations was inaugurated, which ultimately resulted in the introduction of the Supplementary Convention on the Abolition of Slavery.
The United Nations 1956 Supplementary Convention on the Abolition of Slavery was convened to outlaw and ban slavery worldwide, including child slavery.
In November 1962, Faisal of Saudi Arabia finally prohibited the owning of slaves in Saudi Arabia; this was followed by the abolition of slavery in Yemen in 1962, in Dubai in 1963, and in Oman in 1970.
In December 1966, the UN General Assembly adopted the International Covenant on Civil and Political Rights, which was developed from the Universal Declaration of Human Rights. Article 4 of this international treaty bans slavery. The treaty came into force in March 1976 after it had been ratified by 35 nations.
As of November 2003, 104 nations had ratified the treaty. However, illegal forced labour involves millions of people in the 21st century; 43% of victims are trafficked for sexual exploitation and 32% for economic exploitation.
In May 2004, the 22 members of the Arab League adopted the Arab Charter on Human Rights, which incorporated the 1990 Cairo Declaration on Human Rights in Islam, including its prohibition of slavery.
Currently, the Anti-trafficking Coordination Team Initiative (ACT Team Initiative), a coordinated effort between the U.S. Departments of Justice, Homeland Security, and Labor, addresses human trafficking. The International Labour Organization estimates that there are 20.9 million victims of human trafficking globally, including 5.5 million children, of which 55% are women and girls.
Contemporary slavery
According to the Global Slavery Index, slavery continues into the 21st century. It claims that as of 2018, the countries with the most slaves were: India (8 million), China (3.86 million), Pakistan (3.19 million) and North Korea (2.64 million). The countries with highest prevalence of slavery were North Korea (10.5%) and Eritrea (9.3%).
Historiography
Historiography in the United States
The history of slavery originally was the history of the government's laws and policies toward slavery, and the political debates about it. Black history was promoted very largely at black colleges. The situation changed dramatically with the coming of the Civil Rights Movement of the 1950s. Attention shifted to the enslaved humans, the free blacks, and the struggles of the black community against adversity.
Peter Kolchin has described the historiography of the early 20th century as dominated by racial assumptions, above all in the work of Ulrich Bonnell Phillips, the era's leading historian of slavery. Historians James Oliver Horton and Lois E. Horton have likewise examined Phillips' mindset, methodology and influence: Phillips portrayed slaveholders as paternalistic and slaves as passive, and his interpretation set the terms of the field for decades. The racist attitude concerning slaves carried over into the historiography of the Dunning School of Reconstruction-era history, which dominated in the early 20th century and which later historians, notably Eric Foner, have sharply criticized.
Beginning in the 1950s, historiography moved away from the tone of the Phillips era. Historians still emphasized the slave as an object. Whereas Phillips presented the slave as the object of benign attention by the owners, historians such as Kenneth Stampp emphasized the mistreatment and abuse of the slave.
In the portrayal of the slave as a victim, the historian Stanley M. Elkins in his 1959 work Slavery: A Problem in American Institutional and Intellectual Life compared the effects of United States slavery to those resulting from the brutality of the Nazi concentration camps. He stated that the institution destroyed the will of the slave, creating an "emasculated, docile Sambo" who identified totally with the owner. Elkins' thesis was challenged by historians. Gradually historians recognized that, in addition to the effects of the owner-slave relationship, slaves did not live in a "totally closed environment but rather in one that permitted the emergence of enormous variety and allowed slaves to pursue important relationships with people other than their master, including those to be found in their families, churches and communities."
Economic historians Robert W. Fogel and Stanley L. Engerman, in their 1974 work Time on the Cross, portrayed slaves as having internalized the Protestant work ethic of their owners. In portraying this more benign version of slavery, they also argued that the material conditions under which the slaves lived and worked compared favorably to those of free workers in the agriculture and industry of the time. (This was also an argument of Southerners during the 19th century.)
In the 1970s and 1980s, historians made use of sources such as black music and statistical census data to create a more detailed and nuanced picture of slave life. Relying also on 19th-century autobiographies of ex-slaves (known as slave narratives) and the WPA Slave Narrative Collection, a set of interviews conducted with former slaves in the 1930s by the Federal Writers' Project, historians described slavery as the slaves remembered it. Far from being strictly victims or contented, slaves were shown as both resilient and autonomous in many of their activities. Despite their exercise of autonomy and their efforts to make a life within slavery, current historians recognize the precariousness of the slave's situation. Slave children quickly learned that they were subject to the direction of both their parents and their owners. They saw their parents disciplined just as they came to realize that they also could be physically or verbally abused by their owners. Historians writing during this era include John Blassingame (Slave Community), Eugene Genovese (Roll, Jordan, Roll), Leslie Howard Owens (This Species of Property), and Herbert Gutman (The Black Family in Slavery and Freedom).
Important work on slavery has continued; for instance, in 2003 Steven Hahn published the Pulitzer Prize-winning account, A Nation under Our Feet: Black Political Struggles in the Rural South from Slavery to the Great Migration, which examined how slaves built community and political understanding while enslaved, so that when emancipated they quickly began to form new associations and institutions, including black churches separate from white control. In 2010, Robert E. Wright published a model that explains why slavery was more prevalent in some areas than others (e.g. southern rather than northern Delaware) and why some firms (individuals, corporations, plantation owners) chose slave labor while others used wage, indentured, or family labor instead.
A national Marist Poll of Americans in 2015 asked, "Was slavery the main reason for the Civil War, or not?" 53% said yes and 41% said no. There were sharp cleavages along lines of region and party. In the South, 49% answered no. Nationwide, 55% said students should be taught that slavery was the reason for the Civil War.
In 2018, a conference at the University of Virginia studied the history of slavery and recent views on it. According to historian Orlando Patterson, in the United States, the profession of sociology has neglected the study of slavery.
Economics of slavery in the West Indies
One of the most controversial aspects of the British Empire is its role in first promoting and then ending slavery. In the 18th century, British merchant ships were the largest element in the "Middle Passage", which transported millions of slaves to the Western Hemisphere. Most of those who survived the journey wound up in the Caribbean, where the Empire had highly profitable sugar colonies and living conditions were bad (the plantation owners lived in Britain). Parliament ended the international transportation of slaves in 1807 and used the Royal Navy to enforce that ban. In 1833 it bought out the plantation owners and banned slavery. Historians before the 1940s argued that moralistic reformers such as William Wilberforce were primarily responsible.
Historical revisionism arrived when the West Indian historian Eric Williams, a Marxist, rejected this moral explanation in Capitalism and Slavery (1944) and argued that abolition had become the profitable course: a century of sugarcane raising had exhausted the soil of the islands, the plantations had become unprofitable, and it was more profitable to sell the slaves to the government than to keep up operations. The 1807 prohibition of the international trade, Williams argued, prevented French expansion on other islands. Meanwhile, British investors turned to Asia, where labor was so plentiful that slavery was unnecessary. Williams went on to argue that slavery had played a major role in making Britain prosperous: the high profits from the slave trade, he said, helped finance the Industrial Revolution, and Britain enjoyed prosperity because of the capital gained from the unpaid work of slaves.
Since the 1970s, numerous historians have challenged Williams from various angles, and Gad Heuman has concluded: "More recent research has rejected this conclusion; it is now clear that the colonies of the British Caribbean profited considerably during the Revolutionary and Napoleonic Wars." In his major attack on Williams's thesis, Seymour Drescher argues that Britain's abolition of the slave trade in 1807 resulted not from the diminishing value of slavery for Britain but instead from the moral outrage of the British voting public. Critics have also argued that slavery remained profitable in the 1830s because of innovations in agriculture, so the profit motive was not central to abolition. Richardson (1998) finds that Williams's claims regarding the Industrial Revolution are exaggerated, for profits from the slave trade amounted to less than 1% of domestic investment in Britain. Richardson further challenges claims (by African scholars) that the slave trade caused widespread depopulation and economic distress in Africa, and indeed that it caused the "underdevelopment" of Africa. Admitting the horrible suffering of slaves, he notes that many Africans benefited directly, because the first stage of the trade was always firmly in the hands of Africans. European slave ships waited at ports to purchase cargoes of people who were captured in the hinterland by African dealers and tribal leaders. Richardson finds that the "terms of trade" (how much the ship owners paid for the slave cargo) moved heavily in favor of the Africans after about 1750. That is, indigenous elites inside West and Central Africa made large and growing profits from slavery, thus increasing their wealth and power.
Economic historian Stanley Engerman finds that even without subtracting the associated costs of the slave trade (e.g., shipping costs, slave mortality, mortality of British people in Africa, defense costs) or the reinvestment of profits back into the slave trade, the total profits from the slave trade and from West Indian plantations amounted to less than 5% of the British economy during any year of the Industrial Revolution. Engerman's 5% figure gives the Williams argument as much benefit of the doubt as possible, not only because it does not take into account the associated costs of the slave trade to Britain, but also because it carries the full-employment assumption from economics and counts the gross value of slave trade profits as a direct contribution to Britain's national income. Historian Richard Pares, in an article written before Williams's book, dismisses the influence of wealth generated from the West Indian plantations upon the financing of the Industrial Revolution, stating that whatever substantial flow of investment from West Indian profits into industry there was occurred after emancipation, not before.
See also
General
Types of slavery:
Child labour/Verdingkinder/Swiss children coercion reparation initiative
Child slavery
Coolies
Debt bondage
Forced labour
Forced marriage
Gulag
Indentured servitude
Sexual slavery
Types of slave trade:
Atlantic slave trade
Barbary slave trade
Blackbirding
Coastwise slave trade
Indian Ocean slave trade
Trans-Saharan slave trade
Slavery in Africa
Asiento de Negros
Slavery in the Ottoman Empire
Swedish slave trade
White slavery
Present-day slavery:
Human trafficking
Slavery in contemporary Africa
Slavery in the 21st century
Slavery in 21st-century jihadism
People
List of famous slaves
Types of slave soldiers:
Janissary
Mamluk
Saqaliba
Ideals and organizations
Abolitionism:
Compensated emancipation
International Year to Commemorate the Struggle Against Slavery and Its Abolition
Abolitionism in the United States
Anti-Slavery International, founded as the British and Foreign Anti-Slavery Society in 1839
Anti-Slavery Society (1823–1838)
Coalition to Abolish Slavery and Trafficking
Quakers – Religious Society of Friends
Society for Effecting the Abolition of the Slave Trade (1787–1807?)
United States National Slavery Museum
Poems on Slavery by Longfellow
Other
Fazenda
History of Liverpool
History of slavery in the Muslim world
Slave-owning slaves
Slavery in the United States:
North Carolina v. Mann
Origins of the American Civil War
Slavery among Native Americans in the United States
Slavery in the colonial history of the United States
Influx of disease in the Caribbean
List of court cases in the United States involving slavery
Pedro Blanco (slave trader)
Sambo's Grave
Sante Kimes
Slave Trade Act
Slavery and religion
Slavery at common law
Timeline of abolition of slavery and serfdom
William Lynch speech
List of films featuring slavery
Notes
References
Bibliography
The Cambridge World History of Slavery, Cambridge, Cambridge University Press, 2011–2021
Volume 1: The Ancient Mediterranean World, Edited by Keith Bradley and Paul Cartledge, 2011
Volume 2: AD 500–AD 1420, Edited by Craig Perry, David Eltis, Stanley L. Engerman, David Richardson, 2021
Volume 3: AD 1420–AD 1804, Edited by David Eltis and Stanley L. Engerman, 2011
Volume 4: AD 1804–AD 2016, Edited by David Eltis, Stanley L. Engerman, Seymour Drescher and David Richardson, 2017
Davis, David Brion. Slavery and Human Progress (1984).
Davis, David Brion. The Problem of Slavery in Western Culture (1966)
Davis, David Brion. Inhuman Bondage: The Rise and Fall of Slavery in the New World (2006)
Drescher, Seymour. Abolition: A History of Slavery and Antislavery (Cambridge University Press, 2009)
Finkelman, Paul, ed. Slavery and Historiography (New York: Garland, 1989)
Finkelman, Paul, and Joseph Miller, eds. Macmillan Encyclopedia of World Slavery (2 vol 1998)
Hinks, Peter, and John McKivigan, eds. Encyclopedia of Antislavery and Abolition (2 vol. 2007) 795 pp;
Linden, Marcel van der, ed. Humanitarian Intervention and Changing Labor Relations: The Long-Term Consequences of the Abolition of the Slave Trade (Brill Academic Publishers, 2011) online review
McGrath, Elizabeth and Massing, Jean Michel, The Slave in European Art: From Renaissance Trophy to Abolitionist Emblem (London: The Warburg Institute, 2012)
Miller, Joseph C. The problem of slavery as history: a global approach (Yale University Press, 2012)
Parish, Peter J. Slavery: History and Historians (1989)
Phillips, William D. Slavery from Roman Times to the Early Atlantic Slave Trade (1984)
Rodriguez, Junius P. ed. The Historical Encyclopedia of World Slavery (2 vol. 1997)
Rodriguez, Junius P. ed. Encyclopedia of Slave Resistance and Rebellion (2 vol. 2007)
Greece and Rome
Bradley, Keith. Slavery and Society at Rome (1994)
Cuffel, Victoria. "The Classical Greek Concept of Slavery," Journal of the History of Ideas Vol. 27, No. 3 (Jul–Sep 1966), pp. 323–42
Finley, Moses, ed. Slavery in Classical Antiquity (1960)
Westermann, William L. The Slave Systems of Greek and Roman Antiquity (1955) 182 pp
Europe: Middle Ages
Rio, Alice. Slavery After Rome, 500-1100 (Oxford University Press, 2017) online review
Stark, Rodney. The victory of reason: How Christianity led to freedom, capitalism, and Western success (Random House, 2006).
Verhulst, Adriaan. "The decline of slavery and the economic expansion of the Early Middle Ages." Past & Present No. 133 (Nov., 1991), pp. 195–203 online
Africa and Middle East
Brown, Audrey, and Anthony Knapp. "NPS Ethnography: African American Heritage & Ethnography." National Parks Service, U.S. Department of the Interior, 2003 online
Campbell, Gwyn. The Structure of Slavery in Indian Ocean Africa and Asia (Frank Cass, 2004)
Davis, Robert C., Christian Slaves, Muslim Masters: White Slavery in the Mediterranean, The Barbary Coast, and Italy, 1500–1800 (Palgrave Macmillan, New York, 2003)
Hershenzon, Daniel. "Towards a connected history of bondage in the Mediterranean: Recent trends in the field." History Compass 15.8 (2017). on Christian captives
Lovejoy, Paul. Transformations in Slavery: A History of Slavery in Africa (Cambridge UP, 1983)
"The Early Cape Slave Trade." South African History Online, 2 Apr. 2015 online
Toledano, Ehud R. As If Silent and Absent: Bonds of Enslavement in the Islamic Middle East (Yale University Press, 2007)
Atlantic trade, Latin America and British Empire
Blackburn, Robin. The American Crucible: Slavery, Emancipation, and Human Rights (Verso; 2011) 498 pp; on slavery and abolition in the Americas from the 16th to the late 19th centuries.
Fradera, Josep M. and Christopher Schmidt-Nowara, eds. Slavery and Antislavery in Spain's Atlantic Empire (2013) online
Klein, Herbert S. African Slavery in Latin America and the Caribbean (Oxford University Press, 1988)
Klein, Herbert. The Atlantic Slave Trade (1970)
Klein, Herbert S. Slavery in Brazil (Cambridge University Press, 2009)
Morgan, Kenneth. Slavery and the British Empire: From Africa to America (2008)
Stinchcombe, Arthur L. Sugar Island Slavery in the Age of Enlightenment: The Political Economy of the Caribbean World (Princeton University Press, 1995)
Thomas, Hugh. The Slave Trade: The Story of the Atlantic Slave Trade: 1440–1870 (Simon & Schuster, 1997)
Walvin, James. Black Ivory: Slavery in the British Empire (2nd ed. 2001)
Ward, J.R. British West Indian Slavery, 1750–1834 (Oxford U.P. 1988)
Wright, Gavin. "Slavery and Anglo‐American capitalism revisited." Economic History Review 73.2 (2020): 353–383. online
Wyman‐McCarthy, Matthew. "British abolitionism and global empire in the late 18th century: A historiographic overview." History Compass 16.10 (2018): e12480. https://doi.org/10.1111/hic3.12480
Zeuske, Michael. "Historiography and Research Problems of Slavery and the Slave Trade in a Global-Historical Perspective." International Review of Social History 57#1 (2012): 87–111.
United States
Miller, Randall M., and John David Smith, eds. Dictionary of Afro-American Slavery (1988)
Rael, Patrick. Eighty-eight years: the long death of slavery in the United States, 1777–1865 (U of Georgia Press, 2015)
Sinha, Manisha. The slave's cause: A history of abolition (Yale University Press, 2016).
Wilson, Thomas D. The Ashley Cooper Plan: The Founding of Carolina and the Origins of Southern Political Culture. Chapel Hill, NC: University of North Carolina Press, 2016.
External links
Digital History – Slavery Facts & Myths
Teaching resources about Slavery and Abolition on blackhistory4schools.com
International Slavery Museum. Great Britain.
The Abolitionist Seminar, summaries, lesson plans, documents and illustrations for schools; focus on United States
American Abolitionism, summaries and documents; focus on United States
Development economics

Development economics is a branch of economics that deals with economic aspects of the development process in low- and middle-income countries. Its focus is not only on methods of promoting economic development, economic growth and structural change but also on improving the potential for the mass of the population, for example through health, education and workplace conditions, whether through public or private channels.
Development economics involves the creation of theories and methods that aid in the determination of policies and practices and can be implemented at either the domestic or international level. This may involve restructuring market incentives or using mathematical methods such as intertemporal optimization for project analysis, or it may involve a mixture of quantitative and qualitative methods. Common topics include growth theory, poverty and inequality, human capital, and institutions.
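As a hedged illustration of what "intertemporal optimization for project analysis" can mean in practice, consider a stylized discounted-welfare problem; the notation below is generic textbook convention, not a specific model from the development literature:

\[
\max_{\{c_t\}_{t=0}^{T}} \; \sum_{t=0}^{T} \frac{u(c_t)}{(1+\rho)^{t}} \quad \text{subject to} \quad k_{t+1} = (1+r)\,k_t + y_t - c_t,
\]

where \(c_t\) is consumption, \(u(\cdot)\) a utility function, \(\rho\) the planner's discount rate, \(k_t\) the project's capital stock, \(r\) its rate of return, and \(y_t\) exogenous income. A project is worth undertaking when it raises the maximized discounted sum, which is the same logic as a positive net present value test.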
Unlike in many other fields of economics, approaches in development economics may incorporate social and political factors to devise particular plans. Also unlike many other fields of economics, there is no consensus on what students should know. Different approaches may consider the factors that contribute to economic convergence or non-convergence across households, regions, and countries.
Theories of development economics
Mercantilism and physiocracy
The earliest Western theory of development economics was mercantilism, which developed in the 17th century, paralleling the rise of the nation state. Earlier theories had given little attention to development. For example, scholasticism, the dominant school of thought during medieval feudalism, emphasized reconciliation with Christian theology and ethics, rather than development. The 16th- and 17th-century School of Salamanca, credited as the earliest modern school of economics, likewise did not address development specifically.
Major European nations in the 17th and 18th centuries all adopted mercantilist ideals to varying degrees, the influence only ebbing with the 18th-century rise of the physiocrats in France and of classical economics in Britain. Mercantilism held that a nation's prosperity depended on its supply of capital, represented by bullion (gold, silver, and trade value) held by the state. It emphasised the maintenance of a high positive trade balance (maximising exports and minimising imports) as a means of accumulating this bullion. To achieve a positive trade balance, protectionist measures such as tariffs and subsidies to home industries were advocated. Mercantilist development theory also advocated colonialism.
Theorists most associated with mercantilism include Philipp von Hörnigk, who in his Austria Over All, If She Only Will of 1684 gave the only comprehensive statement of mercantilist theory, emphasizing production and an export-led economy. In France, mercantilist policy is most associated with 17th-century finance minister Jean-Baptiste Colbert, whose policies proved influential in later American development.
Mercantilist ideas continue in the theories of economic nationalism and neomercantilism.
Economic nationalism
Following mercantilism was the related theory of economic nationalism, promulgated in the 19th century related to the development and industrialization of the United States and Germany, notably in the policies of the American System in America and the Zollverein (customs union) in Germany. A significant difference from mercantilism was the de-emphasis on colonies, in favor of a focus on domestic production.
The names most associated with 19th-century economic nationalism are the first United States Secretary of the Treasury Alexander Hamilton, the German-American Friedrich List, and the American statesman Henry Clay. Hamilton's 1791 Report on Manufactures, his magnum opus, is the founding text of the American System, and drew from the mercantilist economies of Britain under Elizabeth I and France under Colbert. List's 1841 Das Nationale System der Politischen Ökonomie (translated into English as The National System of Political Economy) emphasized stages of growth. Hamilton professed that developing an industrialized economy was impossible without protectionism, because import duties are necessary to shelter domestic "infant industries" until they can achieve economies of scale. Such theories proved influential in the United States, which maintained much higher average tariff rates on manufactured products between 1824 and the WWII period than most other countries. Nationalist policies, including protectionism, were pursued by the American politician Henry Clay, and later by Abraham Lincoln, under the influence of the economist Henry Charles Carey.
Forms of economic nationalism and neomercantilism have also been key in Japan's development in the 19th and 20th centuries, and the more recent development of the Four Asian Tigers (Hong Kong, South Korea, Taiwan, and Singapore), and, most significantly, China.
Following Brexit and the 2016 United States presidential election, some experts have argued that a new kind of "self-seeking capitalism", popularly known as Trumponomics, could have a considerable impact on cross-border investment flows and long-term capital allocation.
Post-WWII theories
The origins of modern development economics are often traced to the need for, and likely problems with, the industrialization of eastern Europe in the aftermath of World War II. The key authors are Paul Rosenstein-Rodan, Kurt Mandelbaum, Ragnar Nurkse, and Sir Hans Wolfgang Singer. Only after the war did economists turn their concerns towards Asia, Africa, and Latin America. At the heart of these studies, by authors such as Simon Kuznets and W. Arthur Lewis, was an analysis of not only economic growth but also structural transformation.
Linear-stages-of-growth model
An early theory of development economics, the linear-stages-of-growth model was first formulated in the 1950s by W. W. Rostow in The Stages of Growth: A Non-Communist Manifesto, following the work of Marx and List. This theory modifies Marx's stages theory of development and focuses on the accelerated accumulation of capital, through the utilization of both domestic and international savings as a means of spurring investment, as the primary means of promoting economic growth and, thus, development. The linear-stages-of-growth model posits that there are a series of five consecutive stages of development that all countries must go through during the process of development. These stages are "the traditional society, the pre-conditions for take-off, the take-off, the drive to maturity, and the age of high mass-consumption". Simple versions of the Harrod–Domar model provide a mathematical illustration of the argument that improved capital investment leads to greater economic growth.
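In its simplest textbook form (a standard presentation, not spelled out in the original text), the Harrod–Domar model ties the rate of output growth to the savings rate:

$$g = \frac{s}{c}$$

where $g$ is the rate of GDP growth, $s$ the savings rate, and $c$ the capital-output ratio. Holding $c$ fixed, a higher rate of saving and investment mechanically yields faster growth, which is precisely the logic the linear-stages model relies on.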
Such theories have been criticized for not recognizing that, while necessary, capital accumulation is not a sufficient condition for development. That is to say that this early and simplistic theory failed to account for political, social, and institutional obstacles to development. Furthermore, this theory was developed in the early years of the Cold War and was largely derived from the successes of the Marshall Plan. This has led to the major criticism that the theory assumes that the conditions found in developing countries are the same as those found in post-WWII Europe.
Structural-change theory
Structural-change theory deals with policies focused on changing the economic structures of developing countries from being composed primarily of subsistence agricultural practices to being a "more modern, more urbanized, and more industrially diverse manufacturing and service economy." There are two major forms of structural-change theory: W. Arthur Lewis's two-sector surplus model, which views agrarian societies as consisting of large amounts of surplus labor which can be utilized to spur the development of an urbanized industrial sector, and Hollis Chenery's patterns of development approach, which holds that different countries become wealthy via different trajectories. The pattern that a particular country will follow, in this framework, depends on its size and resources, and potentially other factors including its current income level and comparative advantages relative to other nations. Empirical analysis in this framework studies the "sequential process through which the economic, industrial, and institutional structure of an underdeveloped economy is transformed over time to permit new industries to replace traditional agriculture as the engine of economic growth."
Structural-change approaches to development economics have faced criticism for their emphasis on urban development at the expense of rural development which can lead to a substantial rise in inequality between internal regions of a country. The two-sector surplus model, which was developed in the 1950s, has been further criticized for its underlying assumption that predominantly agrarian societies suffer from a surplus of labor. Actual empirical studies have shown that such labor surpluses are only seasonal and drawing such labor to urban areas can result in a collapse of the agricultural sector. The patterns of development approach has been criticized for lacking a theoretical framework.
International dependence theory
International dependence theories gained prominence in the 1970s as a reaction to the failure of earlier theories to lead to widespread successes in international development. Unlike earlier theories, international dependence theories have their origins in developing countries and view obstacles to development as being primarily external in nature, rather than internal. These theories view developing countries as being economically and politically dependent on more powerful, developed countries that have an interest in maintaining their dominant position. There are three different, major formulations of international dependence theory: neocolonial dependence theory, the false-paradigm model, and the dualistic-dependence model. The first formulation of international dependence theory, neocolonial dependence theory, has its origins in Marxism and views the failure of many developing nations to undergo successful development as being the result of the historical development of the international capitalist system.
Neoclassical theory
First gaining prominence with the rise of several conservative governments in the developed world during the 1980s, neoclassical theories represent a radical shift away from international dependence theories. Neoclassical theories argue that governments should not intervene in the economy; in other words, these theories claim that an unobstructed free market is the best means of inducing rapid and successful development. Competitive free markets unrestrained by excessive government regulation are seen as being able to naturally ensure that the allocation of resources occurs with the greatest efficiency possible and that economic growth is raised and stabilized.
There are several different approaches within the realm of neoclassical theory, each with subtle but important differences in their views regarding the extent to which the market should be left unregulated. These different takes on neoclassical theory are the free-market approach, public-choice theory, and the market-friendly approach. Of the three, both the free-market approach and public-choice theory contend that the market should be totally free, meaning that any intervention by the government is necessarily bad. Public-choice theory is arguably the more radical of the two with its view, closely associated with libertarianism, that governments themselves are rarely good and therefore should be as minimal as possible.
Academic economists have given varied policy advice to governments of developing countries. See, for example, Economy of Chile (Arnold Harberger) and Economic history of Taiwan (Sho-Chieh Tsiang). Anne Krueger noted in 1996 that success and failure of policy recommendations worldwide had not consistently been incorporated into prevailing academic writings on trade and development.
The market-friendly approach, unlike the other two, is a more recent development and is often associated with the World Bank. This approach still advocates free markets but recognizes that there are many imperfections in the markets of many developing nations and thus argues that some government intervention is an effective means of fixing such imperfections.
Topics of research
Development economics also includes topics such as third world debt and the functions of such organisations as the International Monetary Fund and World Bank. In fact, the majority of development economists are employed by, consult for, or receive funding from institutions like the IMF and the World Bank. Many such economists are interested in ways of promoting stable and sustainable growth in poor countries and areas, by promoting domestic self-reliance and education in some of the lowest-income countries in the world. Where economic issues merge with social and political ones, the field is referred to as development studies.
Geography and development
Economists Jeffrey D. Sachs, Andrew Mellinger, and John Gallup argue that a nation's geographical location and topography are key determinants and predictors of its economic prosperity. Areas developed along the coast and near "navigable waterways" are far wealthier and more densely populated than those further inland. Furthermore, countries outside the tropic zones, which have more temperate climates, have also developed considerably more than those located within the Tropic of Cancer and the Tropic of Capricorn. These climates outside the tropic zones, described as "temperate-near," hold roughly a quarter of the world's population and produce more than half of the world's GNP, yet account for only 8.4% of the world's inhabited area. Understanding of these different geographies and climates is imperative, they argue, because future aid programs and policies to facilitate economic development must account for these differences.
Economic development and ethnicity
A growing body of research has been emerging among development economists since the very late 20th century, focusing on interactions between ethnic diversity and economic development, particularly at the level of the nation-state. While most research looks at empirical economics at both the macro and the micro level, this field of study has a particularly heavy sociological approach. The more conservative branch of research focuses on tests for causality in the relationship between different levels of ethnic diversity and economic performance, while a smaller and more radical branch argues for the role of neoliberal economics in enhancing or causing ethnic conflict. Moreover, comparing these two theoretical approaches brings the issue of endogeneity into question. This remains a highly contested and uncertain field of research, as well as politically sensitive, largely due to its possible policy implications.
The role of ethnicity in economic development
Much discussion among researchers centers around defining and measuring two key but related variables: ethnicity and diversity. It is debated whether ethnicity should be defined by culture, language, or religion. While conflicts in Rwanda were largely along tribal lines, Nigeria's string of conflicts is thought to be – at least to some degree – religiously based. Some have proposed that, as the saliency of these different ethnic variables tends to vary over time and across geography, research methodologies should vary according to the context. Somalia provides an interesting example. Because about 85% of its population defined themselves as Somali, Somalia was considered to be a rather ethnically homogeneous nation. However, civil war caused ethnicity (or ethnic affiliation) to be redefined according to clan groups.
There is also much discussion in academia concerning the creation of an index for "ethnic heterogeneity". Several indices have been proposed in order to model ethnic diversity (with regard to conflict). Easterly and Levine have proposed an ethno-linguistic fractionalization index, known as FRAC or ELF, defined by

$$\mathrm{ELF} = 1 - \sum_{i=1}^{N} s_i^{2}$$

where $s_i$ is the size of group $i$ as a percentage of the total population. The ELF index is a measure of the probability that two randomly chosen individuals belong to different ethno-linguistic groups. Other researchers have also applied this index to religious rather than ethno-linguistic groups. Though commonly used, Alesina and La Ferrara point out that the ELF index fails to account for the possibility that fewer large ethnic groups may result in greater inter-ethnic conflict than many small ethnic groups. More recently, researchers such as Montalvo and Reynal-Querol have put forward the Q polarization index as a more appropriate measure of ethnic division. Based on a simplified adaptation of a polarization index developed by Esteban and Ray, the Q index is defined as

$$Q = 1 - \sum_{i=1}^{N} \left(\frac{1/2 - s_i}{1/2}\right)^{2} s_i$$

where $s_i$ once again represents the size of group $i$ as a percentage of the total population; the squared weighting is intended to capture the social distance between existing ethnic groups within an area.
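As an illustration (a minimal sketch, not from the original article; the group shares are hypothetical and assumed to sum to 1), the following Python snippet computes both indices and shows the divergence Alesina and La Ferrara point to: splitting a population into ever more small groups drives ELF up while Q falls.

```python
# The ELF fractionalization index and the Montalvo & Reynal-Querol
# Q polarization index described above, applied to hypothetical shares.

def elf(shares):
    """Probability that two randomly chosen individuals belong
    to different groups: ELF = 1 - sum(s_i^2)."""
    return 1.0 - sum(s ** 2 for s in shares)

def q_polarization(shares):
    """Q = 1 - sum(((1/2 - s_i) / (1/2))^2 * s_i); maximal (Q = 1)
    when two groups of equal size face each other."""
    return 1.0 - sum(((0.5 - s) / 0.5) ** 2 * s for s in shares)

for shares in ([0.5, 0.5], [0.25] * 4, [0.1] * 10):
    print(shares, round(elf(shares), 2), round(q_polarization(shares), 2))

# Two equal blocs:   ELF 0.50, Q 1.00 (maximal polarization)
# Four equal groups: ELF 0.75, Q 0.75
# Ten equal groups:  ELF 0.90, Q 0.36 (high fractionalization, low polarization)
```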
Early researchers, such as Jonathan Pool, considered a concept dating back to the account of the Tower of Babel: that linguistic unity may allow for higher levels of development. While pointing out obvious oversimplifications and the subjectivity of definitions and data collection, Pool suggested that we had yet to see a robust economy emerge from a nation with a high degree of linguistic diversity. In his research, Pool used the "size of the largest native-language community as a percentage of the population" as his measure of linguistic diversity. Not much later, however, Horowitz pointed out that both highly diverse and highly homogeneous societies exhibit less conflict than those in between. Similarly, Collier and Hoeffler provided evidence that both highly homogenous and highly heterogeneous societies exhibit lower risk of civil war, while societies that are more polarized are at greater risk. As a matter of fact, their research suggests that a society with only two ethnic groups is about 50% more likely to experience civil war than either of the two extremes. Nonetheless, Mauro points out that ethno-linguistic fractionalization is positively correlated with corruption, which in turn is negatively correlated with economic growth. Moreover, in a study on economic growth in African countries, Easterly and Levine find that linguistic fractionalization plays a significant role in reducing national income growth and in explaining poor policies. In addition, empirical research in the U.S., at the municipal level, has revealed that ethnic fractionalization (based on race) may be correlated with poor fiscal management and lower investments in public goods. Finally, more recent research proposes that ethno-linguistic fractionalization is indeed negatively correlated with economic growth, while more polarized societies exhibit greater public consumption, lower levels of investment, and more frequent civil wars.
Economic development and its impact on ethnic conflict
Increasingly, attention is being drawn to the role of economics in spawning or cultivating ethnic conflict. Critics of earlier development theories, mentioned above, point out that "ethnicity" and ethnic conflict cannot be treated as exogenous variables. There is a body of literature that discusses how economic growth and development, particularly in the context of a globalizing world characterized by free trade, appears to be leading to the extinction and homogenization of languages. Manuel Castells asserts that the "widespread destructuring of organizations, delegitimation of institutions, fading away of major social movements, and ephemeral cultural expressions" which characterize globalization lead to a renewed search for meaning; one that is based on identity rather than on practices. Barber and Lewis argue that culturally-based movements of resistance have emerged as a reaction to the threat of modernization (perceived or actual) and neoliberal development.
On a different note, Chua suggests that ethnic conflict often results from the envy of the majority toward a wealthy minority which has benefited from trade in a neoliberal world. She argues that conflict is likely to erupt through political manipulation and the vilification of the minority. Prasch points out that, as economic growth often occurs in tandem with increased inequality, ethnic or religious organizations may be seen as both assistance and an outlet for the disadvantaged. However, empirical research by Piazza argues that economics and unequal development have little to do with social unrest in the form of terrorism. Rather, "more diverse societies, in terms of ethnic and religious demography, and political systems with large, complex, multiparty systems were more likely to experience terrorism than were more homogeneous states with few or no parties at the national level".
Recovery from conflict (civil war)
Violent conflict and economic development are deeply intertwined. Paul Collier describes how poor countries are more prone to civil conflict, and the conflict lowers incomes, catching countries in a "conflict trap." Violent conflict destroys physical capital (equipment and infrastructure), diverts valuable resources to military spending, discourages investment, and disrupts exchange.
Recovery from civil conflict is very uncertain. Countries that maintain stability can experience a "peace dividend" through the rapid re-accumulation of physical capital (investment flows back to the recovering country because of the high return). However, successful recovery depends on the quality of the legal system and the protection of private property. Investment is more productive in countries with higher-quality institutions. Firms that experienced a civil war were more sensitive to the quality of the legal system than similar firms that had never been exposed to conflict.
Growth indicator controversy
Per capita Gross Domestic Product (GDP per head), real income, median income, and disposable income are used by many development economists as approximations of general national well-being. However, these measures are criticized as not measuring economic growth well enough, especially in countries where much economic activity is not part of measured financial transactions (such as housekeeping and self-homebuilding), or where funding is not available for accurate measurements to be made publicly available for other economists to use in their studies (including private and institutional fraud, in some countries).
Even though per-capita GDP as measured can make economic well-being appear smaller than it really is in some developing countries, the discrepancy could be even bigger in a developed country, where people may perform services outside of financial transactions that are of higher value than housekeeping or homebuilding, such as counseling, lifestyle coaching, home décor advice, and time management, whether as gifts or within their own households. Even free choice can be considered to add value to lifestyles without necessarily increasing the financial transaction amounts.
More recent theories of Human Development have begun to look beyond purely financial measures of development, for example with measures such as the availability of medical care, education, equality, and political freedom. One measure used is the Genuine Progress Indicator, which relates strongly to theories of distributive justice. Actual knowledge about what creates growth remains largely unproven; however, recent advances in econometrics and more accurate measurements in many countries are creating new knowledge by compensating for the effects of variables to determine probable causes out of merely correlational statistics.
Recent developments
Recent theories revolve around questions about what variables or inputs correlate with or affect economic growth the most: elementary, secondary, or higher education, government policy stability, tariffs and subsidies, fair court systems, available infrastructure, availability of medical care, prenatal care and clean water, ease of entry into and exit from trade, and equality of income distribution (for example, as indicated by the Gini coefficient), as well as how to advise governments about macroeconomic policies, which include all policies that affect the economy.
Education enables countries to adapt the latest technology and creates an environment for new innovations.
The cause of limited growth and divergence in economic growth lies in the high rate of acceleration of technological change in a small number of developed countries. These countries' acceleration of technology was due to increased incentive structures for mass education, which in turn created a framework for the population to create and adapt new innovations and methods. Furthermore, the content of their education was composed of secular schooling that resulted in higher productivity levels and modern economic growth.
Researchers at the Overseas Development Institute (ODI) also highlight the importance of using economic growth to improve the human condition, raising people out of poverty and achieving the Millennium Development Goals. Despite research showing almost no relation between growth and the achievement of goals 2 to 7, and statistics showing that during periods of growth poverty levels have in some cases actually risen (e.g., Uganda grew by 2.5% annually between 2000 and 2003, yet poverty levels rose by 3.8%), researchers at the ODI suggest growth is necessary but must be equitable. This concept of inclusive growth is shared even by key world leaders such as former UN Secretary-General Ban Ki-moon, who emphasises that:
"Sustained and equitable growth based on dynamic structural economic change is necessary for making substantial progress in reducing poverty. It also enables faster progress towards the other Millennium Development Goals. While economic growth is necessary, it is not sufficient for progress on reducing poverty."
Researchers at the ODI thus emphasise the need to ensure social protection is extended to allow universal access and that active policy measures are introduced to encourage the private sector to create new jobs as the economy grows (as opposed to jobless growth) and seek to employ people from disadvantaged groups.
Notable development economists
Mahbub ul Haq, Minister of Finance of the Islamic Republic of Pakistan and special advisor at the UNDP.
Muhammad Yunus, founder of Grameen Bank and Nobel Peace Prize laureate.
Daron Acemoglu, professor of economics at the Massachusetts Institute of Technology, and Clark Medal winner.
Philippe Aghion, professor of economics at the London School of Economics and Collège de France; co-authored a textbook on economic growth, advanced Schumpeterian growth theory, and, with Peter Howitt, established creative destruction theory mathematically.
Nava Ashraf, professor of economics at the London School of Economics.
Oriana Bandiera, professor of economics at the London School of Economics and Director of the International Growth Centre.
Abhijit Banerjee, professor of economics at the Massachusetts Institute of Technology and Director of Abdul Latif Jameel Poverty Action Lab, co-recipient of the 2019 Nobel Memorial Prize in Economic Sciences.
Pranab Bardhan, professor of economics at the University of California, Berkeley, author of texts in both trade and development economics, and editor of the Journal of Development Economics from 1985 to 2003.
Kaushik Basu, professor of economics at Cornell University and author of Analytical Development Economics.
Peter Thomas Bauer, former professor of economics at the London School of Economics, author of Dissent on Development.
Tim Besley, professor of economics at the London School of Economics, and commissioner of the UK National Infrastructure Commission.
Jagdish Bhagwati, professor of economics and law at Columbia University.
Nancy Birdsall is the founding president of the Center for Global Development (CGD) in Washington, DC, USA, and former executive vice-president of the Inter-American Development Bank.
David E. Bloom, professor of economics and demography at the Harvard School of Public Health.
François Bourguignon, professor of economics and Director of the Paris School of Economics.
Robin Burgess, professor of economics at the London School of Economics and Director of the International Growth Centre.
Francesco Caselli, professor of economics at the London School of Economics.
Paul Collier, author of The Bottom Billion which attempts to tie together a series of traps to explain the self-fulfilling nature of poverty at the lower end of the development scale.
Michael B. Connolly, development economist and university professor.
Partha Dasgupta, professor of economics at the University of Cambridge.
Dave Donaldson, professor of economics at the Massachusetts Institute of Technology and Clark Medal winner.
Angus Deaton, professor of economics at Princeton University and winner of the Nobel Prize in Economics.
Melissa Dell, professor of economics at Harvard University and Clark Medal winner.
Simeon Djankov, research fellow at the Financial Markets Group of the London School of Economics.
Esther Duflo, Director of the Abdul Latif Jameel Poverty Action Lab, professor of economics at the Massachusetts Institute of Technology, 2009 MacArthur Fellow, 2010 Clark Medal winner, advocate for field experiments, and co-recipient of the 2019 Nobel Memorial Prize in Economic Sciences.
William Easterly, author of The Elusive Quest for Growth: Economists' Adventures and Misadventures in the Tropics and White Man's Burden: How the West's Efforts to Aid the Rest Have Done So Much Ill and So Little Good.
Oded Galor, Israeli-American economist at Brown University; editor-in-chief of the Journal of Economic Growth, the principal journal in economic growth. Developer of the unified growth theory, the newest alternative to theories of endogenous growth.
Maitreesh Ghatak, professor of economics at the London School of Economics.
Peter Howitt, Canadian economist at Brown University; past president of the Canadian Economics Association, introduced the concept of Schumpeterian growth and established creative destruction theory mathematically with Philippe Aghion.
Seema Jayachandran, professor of economics at Northwestern University.
Dean Karlan, American economist at Northwestern University; co-director of the Global Poverty Research Lab at the Buffett Institute for Global Studies; founded Innovations for Poverty Action (IPA), a New Haven, Connecticut, based research outfit dedicated to creating and evaluating solutions to social and international development problems.
Michael Kremer, University Professor at the University of Chicago, co-recipient of the 2019 Nobel Memorial Prize in Economic Sciences.
Eliana La Ferrara, professor at Harvard University's Kennedy School of Government.
W. Arthur Lewis, winner of the 1979 Nobel Prize in Economics for work in development economics.
Justin Yifu Lin, Chinese economist at Peking University; former chief economist of World Bank, one of the most prominent Chinese economists.
Sendhil Mullainathan, professor of computation and behavioural science at the University of Chicago Booth School of Business.
Nathan Nunn, professor of economics at Harvard University.
Benjamin Olken, professor of economics at the Massachusetts Institute of Technology.
Rohini Pande, professor of economics at Yale University.
Lant Pritchett, professor at Harvard University's Kennedy School of Government, who has held several prominent research positions at the World Bank.
Nancy Qian, professor of economics at Northwestern University.
Kate Raworth, Senior Research Associate at the Environmental Change Institute of the University of Oxford, author of Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist, formerly economist for the United Nations Development Programme's Human Development Report and Senior Researcher at Oxfam.
James Robinson, professor of economics at the University of Chicago Harris School of Public Policy Studies.
Dani Rodrik, professor at Harvard University's Kennedy School of Government, has written extensively on globalization.
Mark Rosenzweig, professor at Yale University and director of the Economic Growth Center at Yale.
Jeffrey Sachs, professor at Columbia University, author of The End of Poverty: Economic Possibilities for Our Time and Common Wealth: Economics for a Crowded Planet.
Amartya Sen, Indian economist, first Asian Nobel Prize winner for economics, author of Development as Freedom, known for incorporating philosophical components into economic models.
Nicholas Stern, professor of economics at the London School of Economics, former President of the British Academy and former World Bank Chief Economist.
Joseph Stiglitz, professor at Columbia University and Nobel Prize winner and former chief economist at the World Bank.
John Sutton, emeritus professor of economics at the London School of Economics.
Erik Thorbecke, co-originator of the Foster–Greer–Thorbecke poverty measure, who also played a significant role in the development and popularization of the social accounting matrix.
Michael Todaro, known for the Todaro and Harris–Todaro models of migration and urbanization; Economic Development.
Robert M. Townsend, professor at the Massachusetts Institute of Technology known for his Thai Project, a model for many other applied and theoretical projects in economic development.
Anthony Venables, professor of economics at the University of Oxford.
Hernando de Soto, author of The Other Path: The Economic Answer to Terrorism and The Mystery of Capital: Why Capitalism Triumphs in the West and Fails Everywhere Else.
Steven Radelet, professor at Georgetown University and author of The Great Surge: The Ascent of the Developing World.
See also
Chinese economic reform
Democracy and economic growth
Demographic economics
Dependency theory
Development Cooperation Issues
Development Cooperation Stories
Development Cooperation Testimonials
Development studies
Development wave
Environmental determinism
Human Development and Capability Association
International Association for Feminist Economics
International Monetary Fund
International development
Important publications in development economics
Economic development
International development
UN Human Development Index
Gini coefficient
Lorenz curve
Harrod–Domar model
Debt relief
Human security
Kaldor's growth laws
The Poverty of "Development Economics"
Social development
Sustainable development
Women's education and development
Footnotes
Bibliography
Development Economics through the Decades: A Critical Look at 30 Years of the World Development Report, World Bank Publications, Washington DC (2009).
The Complete World Development Report, 1978–2009 (Single User DVD): 30th Anniversary Edition, World Bank Publications, Washington DC (2009).
Behrman, J.R. (2001). "Development, Economics of," International Encyclopedia of the Social & Behavioral Sciences, pp. 3566–3574 Abstract.
Easterly, William (2002), Elusive Quest for Growth: Economists' Adventures and Misadventures in the Tropics, The MIT Press
Ben Fine and Jomo K.S. (eds., 2005), The New Development Economics: Post Washington Consensus Neoliberal Thinking, Zed Books
Peter Griffiths (2003), The Economist's Tale: A Consultant Encounters Hunger and the World Bank, Zed Books
K.S. Jomo (2005), Pioneers of Development Economics: Great Economists on Development, Zed Books – the contributions of economists such as Marshall and Keynes, not normally considered development economists
Gerald M. Meier (2005), Biography of a Subject: An Evolution of Development Economics, Oxford University Press
Gerald M. Meier, Dudley Seers [editors] (1984), Pioneers in Development, World Bank
Dwight H. Perkins, Steven Radelet, Donald R. Snodgrass, Malcolm Gillis and Michael Roemer (2001). Economics of Development, 5th edition, New York: W. W. Norton.
Jeffrey D. Sachs (2005), The End of Poverty: Economic Possibilities for Our Time, Penguin Books
Debraj Ray (1998). Development Economics, Princeton University Press. Other editions: Spanish, Antoni Bosch, 2002; Chinese, Beijing University Press, 2002; Indian, Oxford, 1998. Description, table of contents, and excerpt, ch. 1.
World Institute for Development Economics Research Publications/Discussion Papers
Michael Todaro and Stephen C. Smith, Economic Development, 10th Ed., Addison-Wesley, 2008. Description.
Handbook of Development Economics, Elsevier. Description and table of contents:
Hollis B. Chenery and T. N. Srinivasan, eds. (1988, 1989). Vol. 1 and 2
Jere Behrman and T.N. Srinivasan, eds. (1995). Vol 3A and 3B
T. Paul Schultz and John Strauss, eds. (2008). Vol 4
Dani Rodrik and Mark R. Rosenzweig, eds. (2009). Vol 5
External links
Development Economics and Economic Development, a list of resources for development economics.
Technology in emerging economies (The Economist).
Top 10% institutions in the field of Development, a list of research institutions specialized in Development at Ideas.Repec
Economics
Economic globalization
Historical demography
Historical demography is the quantitative study of human population in the past. It is concerned with population size, with the three basic components of population change (fertility, mortality, and migration), and with population characteristics related to those components, such as marriage, socioeconomic status, and the configuration of families.
Sources
The sources of historical demography vary according to the period and topics of the study.
For the recent period - beginning in the early nineteenth century in most European countries, and later in the rest of the world - historical demographers make use of data collected by governments, including censuses and vital statistics.
In the early modern period, historical demographers rely heavily on ecclesiastical records of baptisms, marriages, and burials, using methods developed by the French historian Louis Henry, as well as hearth and poll tax records. In 1749, the first population census covering a whole country was conducted in the Kingdom of Sweden, which then included today's Finland.
For population size, sources can also include the size of cities and towns, the size and density of smaller settlements (relying on field survey techniques), the presence or absence of agriculture on marginal land, and inferences from historical records. For population health and life expectancy, paleodemography, based on the study of skeletal remains, is another important approach for populations that precede the modern era, as is the study of ages of death recorded on funerary monuments.
The PUMS (Public Use Microdata Samples) data set allows researchers to analyze contemporary and historical data sets.
Development of techniques
Historical analysis has played a central role in the study of population, from Thomas Malthus in the eighteenth century to major twentieth-century demographers such as Ansley Coale and Samuel H. Preston. The French historian Louis Henry (1911-1991) was chiefly responsible for the development of historical demography as a distinct subfield of demography. In recent years, new research in historical demography has proliferated owing to the development of massive new population data collections, including the Demographic Data Base in Umeå, Sweden, the Historical Sample of the Netherlands, and the Integrated Public Use Microdata Series (IPUMS).
According to Willigan and Lynch, the main sources used by demographic historians include archaeological methods, parish registers starting about 1500 in Europe, civil registration records, enumerations, national censuses beginning about 1800, genealogies and family reconstitution studies, population registers, and organizational and institutional records. Statistical methods have included model life tables, time series analysis, event history analysis, and causal model building and hypothesis testing, as well as theories of the demographic transition and the epidemiological transition.
References
Further reading
Alter, George C. "Generation to Generation Life Course, Family, and Community." Social Science History (2013) 37#1 pp: 1-26. abstract
Alter, George C., et al. "Introduction: Longitudinal analysis of historical-demographic data." Journal of Interdisciplinary History (2012) 42#4 pp: 503-517. Online
Alter, George C., et al. "Completing Life Histories with Imputed Exit Dates: A Method for Historical Data from Passive Registration Systems," Population (2009) 64:293–318.
Arriaga, Eduardo E. "A New Approach to the Measurements of Urbanization" Economic Development & Cultural Change (1970) 18#2 pp 206–18 in JSTOR
Coale, Ansley J. Regional Model Life Tables and Stable Populations (2nd ed. 1983)
Fauve-Chamoux, Antoinette. "A Personal Account of the History of Historical Demography in Europe at the End of the Glorious Thirty (1967-1975)." Essays in Economic & Business History 35.1 (2017): 175-205.
Gutmann, Myron P. et al. eds. Navigating Time and Space in Population Studies (2012) excerpt and text search
Henry, Louis. Population: analysis and models (London: Edward Arnold, 1976)
Henry, Louis. On the measurement of human fertility: selected writings of Louis Henry (Elsevier Pub. Co, 1972)
Henry, Louis. "The verification of data in historical demography." Population studies 22.1 (1968): 61-81.
Nusteling, Hubert. "Fertility in historical demography and a homeostatic method for reconstituting populations in pre-statistical periods." Historical Methods: A Journal of Quantitative and Interdisciplinary History (2005) 38#3 pp: 126-142. DOI:10.3200/HMTS.38.3.126-142
Smith, Daniel Scott. "A perspective on demographic methods and effects in social history." William and Mary Quarterly (1982 ): 442-468. in JSTOR
Reher, David S., and Roger Schofield. Old and new methods in historical demography (Clarendon Press 1993), 426 pp.
Swanson, David A. and Jacob S. Siegel. The Methods and Materials of Demography (2nd ed. 2004); rewritten version of Henry S. Shryock and Jacob S. Siegel, The Methods and Materials of Demography (1976); compendium of techniques
Swedlund, Alan C. "Historical demography as population ecology." Annual Review of Anthropology (1978) pp: 137-173.
van de Walle, Etienne. "Historical Demography" in Dudley L. Poston and Michael Micklin, eds. Handbook of Population (Springer US, 2005) pp 577–600
Watkins, Susan Cotts, and Myron P. Gutmann. "Methodological issues in the use of population registers for fertility analysis." Historical Methods: A Journal of Quantitative and Interdisciplinary History (1983) 16#3: 109-120.
Willigan, J. Dennis, and Katherine A. Lynch, Sources and Methods of Historical Demography, (New York: Academic Press, 1982) 505 p. Abstract
Wrigley, E. A., ed. An Introduction to English Historical Demography, London: Weidenfeld & Nicolson, 1966.
External links
International Commission for Historical Demography
H-Demog, an international scholarly online discussion list on demographic history
POPULATION STATISTICS in historical perspective
Fields of history
Population
Societal collapse
Societal collapse (also known as civilizational collapse or systems collapse) is the fall of a complex human society characterized by the loss of cultural identity and of social complexity as an adaptive system, the downfall of government, and the rise of violence. Possible causes of a societal collapse include natural catastrophe, war, pestilence, famine, economic collapse, population decline or overshoot, mass migration, incompetent leaders, and sabotage by rival civilizations. A collapsed society may revert to a more primitive state, be absorbed into a stronger society, or completely disappear.
Virtually all civilizations have suffered such a fate, regardless of their size or complexity. Most never recovered, such as the Western and Eastern Roman Empires, the Maya civilization, and the Easter Island civilization. However, some of them later revived and transformed, such as China, Greece, and Egypt.
Anthropologists, historians, and sociologists have proposed a variety of explanations for the collapse of civilizations involving causative factors such as environmental degradation, depletion of resources, costs of rising complexity, invasion, disease, decay of social cohesion, growing inequality, extractive institutions, long-term decline of cognitive abilities, loss of creativity, and misfortune. However, complete extinction of a culture is not inevitable, and in some cases, the new societies that arise from the ashes of the old one are evidently its offspring, despite a dramatic reduction in sophistication. Moreover, the influence of a collapsed society, such as the Western Roman Empire, may linger on long after its death.
The study of societal collapse, collapsology, is a topic for specialists of history, anthropology, sociology, and political science. More recently, they have been joined by experts in cliodynamics and the study of complex systems.
Concept
Joseph Tainter frames societal collapse in The Collapse of Complex Societies (1988), a seminal and founding work of the academic discipline on societal collapse. He elaborates that 'collapse' is a "broad term," but in the sense of societal collapse, he views it as "a political process." He further defines societal collapse as a rapid process (within a "few decades") of "substantial loss of sociopolitical structure," giving the fall of the Western Roman Empire as "the most widely known instance of collapse" in the Western world.
Others, particularly in response to the popular Collapse (2005) by Jared Diamond, have more recently argued that societies discussed as cases of collapse are better understood through resilience and societal transformation, or "reorganization", especially if collapse is understood as a "complete end" of political systems, which according to Shmuel Eisenstadt has not taken place at any point. Eisenstadt also points out that a clear differentiation between total or partial decline and "possibilities of regeneration" is crucial for the preventive purpose of the study of societal collapse. This frame of reference often rejects the term collapse and critiques the notion that cultures simply vanish when the political structures that organize labor for large, archaeologically prominent projects do. For example, while the Ancient Maya are often touted as a prime example of collapse, in reality this reorganization was simply the result of the removal of the political system of divine kingship, largely in the eastern lowlands, as many cities in the western highlands of Mesoamerica maintained this system of divine kingship into the 16th century. The Maya continue to maintain cultural and linguistic continuity into the present day.
Societal longevity
The social scientist Luke Kemp analyzed dozens of civilizations, which he defined as "a society with agriculture, multiple cities, military dominance in its geographical region and a continuous political structure," from 3000 BC to 600 AD and calculated that the average life span of a civilization is close to 340 years. Of them, the most durable were the Kushite Kingdom in Northeast Africa (1,150 years), the Aksumite Empire in East Africa (1,100 years), and the Vedic civilization in South Asia and the Olmecs in Mesoamerica (both 1,000 years), and the shortest-lived were the Nanda Empire in India (24) and the Qin dynasty in China (14).
A statistical analysis of empires by complex systems specialist Samuel Arbesman suggests that collapse is generally a random event and does not depend on age. That is analogous to what evolutionary biologists call the Red Queen hypothesis, which asserts that for a species in a harsh ecology, extinction is a persistent possibility.
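One way to formalize this finding (a standard probability argument, not spelled out in the original text): if collapse strikes with a constant hazard rate $\lambda$, lifetimes are exponentially distributed and survival is memoryless,

$$P(T > t + s \mid T > t) = \frac{e^{-\lambda (t+s)}}{e^{-\lambda t}} = e^{-\lambda s} = P(T > s),$$

so an empire that has already endured for centuries faces the same risk of collapse over the next century as a newly founded one.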
Contemporary discussions about societal collapse seek resilience by suggesting societal transformation.
Causes of collapse
Because human societies are complex systems, common economic, environmental, demographic, social, and cultural factors may contribute to their decline, and these may cascade into one another and build up to the point where they overwhelm any mechanisms that would otherwise maintain stability. Unexpected and abrupt changes, which experts call nonlinearities, are some of the warning signs. In some cases, a natural disaster (such as a tsunami, earthquake, pandemic, massive fire, or climate change) may precipitate a collapse. Other factors, such as a Malthusian catastrophe, overpopulation, or resource depletion, might be contributory factors of collapse, but studies of past societies seem to suggest that those factors did not cause the collapse alone. Significant inequity and exposed corruption may combine with lack of loyalty to established political institutions and result in an oppressed lower class rising up and seizing power from a smaller wealthy elite in a revolution. The diversity of forms that societies evolve corresponds to diversity in their failures. Jared Diamond suggests that societies have also collapsed through deforestation, loss of soil fertility, restrictions of trade, and/or rising endemic violence.
In the case of the Western Roman Empire, some argued that it did not collapse but merely transformed.
Natural disasters and climate change
Archeologists have identified signs of a megadrought which lasted for a millennium between 5,000 and 4,000 years ago in Africa and Asia. The drying of the Green Sahara not only turned it into a desert but also disrupted the monsoon seasons in South and Southeast Asia and caused flooding in East Asia, which prevented successful harvests and the development of complex culture. It coincided with and may have caused the decline and the fall of the Akkadian Empire in Mesopotamia and the Indus Valley Civilization. The dramatic shift in climate is known as the 4.2-kiloyear event.
The highly advanced Indus Valley Civilization took root around 3000 BC in what is now northwestern India and Pakistan and collapsed around 1700 BC. Since the Indus script has yet to be deciphered, the causes of its de-urbanization remain a mystery, but there is some evidence pointing to natural disasters. Signs of a gradual decline began to emerge in 1900 BC, and two centuries later, most of the cities had been abandoned. Archeological evidence suggests an increase in interpersonal violence and in infectious diseases like leprosy and tuberculosis. Historians and archeologists believe that severe and long-lasting drought and a decline in trade with Egypt and Mesopotamia caused the collapse. Evidence for earthquakes has also been discovered, and sea level changes have been found at two possible seaport sites along the Makran coast, which are now inland. Earthquakes may have contributed to the decline of several sites by direct shaking damage or by changes in sea level or in water supply.
Volcanic eruptions can abruptly influence the climate. During a large eruption, sulfur dioxide (SO2) is expelled into the stratosphere, where it could stay for years and gradually get oxidized into sulfate aerosols. Being highly reflective, sulfate aerosols reduce the incident sunlight and cool the Earth's surface. By drilling into glaciers and ice sheets, scientists can access the archives of the history of atmospheric composition. A team of multidisciplinary researchers led by Joseph McConnell of the Desert Research Institute in Reno, Nevada deduced that a volcanic eruption occurred in 43 BC, a year after the assassination of Julius Caesar on the Ides of March (15 March) in 44 BC, which left a power vacuum and led to bloody civil wars. According to historical accounts, it was also a period of poor weather, crop failure, widespread famine, and disease. Analyses of tree rings and cave stalagmites from different parts of the globe provided complementary data. The Northern Hemisphere got drier, but the Southern Hemisphere became wetter. Indeed, the Greek historian Appian recorded that there was a lack of flooding in Egypt, which also faced famine and pestilence. Rome's interest in Egypt as a source of food intensified, and the aforementioned problems and civil unrest weakened Egypt's ability to resist. Egypt came under Roman rule after Cleopatra committed suicide in 30 BC. While it is difficult to say for certain whether Egypt would have become a Roman province if Okmok volcano (in modern-day Alaska) had not erupted, the eruption likely hastened the process.
More generally, recent research has pointed to climate change as a key player in the decline and fall of historical societies in China, the Middle East, Europe, and the Americas. In fact, paleoclimatological temperature reconstruction suggests that historical periods of social unrest, societal collapse, and population crash often occurred simultaneously with significant climate change. A team of researchers from Mainland China and Hong Kong were able to establish a causal connection between climate change and large-scale human crises in pre-industrial times. Short-term crises may be caused by social problems, but climate change was the ultimate cause of major crises, starting with economic depressions. Moreover, since agriculture is highly dependent on climate, any changes to the regional climate from the optimum can induce crop failures.
After around 1130, North America had significant climatic change in the form of a 300-year period of aridity called the Great Drought. The Mississippian culture collapsed during this period. The Ancestral Puebloans left their established homes in the 12th and 13th centuries. Current scholarly consensus is that Ancestral Puebloans responded to pressure from Numic-speaking peoples moving onto the Colorado Plateau, as well as climate change that resulted in agricultural failures.
The Mongol conquests corresponded to a period of cooling in the Northern Hemisphere between the thirteenth and fourteenth centuries, when the Medieval Warm Period was giving way to the Little Ice Age, which caused ecological stress. In Europe, the cooling climate did not directly facilitate the Black Death, but it caused wars, mass migration, and famine, which helped diseases spread.
A more recent example is the General Crisis of the Seventeenth Century in Europe, which was a period of inclement weather, crop failure, economic hardship, extreme intergroup violence, and high mortality because of the Little Ice Age. The Maunder Minimum involved sunspots being exceedingly rare. Episodes of social instability track the cooling with a time lag of up to 15 years, and many developed into armed conflicts, such as the Thirty Years' War (1618–1648), which started as a war of succession to the Bohemian throne. Animosity between Protestants and Catholics in the Holy Roman Empire (in modern-day Germany) added fuel to the fire. Soon, it escalated to a huge conflict that involved all major European powers and devastated much of Germany. When the war had ended, some regions of the empire had seen their populations drop by as much as 70%. However, not all societies faced crises during this period. Tropical countries with high carrying capacities and trading economies did not suffer much because the changing climate did not induce an economic depression in those places.
Foreign invasions and mass migration
Between ca. 4000 and 3000 BCE, neolithic populations in western Eurasia declined, probably due to the plague and other viral hemorrhagic fevers. This decline was followed by the Indo-European migrations. Around 3000 BC, people of the pastoralist Yamnaya culture from the Pontic–Caspian steppe, who had high levels of Western Steppe Herder (WSH) ancestry, embarked on a massive expansion throughout Eurasia, which most contemporary linguists, archaeologists, and geneticists consider to be associated with the dispersal of the Indo-European languages. The expansion of WSHs resulted in the virtual disappearance of the Y-DNA of Early European Farmers (EEFs) from the European gene pool, significantly altering the cultural and genetic landscape of Europe. EEF mtDNA, however, remained frequent, suggesting admixture between WSH males and EEF females.
A mysterious loose confederation of fierce maritime marauders known as the Sea Peoples was identified as one of the main causes of the Late Bronze Age Collapse in the Eastern Mediterranean. It is possible that the Sea Peoples were themselves victims of the environmental changes that led to widespread famine and precipitated the Collapse.
In the third century BC, a Eurasian nomadic people, the Xiongnu, began threatening China's frontiers, but by the first century BC, they had been completely expelled. They then turned their attention westward and displaced various other tribes in Eastern and Central Europe, which led to a cascade of events. Attila rose to power as leader of the Huns and initiated a campaign of invasions and looting, going as far as Gaul (modern-day France). Attila's Huns were clashing with the Roman Empire, which had already been divided into two halves for ease of administration: the Eastern Roman Empire and the Western Roman Empire. Despite managing to stop Attila at the Battle of Chalons in 451 AD, the Romans were unable to prevent him from attacking Roman Italy the next year. Northern Italian cities like Milan were ravaged. The Huns never again posed a threat to the Romans after Attila's death, but the rise of the Huns also forced the Germanic peoples out of their territories and made those groups press their way into parts of France, Spain, Italy, and even as far south as North Africa. The city of Rome itself came under attack by the Visigoths in 410 and was plundered by the Vandals in 455. A combination of internal strife, economic weakness, and relentless invasions by the Germanic peoples pushed the Western Roman Empire into terminal decline. The last Western Roman Emperor, Romulus Augustulus, was dethroned in 476 by the German Odoacer, who declared himself King of Italy.
In the eleventh century AD, North Africa's populous and flourishing civilization collapsed after it had exhausted its resources in internal fighting and suffered devastation from the invasion of the Bedouin tribes of Banu Sulaym and Banu Hilal. Ibn Khaldun noted that all of the lands ravaged by the Banu Hilal invaders had become arid desert.
In 1206, a warlord achieved dominance over all Mongols with the title Genghis Khan and began his campaign of territorial expansion. The Mongols' highly flexible and mobile cavalry enabled them to conquer their enemies with efficiency and swiftness. In the brutal pillaging that followed the Mongol invasions of the thirteenth and fourteenth centuries, the invaders decimated the populations of China, Russia, the Middle East, and Islamic Central Asia. Later Mongol leaders, such as Timur, destroyed many cities, slaughtered thousands of people, and irreparably damaged the ancient irrigation systems of Mesopotamia. The invasions transformed a settled society into a nomadic one. In China, for example, a combination of war, famine, and pestilence during the Mongol conquests halved the population, a decline of around 55 million people. The Mongols also displaced large numbers of people and created power vacuums. The Khmer Empire went into decline and was replaced by the Thais, who were pushed southward by the Mongols. The Vietnamese, who succeeded in defeating the Mongols, also turned their attention to the south and by 1471 began to subjugate the Chams. When Vietnam's Later Lê dynasty went into decline in the late 1700s, a bloody civil war erupted between the Trịnh family in the north and the Nguyễn family in the south. More Cham provinces were seized by the Nguyễn warlords. Finally, Nguyễn Ánh emerged victorious and declared himself Emperor of Vietnam (changing the name from Annam) with the title Gia Long and established the Nguyễn dynasty. The last remaining principality of Champa, Panduranga (modern-day Phan Rang, Vietnam), survived until 1832, when Emperor Minh Mạng (Nguyễn Phúc Đảm) conquered it after centuries of Cham–Vietnamese wars. Vietnam's policy of assimilation involved the force-feeding of pork to Muslims and beef to Hindus, which fueled resentment. An uprising followed, the first and only war between Vietnam and the jihadists, until it was crushed.
Famine, economic depression, and internal strife
Around 1210 BC, the New Kingdom of Egypt shipped large amounts of grain to the disintegrating Hittite Empire; there was thus a food shortage in Anatolia but not in the Nile Valley. However, that soon changed. Although Egypt managed to deliver a decisive and final defeat to the Sea Peoples at the Battle of Xois, Egypt itself went into steep decline. The collapse of all the other societies in the Eastern Mediterranean disrupted established trade routes and caused widespread economic depression. Government workers became underpaid, which resulted in the first labor strike in recorded history and undermined royal authority. There was also political infighting between different factions of government. Bad harvests resulting from reduced flooding of the Nile led to a major famine. Food prices rose to eight times their normal values and occasionally even reached twenty-four times. Runaway inflation followed. Attacks by the Libyans and Nubians made things even worse. Throughout the Twentieth Dynasty (~1187–1064 BC), Egypt devolved from a major power in the Mediterranean to a deeply divided and weakened state, which later came to be ruled by the Libyans and the Nubians.
The Warring States period in China (481–221 BC) ended when King Zheng of Qin defeated the six competing states and became the first Chinese emperor, titled Qin Shi Huang. A ruthless but efficient ruler, he raised a disciplined and professional army and introduced a significant number of reforms, such as unifying the language and creating a single currency and system of measurement. In addition, he funded dam construction and began building the first segment of what was to become the Great Wall of China to defend his realm against northern nomads. Nevertheless, internal feuds and rebellions caused his empire to fall apart after his death in 210 BC.
In the early fourteenth century AD, Britain suffered repeated rounds of crop failures from unusually heavy rainfall and flooding. Much livestock either starved or drowned. Food prices skyrocketed, and King Edward II attempted to rectify the situation by imposing price controls, but vendors simply refused to sell at such low prices. In any case, the act was abolished by the Lincoln Parliament in 1316. Soon, people from commoners to nobles were finding themselves short of food. Many resorted to begging, crime, and eating animals they otherwise would not eat. People in northern England had to deal with raids from Scotland. There were even reports of cannibalism.
In Continental Europe, things were at least as bad. The Great Famine of 1315–1317 coincided with the end of the Medieval Warm Period and the start of the Little Ice Age. Some historians suspect that the change in climate was due to Mount Tarawera in New Zealand erupting in 1314. The Great Famine was, however, only one of the calamities striking Europe that century, as the Hundred Years' War and the Black Death would soon follow. (Also see the Crisis of the Late Middle Ages.) Recent analysis of tree rings complements historical records and shows that the summers of 1314–1316 were some of the wettest on record over a period of 700 years.
Disease outbreaks
Historically, the dawn of agriculture led to the rise of contagious diseases. Compared to their hunting-gathering counterparts, agrarian societies tended to be sedentary, have higher population densities, be in frequent contact with livestock, and be more exposed to contaminated water supplies and higher concentrations of garbage. Poor sanitation, a lack of medical knowledge, superstitions, and sometimes a combination of disasters exacerbated the problem. The journalist Michael Rosenwald wrote that "history shows that past pandemics have reshaped societies in profound ways. Hundreds of millions of people have died. Empires have fallen. Governments have cracked. Generations have been annihilated."
From the description of symptoms by the Greek physician Galen, which included coughing, fever, (blackish) diarrhea, a swollen throat, and thirst, modern experts have identified the probable culprits of the Antonine Plague (165–180 AD) as smallpox or measles. The disease likely started in China and spread to the West via the Silk Road. Roman troops first contracted the disease in the East before they returned home. Striking a virgin population, the Antonine Plague had dreadful mortality rates: between one third and half of the population, 60 to 70 million people, perished. Roman cities suffered from a combination of overcrowding, poor hygiene, and unhealthy diets, and they quickly became epicenters. Soon, the disease reached as far as Gaul and mauled Roman defenses along the Rhine. The ranks of the previously formidable Roman army had to be filled with freed slaves, German mercenaries, criminals, and gladiators. That ultimately failed to prevent the Germanic tribes from crossing the Rhine. On the civilian side, the Antonine Plague created drastic shortages of businessmen, which disrupted trade, and of farmers, which led to a food crisis. An economic depression followed, and government revenue fell. Some accused Emperor Marcus Aurelius and Co-Emperor Lucius Verus, both of whom were victims of the disease, of affronting the gods, while others blamed Christians. However, the Antonine Plague strengthened the position of the monotheistic religion of Christianity in the formerly polytheistic society, as Christians won public admiration for their good works. Ultimately, the Roman army, the Roman cities, the size of the empire, and its trade routes, which were required for Roman power and influence to exist, facilitated the spread of the disease. The Antonine Plague is considered by some historians as a useful starting point for understanding the decline and fall of the Western Roman Empire. It was followed by the Plague of Cyprian (249–262 AD) and the Plague of Justinian (541–542). Together, they cracked the foundations of the Roman Empire.
In the sixth century AD, while the Western Roman Empire had already succumbed to attacks by the Germanic tribes, the Eastern Roman Empire stood its ground. In fact, a peace treaty with the Persians allowed Emperor Justinian the Great to concentrate on recapturing territories belonging to the former Western Empire. His generals, Belisarius and Narses, achieved a number of important victories against the Ostrogoths and the Vandals. However, the hope of restoring the Western Empire was dashed by the arrival of what became known as the Plague of Justinian (541–542). According to the Byzantine historian Procopius of Caesarea, the epidemic originated in China and Northeastern India and reached the Eastern Roman Empire via trade routes terminating in the Mediterranean. Modern scholarship has deduced that the epidemic was caused by the bacterium Yersinia pestis, the same one that would later bring the Black Death, the single deadliest pandemic in human history, but how many actually died from it remains uncertain. Current estimates put the figure between thirty and fifty million people, a significant portion of the human population at that time. The plague arguably cemented the fate of Rome.
The epidemic also devastated the Sasanian Empire in Persia. In the following century, Caliph Abu Bakr seized the opportunity to launch military campaigns that overran the Sasanians and captured Roman-held territories in the Caucasus, the Levant, Egypt, and elsewhere in North Africa. Before the Plague of Justinian, the Mediterranean world had been commercially and culturally stable. After the plague, it fractured into a trio of civilizations battling for power: the Islamic civilization, the Byzantine Empire, and what later became known as medieval Europe. With so many people dead, the supply of workers, many of whom were slaves, was critically short. Landowners had no choice but to lend pieces of land to serfs, who worked the land in exchange for military protection and other privileges. That sowed the seeds of feudalism.
There is evidence that the Mongol expeditions may have spread the bubonic plague across much of Eurasia, helping to spark the Black Death of the early fourteenth century. The Italian historian Gabriele de' Mussi wrote that the Mongols catapulted the corpses of plague victims into Caffa (now Feodossia, Crimea) during the siege of that city and that soldiers who were transported from there brought the plague to Mediterranean ports. However, that account of the origin of the Black Death in Europe remains controversial, though plausible, because of the complex epidemiology of the plague. Modern epidemiologists do not believe that the Black Death had a single source of spread into Europe. Research into the past on this topic is further complicated by politics and the passage of time. It is difficult to distinguish between natural epidemics and biological warfare, both of which have been common throughout human history. Biological weapons are economical because they turn an enemy casualty into a delivery system, and so they were favored in armed conflicts of the past. Furthermore, until recently, more soldiers died of disease than in combat. In any case, the Black Death of the 1340s killed as many as 200 million people. The widening trade routes of the Late Middle Ages helped the plague spread rapidly. It took the European population more than two centuries to return to its level before the pandemic. Consequently, the pandemic destabilized most of society and likely undermined feudalism and the authority of the Church.
With labor in short supply, workers' bargaining power increased dramatically. Various inventions that reduced the cost of labor, saved time, and raised productivity, such as the three-field crop rotation system, the iron plow, the use of manure to fertilize the soil, and water pumps, were widely adopted. Many former serfs, now free from feudal obligations, relocated to the cities and took up crafts and trades. The more successful ones became the new middle class. Trade flourished as demand for a myriad of consumer goods rose. Society became wealthier and could afford to fund the arts and the sciences.
Encounters between European explorers and Native Americans exposed the latter to a variety of diseases of extraordinary virulence. Having migrated from Northeastern Asia 15,000 years ago, Native Americans had not been introduced to the plethora of contagious diseases that emerged after the rise of agriculture in the Old World. As such, they had immune systems that were ill-equipped to handle the diseases to which their counterparts in Eurasia had become resistant. When the Europeans arrived in the Americas, the indigenous populations in short order found themselves facing smallpox, measles, whooping cough, and the bubonic plague, among others. In tropical areas, malaria, yellow fever, dengue fever, river blindness, and others appeared; most of these tropical diseases were traced to Africa. Smallpox ravaged Mexico in the 1520s, killing 150,000 in Tenochtitlán alone, including the emperor, and Peru in the 1530s, which aided the European conquerors. A combination of Spanish military attacks and evolutionarily novel diseases finished off the Aztec Empire in the sixteenth century. It is commonly believed that the deaths of as much as 90% or 95% of the Native American population of the New World were caused by Old World diseases, though new research suggests tuberculosis from seals and sea lions played a significant part.
Similar events took place in Oceania and Madagascar. Smallpox was introduced to Australia from outside. The first recorded outbreak, in 1789, devastated the Aboriginal population. The extent of the outbreak is disputed, but some sources claim that it killed about 50% of coastal Aboriginal populations on the east coast. There is an ongoing historical debate concerning two rival and irreconcilable theories about how the disease first entered the continent (see History of smallpox). Smallpox continued to be a deadly disease and killed an estimated 300 million people in the twentieth century alone, but a vaccine, the first of any kind, had been available since 1796.
As humans have spread around the globe, human societies have flourished and become more dependent on trade, and because urbanization means that people leave sparsely populated rural areas for densely populated neighborhoods, infectious diseases spread much more easily. Outbreaks are frequent, even in the modern era, but medical advances have been able to alleviate their impacts. In fact, the human population grew tremendously in the twentieth century, as did the population of farm animals, from which diseases could jump to humans, but in the developed world and increasingly also in the developing world, people are less likely to fall victim to infectious diseases than ever before. For instance, the advent of antibiotics, starting with penicillin in 1928, has saved the lives of hundreds of millions of people suffering from bacterial infections. However, there is no guarantee that this will continue, because bacteria are becoming increasingly resistant to antibiotics, and doctors and public health experts such as former Chief Medical Officer for England Sally Davies have even warned of an incoming "antibiotic apocalypse." The World Health Organization warned in 2019 that the spread of vaccine scepticism has been accompanied by the resurgence of long-conquered diseases such as measles, leading the WHO to name the anti-vaccination movement one of the world's top ten public-health threats.
Institutional unemployment
During the Roman Empire, citizen employment was increasingly replaced by slave labor. Slaves took over many of the jobs that citizens had been doing; they received apprenticeships and education and even came to replace skilled craftsmen.
Since slaves paid no taxes, their displacement of citizen labor reduced the revenue the state could collect from its citizens.
The resulting high level of unemployment also led to high levels of poverty, which reduced demand for the goods produced by businesses relying on slave labor.
As taxes fell, so did government revenue. To compensate for the economic slowdown and mitigate the high levels of poverty, the Roman government implemented a form of welfare called the dole, providing citizens with free money and free grain.
Paying for the dole required high levels of government spending, exacerbating the Roman debt and producing inflation, while the tax revenue lost to slave labor deepened the government's debt further.
To pay off the enormous debt, the Romans began to devalue the currency and mint more coinage. Eventually, this overwhelmed the Roman Empire and partially contributed to its collapse.
Demographic dynamics
Several key features of human societal collapse can be related to population dynamics. For example, the native population of Cusco, Peru at the time of the Spanish conquest was stressed by an imbalanced sex ratio.
There is strong evidence that humans also display population cycles. Societies as diverse as those of England and France during the Roman, medieval, and early modern eras, of Egypt during Greco-Roman and Ottoman rule, and of various dynasties in China all showed similar patterns of political instability and violence becoming considerably more common after times of relative peace, prosperity, and sustained population growth. Quantitatively, periods of unrest included many times more events of instability per decade and occurred when the population was declining rather than increasing. Pre-industrial agrarian societies typically faced instability after one or two centuries of stability. However, a population approaching its carrying capacity is not by itself enough to trigger general decline if the people remain united and the ruling class strong. Other factors had to be involved, such as having more aspirants for positions of the elite than the society could realistically support (elite overproduction), which led to social strife, and chronic inflation, which caused incomes to fall and threatened the fiscal health of the state. In particular, an excess of young adult males predictably led to social unrest and violence, as third- and higher-parity sons had trouble realizing their economic ambitions and became more open to extreme ideas and actions. Adults in their 20s are especially prone to radicalization. Most historical periods of social unrest lacking external triggers, such as natural calamities, and most genocides can be readily explained as the result of a built-up youth bulge. As those trends intensified, they jeopardized the social fabric, which facilitated the decline.
Theories
Historical theories have evolved from being purely social and ethical, through ideological and ethnocentric, to today's multidisciplinary studies, becoming much more sophisticated in the process.
Cognitive decline and loss of creativity
The anthropologist Joseph Tainter theorized that collapsed societies essentially exhausted their own designs and were unable to adapt to natural diminishing returns for what they knew as their method of survival. The philosopher Oswald Spengler argued that a civilization in its "winter" would see a disinclination for abstract thinking. The psychologists David Rand and Jonathan Cohen theorized that people switch between two broad modes of thinking: the first fast and automatic but rigid, the second slow and analytical but more flexible. Rand and Cohen believe this explains why people continue with self-destructive behaviors even when logical reasoning would have warned them of the dangers ahead: people switch from the second to the first mode of thinking after the introduction of an invention that dramatically increases living standards. Rand and Cohen pointed to recent examples such as antibiotic overuse leading to resistant bacteria and the failure to save for retirement. Tainter noted that, according to behavioral economics, the human decision-making process tends to be more irrational than rational and that, as the rate of innovation declines, as measured by the number of inventions relative to the amount of money spent on research and development, it becomes progressively harder for a technological solution to the problem of societal collapse to emerge.
Social and environmental dynamics
What makes modern sedentary life possible, unlike the nomadic life of hunter-gatherers, is extraordinary modern economic productivity. Tainter argues that exceptional productivity is actually more the sign of hidden weakness, both because of a society's dependence on it and because of its potential to undermine its own basis for success by not being self-limiting, as demonstrated in Western culture's ideal of perpetual growth.
As a population grows and technology makes it easier to exploit depleting resources, the environment's diminishing returns are hidden from view. Societal complexity is then potentially threatened if it develops beyond what is actually sustainable, and a disorderly reorganization follows. The scissors model of Malthusian collapse, in which the population grows without limit while resources do not, expresses the idea of great opposing environmental forces cutting into each other.
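In compact form, the scissors model can be sketched as follows; this is an illustrative formalization, with the notation assumed here rather than drawn from the sources above.

```latex
% Population P grows exponentially; the resource base F grows at best
% linearly: these are the two "blades" of the scissors.
P(t) = P_0 e^{rt}, \qquad F(t) = F_0 + kt
% With per-capita subsistence need c, crisis sets in once the blades
% cross, i.e. once demand outruns supply:
c\,P(t) > F(t)
```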
The complete breakdown of economic, cultural, and social institutions, together with their ecological relationships, is perhaps the most common feature of collapse. In his book Collapse: How Societies Choose to Fail or Succeed, Jared Diamond proposes five interconnected causes of collapse that may reinforce each other: non-sustainable exploitation of resources, climate changes, diminishing support from friendly societies, hostile neighbors, and inappropriate attitudes toward change.
Energy return on investment
Energy has played a crucial role throughout human history. Energy is linked to the birth, growth, and decline of each and every society. Energy surplus is required for the division of labor and the growth of cities. Massive energy surplus is needed for widespread wealth and cultural amenities. Economic prospects fluctuate in tandem with a society's access to cheap and abundant energy.
The political scientist Thomas Homer-Dixon and the ecologist Charles Hall proposed an economic model called energy return on investment (EROI), which measures the amount of surplus energy a society gets from using energy to obtain energy. Energy shortages drive up prices and as such provide an incentive to explore and extract previously uneconomical sources, which may still be plentiful; but more energy is then required for extraction, and the EROI turns out not to be as high as initially thought.
There would be no surplus if the EROI approached 1:1. Hall showed that the real cutoff is well above that and estimated that an EROI of about 3:1 is needed to sustain the essential overhead energy costs of a modern society. The EROI of the most preferred energy source, petroleum, has fallen over the past century from 100:1 to the range of 10:1, with clear evidence that the natural depletion curves are all downward decay curves. An EROI of more than about 3:1 thus appears necessary to provide the energy for socially important tasks, such as maintaining government, legal and financial institutions, a transportation infrastructure, manufacturing, building construction and maintenance, and the lifestyles of all members of a given society.
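The arithmetic behind these ratios is simple. The short sketch below, with illustrative figures only and thresholds following the estimates cited above, computes the fraction of gross energy output left over as surplus.

```python
# A minimal sketch of the EROI arithmetic (illustrative figures only;
# the 100:1, 10:1, and 3:1 values follow the estimates cited above).

def net_energy_fraction(eroi: float) -> float:
    """Fraction of gross energy output remaining after paying
    the energy cost of obtaining that energy."""
    return 1.0 - 1.0 / eroi

for label, eroi in [("early petroleum", 100.0),
                    ("petroleum today", 10.0),
                    ("modern-society cutoff", 3.0),
                    ("break-even", 1.0)]:
    surplus = net_energy_fraction(eroi)
    print(f"EROI {eroi:>5.1f}:1 ({label}): {surplus:.0%} of output is surplus")
```

As the output shows, the surplus falls slowly at first (99% at 100:1, 90% at 10:1) and then collapses steeply as EROI approaches 1:1, which is why the decline from 100:1 to 10:1 matters less than a further fall toward 3:1 would.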
The social scientist Luke Kemp indicated that alternative sources of energy, such as solar panels, have a low EROI because they have low energy density, meaning they require a lot of land, and require substantial amounts of rare earth metals to produce. Hall and colleagues reached the same conclusion. There is no on-site pollution, but the EROI of renewable energy sources may be too low for them to be considered a viable alternative to fossil fuels, which continue to provide the majority of the energy used by humans.
The mathematician Safa Motesharrei and his collaborators showed that the use of non-renewable resources such as fossil fuels allows populations to grow an order of magnitude larger than they could using renewable resources alone and as such can postpone societal collapse. However, when collapse finally comes, it is much more dramatic. Tainter warned that in the modern world, if the supply of fossil fuels were somehow cut off, shortages of clean water and food would ensue, and millions would die within a few weeks in the worst-case scenario.
Homer-Dixon asserted that a declining EROI was one of the reasons that the Roman Empire declined and fell. The anthropologist Joseph Tainter made the same claim about the Maya Empire.
Models of societal response
According to Joseph Tainter (1990), too many scholars offer facile explanations of societal collapse by assuming one or more of the following three models in the face of collapse:
The Dinosaur, a large-scale society in which resources are being depleted at an exponential rate, but nothing is done to rectify the problem because the ruling elite are unwilling or unable to adapt to those resources' reduced availability. In this type of society, rulers tend to oppose any solutions that diverge from their present course of action but favor intensification and commit an increasing number of resources to their present plans, projects, and social institutions.
The Runaway Train, a society whose continuing function depends on constant growth (cf. Frederick Jackson Turner's Frontier Thesis). This type of society, based almost exclusively on acquisition (such as pillaging or exploitation), cannot be sustained indefinitely. The Assyrian, Roman and Mongol Empires, for example, all fractured and collapsed when no new conquests could be achieved.
The House of Cards, a society that has grown to be so large and include so many complex social institutions that it is inherently unstable and prone to collapse. This type of society has been seen with particular frequency among Eastern Bloc and other communist nations, in which all social organizations are arms of the government or ruling party, such that the government must either stifle association wholesale (encouraging dissent and subversion) or exercise less authority than it asserts (undermining its legitimacy in the public eye).
Tainter's critique
Tainter argues that those models, though superficially useful, cannot severally or jointly account for all instances of societal collapse. Often, they are seen as interconnected occurrences that reinforce one another.
Tainter considers that social complexity is a recent and comparatively-anomalous occurrence, requiring constant support. He asserts that collapse is best understood by grasping four axioms. In his own words (p. 194):
human societies are problem-solving organizations;
sociopolitical systems require energy for their maintenance;
increased complexity carries with it increased costs per capita; and
investment in sociopolitical complexity as a problem-solving response reaches a point of declining marginal returns.
With those facts in mind, collapse can simply be understood as a loss of the energy needed to maintain social complexity. Collapse is thus the sudden loss of social complexity, stratification, internal and external communication and exchange, and productivity.
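Axioms three and four can be stated compactly; the notation below is an illustrative formalization assumed here, not Tainter's own.

```latex
% Let C be a society's level of complexity, B(C) the problem-solving
% benefit it yields, and k the per-capita cost of each unit of complexity.
\frac{dB}{dC} > 0, \qquad \frac{d^{2}B}{dC^{2}} < 0
% The net return N(C) = B(C) - kC peaks where marginal benefit equals
% marginal cost; beyond that point, added complexity costs more than
% it solves, and sustaining the existing level depends on energy surplus.
N(C) = B(C) - kC, \qquad N'(C^{*}) = 0 \iff B'(C^{*}) = k
```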
Toynbee's theory of decay
In his acclaimed 12-volume work, A Study of History (1934–1961), the British historian Arnold J. Toynbee explored the rise and fall of 28 civilizations and came to the conclusion that civilizations generally collapsed mainly because of internal factors, factors of their own making, though external pressures also played a role. He theorized that all civilizations pass through several distinct stages: genesis, growth, time of troubles, universal state, and disintegration.
For Toynbee, a civilization is born when a "creative minority" successfully responds to the challenges posed by its physical, social, and political environment. However, the fixation on the old methods of the "creative minority" leads it to eventually cease to be creative and degenerate into merely a "dominant minority" (that forces the majority to obey without meriting obedience), which fails to recognize new ways of thinking. He argues that creative minorities deteriorate from a worship of their "former self", by which they become prideful, and they fail in adequately addressing the next challenge that they face. Similarly, the German philosopher Oswald Spengler discussed the transition from Kultur to Zivilisation in his The Decline of the West (1918).
Toynbee argues that the ultimate sign a civilization has broken down is when the dominant minority forms a Universal State, which stifles political creativity.
He argues that, as civilizations decay, they form an "internal proletariat" and an "external proletariat." The internal proletariat is held in subjugation by the dominant minority inside the civilization and grows bitter; the external proletariat exists outside the civilization in poverty and chaos and grows envious. He argues that as civilizations decay, there is a "schism in the body social," whereby abandon and self-control together replace creativity, and truancy and martyrdom together replace discipleship by the creative minority.
He argues that in that environment, people resort to archaism (idealization of the past), futurism (idealization of the future), detachment (removal of oneself from the realities of a decaying world), and transcendence (meeting the challenges of the decaying civilization with new insight, as a prophet). He argues that those who transcend during a period of social decay give birth to a new Church with new and stronger spiritual insights around which a subsequent civilization may begin to form after the old has died. Toynbee's use of the word 'church' refers to the collective spiritual bond of a common worship, or the same unity found in some kind of social order.
The historian Carroll Quigley expanded upon that theory in The Evolution of Civilizations (1961, 1979). He argued that societal disintegration involves the metamorphosis of social instruments, which were set up to meet actual needs, into institutions, which serve their own interests at the expense of social needs. However, in the 1950s, Toynbee's approach to history, his style of civilizational analysis, started to face skepticism from mainstream historians who thought it put an undue emphasis on the divine, and his academic reputation declined. For a time, however, Toynbee's Study remained popular outside academia. Interest revived decades later with the publication of The Clash of Civilizations (1997) by the political scientist Samuel P. Huntington, who viewed human history as broadly the history of civilizations and posited that the world after the end of the Cold War would be multipolar, one of competing major civilizations divided by "fault lines."
Systems science
Developing an integrated theory of societal collapse that takes into account the complexity of human societies remains an open problem. Researchers currently have very little ability to identify internal structures of large distributed systems like human societies. Genuine structural collapse seems, in many cases, the only plausible explanation supporting the idea that such structures exist. However, until they can be concretely identified, scientific inquiry appears limited to the construction of scientific narratives, using systems thinking for careful storytelling about systemic organization and change.
In the 1990s, the evolutionary anthropologist and quantitative historian Peter Turchin noticed that the equations used to model the populations of predators and prey can also be used to describe the ontogeny of human societies. He specifically examined how social factors such as income inequality were related to political instability. He found recurring cycles of unrest in historical societies such as Ancient Egypt, China, and Russia. He specifically identified two cycles, one long and one short. The long one, what he calls the "secular cycle," lasts for approximately two to three centuries. A society starts out fairly equal. Its population grows and the cost of labor drops. A wealthy upper class emerges, and life for the working class deteriorates. As inequality grows, a society becomes more unstable, with the lower class miserable and the upper class entangled in infighting. Worsening social turbulence eventually leads to collapse. The shorter cycle lasts for about 50 years and consists of two generations, one peaceful and one turbulent. Looking at US history, for example, Turchin identified times of serious sociopolitical instability in 1870, 1920, and 1970. In 2010, he predicted that in 2020 the US would witness a period of unrest at least on the same level as in 1970, because the turbulent phases of the two cycles coincide around 2020. He also warned that the US was not the only Western nation under strain.
However, Turchin's model can only paint the broader picture; it cannot pinpoint how bad things will get or what precisely triggers a collapse. The mathematician Safa Motesharrei also applied predator-prey models to human society, with the upper class and the lower class as the two different types of "predators" and natural resources as the "prey." He found that either extreme inequality or resource depletion facilitates a collapse. However, a collapse is irreversible only if a society experiences both at the same time, as they "fuel each other."
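A minimal predator-prey simulation of the kind these models build on is sketched below; the variables, parameter values, and dynamics are illustrative assumptions, not the published Turchin or Motesharrei equations.

```python
# A minimal Lotka-Volterra-style sketch (illustrative only; not the
# published Turchin or Motesharrei models). Here "prey" is the natural
# resource base and "predator" is the population drawing on it.

def simulate(steps=1000, dt=0.01,
             r=1.0,   # resource regeneration rate (assumed)
             a=0.5,   # harvesting rate per unit population (assumed)
             b=0.3,   # population growth per unit harvested (assumed)
             m=0.4):  # population decline rate absent resources (assumed)
    resources, population = 10.0, 2.0
    history = []
    for _ in range(steps):
        # Resources regrow and are harvested; population grows on the
        # harvest and otherwise declines.
        d_res = (r * resources - a * resources * population) * dt
        d_pop = (b * resources * population - m * population) * dt
        resources = max(resources + d_res, 0.0)
        population = max(population + d_pop, 0.0)
        history.append((resources, population))
    return history

if __name__ == "__main__":
    for step, (res, pop) in enumerate(simulate()):
        if step % 200 == 0:
            print(f"t={step}: resources={res:.2f}, population={pop:.2f}")
```

Depending on the parameters, such a system can cycle indefinitely, settle into equilibrium, or crash, which is the qualitative range of behavior the models above exploit.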
See also
Apocalypticism
Decadence
Doomer
Doomsday cult
Human extinction
John B. Calhoun's mouse experiments
Lost city
Millenarianism
Ruins
Survivalism
Social alienation
Weltschmerz
Malthusian and environmental collapse themes
Collapsology
Behavioral sink – rat colony collapse
Catastrophism
Earth 2100
Ecological collapse
Global catastrophic risk
Human overpopulation
Medieval demography
Millennium Ecosystem Assessment
Overshoot
Cultural and institutional collapse themes
Civil war
Degrowth
Economic collapse
Failed state
Fragile state
Group cohesiveness
Language death
Progress trap
Social cycle theory
Sociocultural evolution
State collapse
Urban decay
Systems science
Failure mode and effects analysis
Fault tree analysis
Hazard analysis
Risk assessment
Systems engineering
Further reading
Homer-Dixon, Thomas. (2006). The Upside of Down: Catastrophe, Creativity, and the Renewal of Civilization. Washington DC: Island Press.
Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 6, "Sustainability or Collapse," New Society Publishers, Gabriola Island, British Columbia, Canada, 464 pp.
Wright, Ronald. (2004). A Short History of Progress. New York: Carroll & Graf Publishers.
Medieval technology

Medieval technology is the technology used in medieval Europe under Christian rule. After the Renaissance of the 12th century, medieval Europe saw a radical change in the rate of new inventions, innovations in the ways of managing traditional means of production, and economic growth. The period saw major technological advances, including the adoption of gunpowder, the invention of vertical windmills, spectacles, mechanical clocks, and greatly improved water mills, building techniques (Gothic architecture, medieval castles), and agriculture in general (three-field crop rotation).
The development of water mills from their ancient origins was impressive, and extended from agriculture to sawmills both for timber and stone. By the time of the Domesday Book, most large villages had turnable mills, around 6,500 in England alone. Water-power was also widely used in mining for raising ore from shafts, crushing ore, and even powering bellows.
Many European technical advancements from the 12th to 14th centuries were either built on long-established techniques in medieval Europe, originating from Roman and Byzantine antecedents, or adapted from cross-cultural exchanges through trading networks with the Islamic world, China, and India. Often, the revolutionary aspect lay not in the act of invention itself, but in its technological refinement and application to political and economic power. Though gunpowder and the weapons based on it originated in China, it was the Europeans who developed and perfected their military potential, precipitating European expansion and eventual imperialism in the Modern Era.
Also significant in this respect were advances in maritime technology. Advances in shipbuilding included multi-masted ships with lateen sails, the sternpost-mounted rudder, and skeleton-first hull construction. Along with new navigational techniques such as the dry compass, the Jacob's staff, and the astrolabe, these allowed economic and military control of the seas adjacent to Europe and enabled the global navigational achievements of the dawning Age of Exploration.
At the turn to the Renaissance, Gutenberg's invention of mechanical printing made possible a dissemination of knowledge to a wider population, which would lead not only to a gradually more egalitarian society but to one more able to dominate other cultures, drawing from a vast reserve of knowledge and experience. The technical drawings of late-medieval artist-engineers Guido da Vigevano and Villard de Honnecourt can be viewed as forerunners of later Renaissance artist-engineers such as Taccola or Leonardo da Vinci.
Civil technologies
The following is a list of some important medieval technologies. The approximate date or first mention of a technology in medieval Europe is given. Technologies were often a matter of cultural exchange, and the date and place of first invention are not listed here (see the main links for a more complete history of each).
Agriculture
Carruca (6th to 9th centuries)
A type of heavy wheeled plough commonly found in Northern Europe. The device consisted of four major parts. The first part was the coulter at the bottom of the plough, a knife used to cut vertically into the top sod to allow the plowshare to work. The second part was the plowshare, which cut the sod horizontally, detaching it from the ground below. The third part was the moldboard, which curled the sod outward. The fourth part was the team of eight oxen guided by the farmer. This type of plough eliminated the need for cross-plowing by turning over the furrow instead of merely pushing it outward. It also made seed placement more consistent throughout the farm, as the blade could be locked in at a certain level relative to the wheels. A disadvantage was its poor maneuverability: since the equipment was large and drawn by a small herd of oxen, turning the plough was difficult and time-consuming. This caused many farmers to turn away from traditional square fields and adopt longer, more rectangular fields to ensure maximum efficiency.
Ard (plough) (5th century)
While ploughs have been used since ancient times, during the medieval period plough technology improved rapidly. The medieval plough, constructed from wooden beams, could be yoked to either humans or a team of oxen and pulled through any type of terrain. This allowed for faster clearing of forest lands for agriculture in parts of Northern Europe where the soil contained rocks and dense tree roots. With more food being produced, more people were able to live in these areas.
Horse collar (6th to 9th centuries)
Once oxen started to be replaced by horses on farms and in fields, the yoke became obsolete because its shape did not work well with a horse's posture. The first design for a horse collar was a throat-and-girth harness. These harnesses were unreliable, however, because they were not sufficiently held in place: the loose straps were prone to slipping and shifting as the horse worked and often caused asphyxiation. Around the eighth century, the introduction of the rigid collar eliminated the problem of choking. The rigid collar was "placed over the horses head and rested on its shoulders. This permitted unobstructed breathing and placed the weight of the plow or wagon where the horse could best support it."
Horseshoes (9th century)
While horses can travel over most terrain without a protective covering on their hooves, horseshoes allowed them to travel faster over more difficult terrain. The practice of shoeing horses was initially common in the Roman Empire but lost popularity throughout the Middle Ages until around the 11th century. Although horses in the southern lands could easily work on the softer soil, the rocky soil of the north proved damaging to their hooves. Since the north was the problematic area, that is where shoeing horses first became popular. The introduction of gravel roadways also contributed to the popularity of horseshoeing: the loads a shod horse could haul on these roads were significantly higher than those of a barefoot one. By the 14th century, not only did horses have shoes, but many farmers were shoeing oxen and donkeys in order to help prolong the life of their hooves. The size and weight of the horseshoe changed significantly over the course of the Middle Ages: in the 10th century, horseshoes were secured by six nails and weighed around one-quarter of a pound, but over the years the shoes grew larger, and by the 14th century they were being secured with eight nails and weighed nearly half a pound.
Crop rotation
Two-field system
In this simpler form of crop rotation, one field would grow a crop while the other was allowed to lie fallow. The second field would be used to feed livestock and regain lost nutrients through being fertilized by their waste. Every year, the two fields would switch in order to ensure fields did not become nutrient deficient. In the 11th century, this system was introduced into Sweden and spread to become the most popular form of farming. The system of crop rotation is still used today by many farmers, who will grow corn one year in a field and will then grow beans or other legumes in the field the next year.
Three-field system (8th century)
While the two-field system was used by medieval farmers, a different system was also being developed at the same time. In a three-field system, one field holds a spring crop, such as barley or oats, another field holds a winter crop, such as wheat or rye, and the third field is an off-field left alone to grow and used to help feed livestock. By rotating the three crops to a new part of the land each year, the off-field regains some of the nutrients lost during the growing of the two crops. This system increases agricultural productivity over the two-field system by leaving only one-third of the land unused instead of one half. Many scholars believe it helped increase yields by up to 50%, as the arithmetic sketched below suggests.
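The gain from rotation alone is straightforward back-of-the-envelope arithmetic; the larger 50% figure cited above would also reflect factors such as the added spring harvest.

```latex
% Fraction of land under crop each year rises from 1/2 to 2/3:
\frac{2/3}{1/2} = \frac{4}{3} \approx 1.33
% i.e. roughly a third more land is cropped annually, before counting
% the benefits of the second (spring) growing season.
```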
Wine press (12th century)
During the medieval period, the wine press evolved constantly into a more modern and efficient machine that gave wine makers more wine with less work. The device was the first practical means of pressing wine on a flat surface. The wine press consisted of a giant wooden basket bound together by wooden or metal rings, with a large disc at the top that would depress the contents, crushing the grapes and producing the juice to be fermented.
The wine press was an expensive piece of machinery that only the wealthy could afford, and grape stomping was still often used as a less expensive alternative. While white wines required the use of a wine press to preserve their color by removing the juice quickly from the skins, red wine did not need to be pressed until the end of the juice-removal process, since the color did not matter. Many red-wine makers used their feet to smash the grapes, then used a press to remove any juice remaining in the grape skins.
Qanat (water ducts) (5th century)
Ancient and medieval civilizations needed water to sustain growing populations as well as daily activities. One of the ways that ancient and medieval people gained access to water was through qanats, a water-duct system that brought water from an underground or river source to villages or cities. A qanat is a tunnel just large enough for a single digger to travel through and reach the source of water, while allowing water to flow through the duct system to farmland or villages for irrigation or drinking. The tunnels had a gradual slope that used gravity to draw the water from an aquifer or a water well. The system originated in the Middle East and is still used today in places where surface water is hard to find. Qanats were also very effective at avoiding water loss during transport. The most famous water-duct system was the Roman aqueduct system, and medieval inventors used it as a blueprint for getting water to villages more quickly and easily than by diverting rivers. After aqueducts and qanats, much other water-based technology was created and used in the medieval period, including water mills, dams, and wells, for easy access to water.
Architecture and construction
Pendentive architecture (6th century)
A specific spherical form in the upper corners to support a dome. Although the first experimentation was made in the 3rd century, it was not until the 6th century, in the Byzantine Empire, that its full potential was achieved.
Artesian well (1126)
A thin rod with a hard iron cutting edge is placed in the bore hole and repeatedly struck with a hammer; underground water pressure then forces the water up the hole without pumping. Artesian wells are named after the town of Artois in France, where the first one was drilled by Carthusian monks in 1126.
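The driving principle can be stated as simple hydrostatics; the notation below is an illustrative gloss assumed here, not taken from the source.

```latex
% Water rises unaided whenever the aquifer's hydraulic head h (set by
% the elevation of its recharge zone) exceeds the elevation z of the
% wellhead; the excess head appears as positive pressure at the bore:
h > z \quad\Rightarrow\quad p = \rho g\,(h - z) > 0
```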
Central heating through underfloor channels (9th century)
In the early medieval Alpine upland, a simpler central heating system, in which heat travelled through underfloor channels from the furnace room, replaced the Roman hypocaust in some places. In Reichenau Abbey, a network of interconnected underfloor channels heated the 300 m² assembly room of the monks during the winter months. The degree of efficiency of the system has been calculated at 90%.
Rib vault (12th century)
An essential element for the rise of Gothic architecture, rib vaults allowed vaults to be built for the first time over rectangles of unequal lengths. It also greatly facilitated scaffolding and largely replaced the older groin vault.
Chimney (12th century)
The first basic chimney appeared in a Swiss monastery in 820. The earliest true chimney did not appear until the 12th century, with the fireplace appearing at the same time.
Segmental arch bridge (1345)
The Ponte Vecchio in Florence is considered Europe's first stone segmental arch bridge built since the end of classical civilization.
Treadwheel crane (1220s)
The earliest reference to a treadwheel in archival literature is in France about 1225, followed by an illuminated depiction in a manuscript of probably also French origin dating to 1240. Apart from tread-drums, windlasses and occasionally cranks were employed for powering cranes.
Stationary harbour crane (1244)
Stationary harbour cranes are considered a new development of the Middle Ages; its earliest use being documented for Utrecht in 1244. The typical harbour crane was a pivoting structure equipped with double treadwheels. There were two types: wooden gantry cranes pivoting on a central vertical axle and stone tower cranes which housed the windlass and treadwheels with only the jib arm and roof rotating. These cranes were placed on docksides for the loading and unloading of cargo where they replaced or complemented older lifting methods like see-saws, winches and yards. Slewing cranes which allowed a rotation of the load and were thus particularly suited for dockside work appeared as early as 1340.
Floating crane
Beside the stationary cranes, floating cranes which could be flexibly deployed in the whole port basin came into use by the 14th century.
Mast crane
Some harbour cranes were specialised at mounting masts to newly built sailing ships, such as in Gdańsk, Cologne and Bremen.
Wheelbarrow (1170s)
The wheelbarrow proved useful in building construction, mining operations, and agriculture. Literary evidence for the use of wheelbarrows appeared between 1170 and 1250 in north-western Europe. The first depiction is in a drawing by Matthew Paris in the mid-13th century.
Art
Oil paint (by 1125)
As early as the 13th century, oil was used to add details to tempera paintings and paint wooden statues. Flemish painter Jan van Eyck developed the use of a stable oil mixture for panel painting around 1410.
Clocks
Hourglass (1338)
Reasonably dependable, affordable and accurate measure of time. Unlike water in a clepsydra, the rate of flow of sand is independent of the depth in the upper reservoir, and the instrument is not liable to freeze. Hourglasses are a medieval innovation (first documented in Siena, Italy).
Mechanical clocks (13th to 14th centuries)
A European innovation, these weight-driven clocks were used primarily in clock towers.
Mechanics
Compound crank
The Italian physician Guido da Vigevano combined, in his 1335 Texaurus, a collection of war machines intended for the recapture of the Holy Land, two simple cranks to form a compound crank for manually powering war carriages and paddle-wheel boats. The devices were fitted directly to the vehicle's axle and to the shafts turning the paddle wheels, respectively.
Metallurgy
Blast furnace (1150–1350)
Cast iron had been made in China since before the 4th century BC. European cast iron first appears in Middle Europe (for instance Lapphyttan in Sweden, Dürstel in Switzerland and the Märkische Sauerland in Germany) around 1150, in some places according to recent research even before 1100. The technique is considered to be an independent European development.
Milling
Ship mill (6th century)
The ship mill is a Byzantine invention, designed to mill grains using hydraulic power. The technology eventually spread to the rest of Europe and was in use until ca. 1800.
Paper mill (13th century)
The first certain use of a water-powered paper mill, evidence for which is elusive in both Chinese and Muslim paper making, dates to 1282.
Rolling mill (15th century)
Used to produce metal sheets of an even thickness. First used on soft, malleable metals, such as lead, gold and tin. Leonardo da Vinci described a rolling mill for wrought iron.
Tidal mills (6th century)
The earliest tidal mills were excavated on the Irish coast where watermillers knew and employed the two main waterwheel types: a 6th-century tide mill at Killoteran near Waterford was powered by a vertical waterwheel, while the tide changes at Little Island were exploited by a twin-flume horizontal-wheeled mill (c. 630) and a vertical undershot waterwheel alongside it. Another early example is the Nendrum Monastery mill from 787 which is estimated to have developed seven to eight horsepower at its peak.
Vertical windmills (1180s)
Invented in Europe as the pivotable post mill, the first surviving mention of one comes from Yorkshire in England in 1185. They were efficient at grinding grain or draining water. Stationary tower mills were also developed in the 13th century.
Water hammer (12th century at the latest)
Used in metallurgy to forge the metal blooms from bloomeries and Catalan forges, they replaced manual hammerwork. The water hammer was eventually superseded by steam hammers in the 19th century.
Navigation
Dry compass (12th century)
The first European mention of the directional compass is in Alexander Neckam's On the Natures of Things, written in Paris around 1190. It was either transmitted from China or the Arabs or an independent European innovation. The dry compass was invented in the Mediterranean around 1300.
Astronomical compass (1269)
The French scholar Pierre de Maricourt described in his experimental study Epistola de magnete (1269) three different compass designs he had devised for the purpose of astronomical observation.
Stern-mounted rudders (1180s)
The first depiction of a pintle-and-gudgeon rudder in church carvings dates to around 1180. Such rudders first appeared with cogs in the North and Baltic Seas and quickly spread to the Mediterranean. The iron hinge system was the first stern rudder permanently attached to the ship hull and made a vital contribution to the navigational achievements of the Age of Discovery and thereafter.
Printing, paper and reading
Movable type printing press (1440s)
Johannes Gutenberg's great innovation was not the printing itself, but that, instead of using carved plates as in woodblock printing, he used separate letters (types) from which the printing plates for pages were made up. This meant the types were recyclable, and a page cast could be made up far faster.
Paper (13th century)
Paper was invented in China and transmitted to Europe through Islamic Spain in the 13th century. In Europe, the paper-making process was mechanized by water-powered mills and paper presses (see paper mill).
Rotating bookmark (13th century)
A rotating disc and string device used to mark the page, column, and precise level in the text where a person left off reading. Materials used were often leather, vellum, or paper.
Spectacles (1280s)
The first spectacles, invented in Florence, used convex lenses which were of help only to the far-sighted. Concave lenses were not developed prior to the 15th century.
Watermark (1282)
This medieval innovation was used to mark paper products and to discourage counterfeiting. It was first introduced in Bologna, Italy.
Science and learning
Theory of impetus (6th century)
A scientific theory introduced by John Philoponus, who criticized Aristotelian principles of physics. It served as an inspiration to medieval scholars as well as to Galileo Galilei, who, ten centuries later during the Scientific Revolution, extensively cited Philoponus in his works while making the case for why Aristotelian physics was flawed. It is the intellectual precursor to the concepts of inertia, momentum, and acceleration in classical mechanics.
The first extant treatise of magnetism (13th century)
The first extant treatise describing the properties of magnets was written by Petrus Peregrinus de Maricourt in his Epistola de magnete.
Arabic numerals (13th century)
The first recorded mention in Europe was in 976, and they were first widely published in 1202 by Fibonacci with his Liber Abaci.
University
The first medieval universities were founded between the 11th and 13th centuries leading to a rise in literacy and learning. By 1500, the institution had spread throughout most of Europe and played a key role in the Scientific Revolution. Today, the educational concept and institution has been globally adopted.
Textile industry and garments
Functional button (13th century)
Functional buttons appeared in 13th-century Germany as an indigenous innovation. They soon became widespread with the rise of snug-fitting clothing.
Horizontal loom (11th century)
Horizontal looms operated by foot-treadles were faster and more efficient.
Silk (6th century)
Manufacture of silk began in Eastern Europe in the 6th century and in Western Europe in the 11th or 12th century. Silk had been imported over the Silk Road since antiquity. The technology of "silk throwing" was mastered in Tuscany in the 13th century. The silk works used waterpower and some regard these as the first mechanized textile mills.
Spinning wheel (13th century)
Brought to Europe probably from India.
Miscellaneous
Chess (1450)
The earliest predecessors of the game originated in 6th-century AD India and spread via Persia and the Muslim world to Europe. Here the game evolved into its current form in the 15th century.
Forest glass (c. 1000)
This type of glass uses wood ash and sand as the main raw materials and is characterised by a variety of greenish-yellow colours.
Grindstones (834)
A grindstone is a rough stone, usually sandstone, used to sharpen iron. The first rotary grindstone (turned with a leveraged handle) occurs in the Utrecht Psalter, illustrated between 816 and 834. According to Hägermann, the pen drawing is a copy of a late-antique manuscript. A second crank, mounted on the other end of the axle, is depicted in the Luttrell Psalter from around 1340.
Liquor (12th century)
Primitive forms of distillation were known to the Babylonians, as well as Indians in the first centuries AD. Early evidence of distillation also comes from alchemists working in Alexandria, Roman Egypt, in the 1st century. The medieval Arabs adopted the distillation process, which later spread to Europe. Texts on the distillation of waters, wine, and other spirits were written in Salerno and Cologne in the twelfth and thirteenth centuries.
Liquor consumption rose dramatically in Europe in and after the mid-14th century, when distilled liquors were commonly used as remedies for the Black Death. These spirits would have had a much lower alcohol content (about 40% ABV) than the alchemists' pure distillations, and they were likely first thought of as medicinal elixirs. Around 1400, methods to distill spirits from wheat, barley, and rye were discovered. Thus began the "national" drinks of Europe, including gin (England) and grappa (Italy). In 1437, "burned water" (brandy) was mentioned in the records of the County of Katzenelnbogen in Germany.
Magnets (12th century)
Magnets were first referenced in the Roman d'Enéas, composed between 1155 and 1160.
Mirrors (1180)
The first mention of a "glass" mirror is in 1180 by Alexander Neckam, who wrote, "Take away the lead which is behind the glass and there will be no image of the one looking in."
Illustrated surgical atlas (1345)
Guido da Vigevano (c. 1280 − 1349) was the first author to add illustrations to his anatomical descriptions. His Anathomia provides pictures of neuroanatomical structures and techniques such as the dissection of the head by means of trephination, and depictions of the meninges, cerebrum, and spinal cord.
Quarantine (1377)
Initially a 40-day period, the quarantine was introduced by the Republic of Ragusa as a measure of disease prevention related to the Black Death. It was later adopted by Venice, from where the practice spread across Europe.
Rat traps (1170s)
The first mention of a rat trap is in the medieval romance Yvain, the Knight of the Lion by Chrétien de Troyes.
Military technologies
Armour
Quilted armour (pre-5th–14th Century)
There was a vast range of armour technology available from the 5th to the 16th centuries. Most soldiers during this time wore padded or quilted armour, the cheapest and most widely available armour. Quilted armour was usually just a jacket made of thick linen and wool meant to pad or soften the impact of blunt weapons and light blows. Although this technology predated the 5th century, it remained extremely prevalent because of its low cost and because the weapon technology of the time had made the bronze armour of the Greeks and Romans obsolete. Quilted armour was also used in conjunction with other types of armour, usually worn over or under leather, mail, and later plate armour.
Cuir Bouilli (5th–10th Century)
Hardened leather armour, also called cuir bouilli, was a step up from quilted armour. It was made by boiling leather in water, wax, or oil to soften it so that it could be shaped; it would then be allowed to dry and become very hard. Large pieces of armour could be made, such as breastplates, helmets, and leg guards, but often smaller pieces would be sewn into the quilting of quilted armour, or strips would be sewn together on the outside of a linen jacket. This was not as affordable as quilted armour but offered much better protection against edged slashing weapons.
Chain mail (11th–16th century)
The most common type during the 11th through the 16th centuries was the hauberk, known earlier than the 11th century as the Carolingian byrnie. Made of interlinked rings of metal, it sometimes consisted of a coif that covered the head and a tunic that covered the torso, arms, and legs down to the knees. Chain mail was very effective at protecting against light slashing blows but ineffective against stabbing or thrusting blows. Its great advantage was that it allowed great freedom of movement and was relatively light, while offering significantly better protection than quilted or hardened leather armour. It was far more expensive than hardened leather or quilted armour because of the massive amount of labor required to create it, which made it unattainable for most soldiers; only wealthier soldiers could afford it. Later, toward the end of the 13th century, banded mail became popular. Constructed of washer-shaped rings of iron overlapped and woven together by straps of leather, as opposed to the interlinked metal rings of chain mail, banded mail was much more affordable to manufacture. The washers were so tightly woven together that banded mail was very difficult to penetrate and offered greater protection from arrow and bolt attacks.
Jazerant (11th century)
The jazerant or jazeraint was an adaptation of chain mail in which the mail was sewn between layers of linen or quilted armour. It offered exceptional protection against light slashing weapons and slightly improved protection against small thrusting weapons, but little protection against large blunt weapons such as maces and axes. This gave birth to reinforced chain mail, which became more prevalent in the 12th and 13th centuries. Reinforced armour was made up of chain mail with metal or hardened leather plates sewn in, which greatly improved protection against stabbing and thrusting blows.
Scale armour (12th century)
A type of lamellar armour, scale armour was made up entirely of small, overlapping plates, either sewn together (usually with leather straps) or attached to a backing such as linen or quilted armour. Scale armour did not require the labor that chain mail did to produce and was therefore more affordable. It also afforded much better protection against thrusting blows and pointed weapons, though it was much heavier, more restrictive, and impeded free movement.
Plate armour (14th century)
Plate armour covered the entire body. Although parts of the body were already covered in plate armour as early as 1250 – such as the poleyns covering the knees and the couters protecting the elbows – the first complete suit without any textiles appeared around 1410–1430. A full suit of medieval armour consisted of a cuirass, a gorget, vambraces, gauntlets, cuisses, greaves, and sabatons held together by internal leather straps. Improved weaponry such as crossbows and the longbow had greatly increased range and power, making penetration of the chain mail hauberk much easier and more common. By the mid-15th century most plate was worn alone, without the need for a hauberk. Advances in metalworking, such as the blast furnace and new techniques for carburizing, made plate armour nearly impenetrable and the best armour protection available at the time. Although plate armour was fairly heavy, because each suit was custom tailored to the wearer it was very easy to move around in. A full suit of plate armour was extremely expensive and mostly unattainable for the majority of soldiers; only very wealthy landowners and nobility could afford it. The quality of plate armour increased as armour makers became more proficient in metalworking, and a suit of plate armour became a symbol of social status, the best being personalized with embellishments and engravings. Plate armour saw continued use in battle until the 17th century.
Cavalry
Arched saddle (11th century)
The arched saddle enabled mounted knights to wield lances underarm and prevented the charge from turning into an unintentional pole-vault. This innovation gave birth to true shock cavalry, enabling fighters to charge at full gallop.
Spurs (11th century)
Spurs were invented by the Normans and appeared at the same time as the cantled saddle. They enabled the horseman to control his horse with his feet, replacing the whip and leaving his arms free. Rowel spurs, familiar from cowboy films, were already known in the 13th century. Gilded spurs were the ultimate symbol of knighthood – even today someone is said to "earn his spurs" by proving his or her worthiness.
Stirrup (6th century)
Stirrups were invented by steppe nomads in what is today Mongolia and northern China in the 4th century. They were introduced in Byzantium in the 6th century and in the Carolingian Empire in the 8th. They allowed a mounted knight to wield a sword and strike from a distance, giving mounted cavalry a great advantage.
Gunpowder weapons
Cannon (1324)
Cannons are first recorded in Europe at the siege of Metz in 1324. In 1350 Petrarch wrote "these instruments which discharge balls of metal with most tremendous noise and flashes of fire...were a few years ago very rare and were viewed with greatest astonishment and admiration, but now they are become as common and familiar as other kinds of arms."
Volley gun
See Ribauldequin.
Corned gunpowder (late 14th century)
First practiced in Western Europe, corning black powder allowed for more powerful and faster ignition of cannons. It also facilitated the storage and transportation of black powder. Corning constituted a crucial step in the evolution of gunpowder warfare.
Very large-calibre cannon (late 14th century)
Extant examples include the wrought-iron Pumhart von Steyr, Dulle Griet and Mons Meg as well as the cast-bronze Faule Mette and Faule Grete (all from the 15th century).
Mechanical artillery
Counterweight trebuchet (12th century)
Powered solely by the force of gravity, these catapults revolutionized medieval siege warfare and the construction of fortifications by hurling huge stones unprecedented distances. Originating somewhere in the eastern Mediterranean basin, counterweight trebuchets were introduced in the Byzantine Empire around 1100 CE and were later adopted by the Crusader states, as well as by other armies of Europe and Asia.
Missile weapons
Greek fire (7th century)
An incendiary weapon that could even burn on water is also attributed to the Byzantines, who installed it on their ships. It played a crucial role in the Byzantine Empire's victory over the Umayyad Caliphate during the 717–718 siege of Constantinople.
Grenade (8th century)
Rudimentary incendiary grenades appeared in the Byzantine Empire, as Byzantine soldiers learned that Greek fire, a Byzantine invention of the previous century, could not only be projected by flamethrowers at the enemy, but also thrown in stone and ceramic jars.
Longbow with massed, disciplined archery (13th century)
Having a high rate of fire and penetration power, the longbow contributed to the eventual demise of the medieval knight class. It was used particularly by the English to great effect against the French cavalry during the Hundred Years' War (1337–1453).
Steel crossbow (late 14th century)
This European innovation came with several different cocking aids to enhance draw power, making these weapons the first hand-held mechanical crossbows.
Miscellaneous
Combined arms tactics (14th century)
The Battle of Halidon Hill in 1333 was the first battle in which intentional and disciplined combined arms infantry tactics were employed. The English men-at-arms dismounted alongside the archers, thus combining the staying power of super-heavy infantry and the striking power of their two-handed weapons with the missiles and mobility of the archers using longbows and shortbows. Combining dismounted knights and men-at-arms with archers remained the archetypal Western medieval battle tactic until the Battle of Flodden in 1513 and the final emergence of firearms.
See also
Earlier periods:
Ancient Greek technology
Ancient Roman technology
Medieval period:
Medieval medicine of Western Europe
Medieval transport
Renaissance of the 12th century
Islamic Golden Age
List of inventions in the medieval Islamic world
History of science and technology in the Indian subcontinent
General:
History of technology
External links
The Medieval Technology Pages
Technology in the Medieval Age
Technology by period
Democratization

Democratization, or democratisation, is the structural government transition from an authoritarian government to a more democratic political regime, including substantive political changes moving in a democratic direction.
Whether and to what extent democratization occurs can be influenced by various factors, including economic development, historical legacies, civil society, and international processes. Some accounts of democratization emphasize how elites drove democratization, whereas other accounts emphasize grassroots bottom-up processes. How democratization occurs has also been used to explain other political phenomena, such as whether a country goes to war or whether its economy grows.
The opposite process is known as democratic backsliding or autocratization.
Description
Theories of democratization seek to explain a large macro-level change of a political regime from authoritarianism to democracy. Symptoms of democratization include reform of the electoral system, increased suffrage and reduced political apathy.
Measures of democratization
Democracy indices enable the quantitative assessment of democratization. Some common democracy indices are Freedom House, the Polity data series, the V-Dem Democracy indices, and the Democracy Index. Democracy indices can be quantitative or categorical. Some disagreements among scholars concern the concept of democracy, how to measure it, and which democracy indices should be used.
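As a rough illustration of the difference between quantitative and categorical measurement, the sketch below aggregates a set of component scores into a continuous index and then bins it into regime categories. This is a hypothetical toy example: the component names, equal weighting, and cut-offs are invented for illustration and do not reproduce the coding rules of Freedom House, Polity, V-Dem, or the Democracy Index.

```python
# Toy democracy index: a quantitative score vs. a categorical label.
# All component names and thresholds here are hypothetical, not taken
# from any real index.

def quantitative_index(components: dict) -> float:
    """Average several 0-1 component scores into one continuous score."""
    return sum(components.values()) / len(components)

def categorical_index(score: float) -> str:
    """Bin a continuous score into coarse regime categories."""
    if score >= 0.7:
        return "democracy"
    if score >= 0.4:
        return "hybrid regime"
    return "autocracy"

country = {"free_elections": 0.8, "civil_liberties": 0.6, "press_freedom": 0.5}
score = quantitative_index(country)
print(round(score, 2), categorical_index(score))  # 0.63 hybrid regime
```

A real index differs mainly in how the components are defined, measured, and weighted, which is precisely where the scholarly disagreements mentioned above arise.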
Waves of democratization
One way to summarize the outcome that theories of democratization seek to account for is with the idea of waves of democratization.
A wave of democratization refers to a major surge of democracy in history. Samuel P. Huntington identified three waves of democratization that have taken place in history. The first brought democracy to Western Europe and North America in the 19th century. It was followed by a rise of dictatorships during the interwar period. The second wave began after World War II but lost steam between 1962 and the mid-1970s. The latest wave began in 1974 and is still ongoing. The democratization of Latin America and the former Eastern Bloc is part of this third wave.
Waves of democratization can be followed by waves of de-democratization. Thus, Huntington, in 1991, offered the following depiction.
• First wave of democratization, 1828–1926
• First wave of de-democratization, 1922–42
• Second wave of democratization, 1943–62
• Second wave of de-democratization, 1958–75
• Third wave of democratization, 1974–
The idea of waves of democratization has also been used and scrutinized by many other authors, including Renske Doorenspleet, John Markoff, Seva Gunitsky, and Svend-Erik Skaaning.
According to Seva Gunitsky, from the 18th century to the Arab Spring (2011–2012), 13 democratic waves can be identified.
The V-Dem Democracy Report identified for 2023 nine cases of stand-alone democratization (East Timor, The Gambia, Honduras, Fiji, the Dominican Republic, the Solomon Islands, Montenegro, the Seychelles, and Kosovo) and nine cases of U-turn democratization (Thailand, the Maldives, Tunisia, Bolivia, Zambia, Benin, North Macedonia, Lesotho, and Brazil).
By country
Throughout the history of democracy, enduring democracy advocates have succeeded almost always through peaceful means when there is a window of opportunity. One major type of opportunity includes governments weakened after a violent shock. The other main avenue occurs when autocrats, not threatened by elections, democratize while retaining power. The path to democracy can be long, with setbacks along the way.
Athens
Benin
Brazil
Chile
France
The French Revolution (1789) briefly allowed a wide franchise. The French Revolutionary Wars and the Napoleonic Wars lasted for more than twenty years. The French Directory was more oligarchic. The First French Empire and the Bourbon Restoration restored more autocratic rule. The French Second Republic had universal male suffrage but was followed by the Second French Empire. The Franco-Prussian War (1870–71) resulted in the French Third Republic.
Germany
Germany established its first democracy in 1919 with the creation of the Weimar Republic, a parliamentary republic created following the German Empire's defeat in World War I. The Weimar Republic lasted only 14 years before it collapsed and was replaced by the Nazi dictatorship. Historians continue to debate the reasons why the Weimar Republic's attempt at democratization failed. After Germany was militarily defeated in World War II, democracy was reestablished in West Germany during the U.S.-led occupation, which undertook the denazification of society.
United Kingdom
In Great Britain, there was renewed interest in Magna Carta in the 17th century. The Parliament of England enacted the Petition of Right in 1628, which established certain liberties for subjects. The English Civil War (1642–1651) was fought between the King and an oligarchic but elected Parliament, during which the idea of a political party took form with groups debating rights to political representation during the Putney Debates of 1647. Subsequently, the Protectorate (1653–59) and the English Restoration (1660) restored more autocratic rule, although Parliament passed the Habeas Corpus Act in 1679, which strengthened the convention forbidding detention without sufficient cause or evidence. The Glorious Revolution of 1688 established a strong Parliament that passed the Bill of Rights 1689, which codified certain rights and liberties for individuals. It set out the requirement for regular parliaments, free elections, and rules for freedom of speech in Parliament, and it limited the power of the monarch, ensuring that, unlike much of the rest of Europe, royal absolutism would not prevail. Only with the Representation of the People Act 1884 did a majority of males get the vote.
Greece
Indonesia
Italy
In September 1847, violent riots inspired by liberals broke out in Reggio Calabria and in Messina in the Kingdom of the Two Sicilies, and were put down by the military. On 12 January 1848 a rising in Palermo spread throughout the island and served as a spark for the Revolutions of 1848 all over Europe. After similar revolutionary outbursts in Salerno, south of Naples, and in the Cilento region, which were backed by the majority of the intelligentsia of the kingdom, on 29 January 1848 King Ferdinand II of the Two Sicilies was forced to grant a constitution, modelled on the French Charter of 1830. This constitution was quite advanced for its time in liberal democratic terms, as was the proposal of a unified Italian confederation of states. On 11 February 1848, Leopold II of Tuscany, first cousin of Emperor Ferdinand I of Austria, granted a constitution, with the general approval of his subjects. The Habsburg example was followed by Charles Albert of Sardinia (the Albertine Statute, which later became the constitution of the unified Kingdom of Italy and remained in force, with changes, until 1948) and by Pope Pius IX (the Fundamental Statute). However, only King Charles Albert maintained his statute even after the end of the riots.
The Kingdom of Italy, after the unification of Italy in 1861, was a constitutional monarchy. The new kingdom was governed by a parliamentary constitutional monarchy dominated by liberals. The Italian Socialist Party increased in strength, challenging the traditional liberal and conservative establishment. From 1915 to 1918, the Kingdom of Italy took part in World War I on the side of the Entente and against the Central Powers. In 1922, following a period of crisis and turmoil, the Italian fascist dictatorship was established. During World War II, Italy was first part of the Axis until it surrendered to the Allied powers (1940–1943) and then, as part of its territory was occupied by Nazi Germany with fascist collaboration, a co-belligerent of the Allies during the Italian resistance, the subsequent Italian Civil War, and the liberation of Italy (1943–1945). The aftermath of World War II also left Italy with anger against the monarchy for its endorsement of the Fascist regime over the previous twenty years. These frustrations contributed to a revival of the Italian republican movement. Italy became a republic after the 1946 Italian institutional referendum held on 2 June, a day celebrated since as Festa della Repubblica. Italy has a written democratic constitution, resulting from the work of a Constituent Assembly formed by the representatives of all the anti-fascist forces that contributed to the defeat of Nazi and Fascist forces during the liberation of Italy and the Italian Civil War, which came into force on 1 January 1948.
Japan
In Japan, limited democratic reforms were introduced during the Meiji period (when the industrial modernization of Japan began), the Taishō period (1912–1926), and the early Shōwa period. Despite pro-democracy movements such as the Freedom and People's Rights Movement (1870s and 1880s) and some proto-democratic institutions, Japanese society remained constrained by a highly conservative society and bureaucracy. Historian Kent E. Calder notes that the "Meiji leadership embraced constitutional government with some pluralist features for essentially tactical reasons" and that pre-World War II Japanese society was dominated by a "loose coalition" of "landed rural elites, big business, and the military" that was averse to pluralism and reformism. While the Imperial Diet survived the impacts of Japanese militarism, the Great Depression, and the Pacific War, other pluralistic institutions, such as political parties, did not. After World War II, during the Allied occupation, Japan adopted a much more vigorous, pluralistic democracy.
Madagascar
Malawi
Latin America
Countries in Latin America became independent between 1810 and 1825, and soon had some early experiences with representative government and elections. All Latin American countries established representative institutions soon after independence, the early cases being those of Colombia in 1810, Paraguay and Venezuela in 1811, and Chile in 1818. Adam Przeworski shows that some experiments with representative institutions in Latin America occurred earlier than in most European countries. Mass democracy, in which the working class had the right to vote, became common only in the 1930s and 1940s.
Portugal
Senegal
Spain
South Africa
South Korea
Soviet Union
Switzerland
Roman Republic
Tunisia
Ukraine
United States of America
The American Revolution (1765–1783) created the United States. The new Constitution established a relatively strong federal national government that included an executive, a national judiciary, and a bicameral Congress that represented states in the Senate and the population in the House of Representatives. Ideologically, it was a success in the sense that a true republic was established that never had a single dictator, but voting rights were initially restricted to white male property owners (about 6% of the population). Slavery was not abolished in the Southern states until the constitutional amendments of the Reconstruction era following the American Civil War (1861–1865). The provision of civil rights for African Americans to overcome post-Reconstruction Jim Crow segregation in the South was achieved in the 1960s.
Causes and factors
There is considerable debate about the factors which affect (e.g., promote or limit) democratization. Factors discussed include economic, political, cultural, international, and historical factors, as well as individual agents and their choices.
Economic factors
Economic development and modernization theory
Scholars such as Seymour Martin Lipset; Carles Boix and Susan Stokes; and Dietrich Rueschemeyer, Evelyne Stephens, and John Stephens argue that economic development increases the likelihood of democratization. Initially argued by Lipset in 1959, this thesis has subsequently been referred to as modernization theory. According to Daniel Treisman, there is "a strong and consistent relationship between higher income and both democratization and democratic survival in the medium term (10–20 years), but not necessarily in shorter time windows." Robert Dahl argued that market economies provided favorable conditions for democratic institutions.
A higher GDP per capita correlates with democracy, and some claim the wealthiest democracies have never been observed to fall into authoritarianism. The rise of Hitler and of the Nazis in Weimar Germany can be seen as an obvious counter-example, but although Germany in the early 1930s was already an advanced economy, the country had also been living in a state of economic crisis virtually since the First World War (in the 1910s), a crisis eventually worsened by the effects of the Great Depression. There is also the general observation that democracy was very rare before the Industrial Revolution. Empirical research has thus led many to believe that economic development either increases the chances of a transition to democracy or helps newly established democracies consolidate. One study finds that economic development prompts democratization but only in the medium run (10–20 years), because development may entrench the incumbent leader while making it more difficult for him to deliver the state to a son or trusted aide when he exits. However, the debate about whether democracy is a consequence of wealth, a cause of it, or whether the two processes are unrelated, is far from conclusive. Another study suggests that economic development depends on the political stability of a country to promote democracy.

Clark, Golder, and Golder, in their reformulation of Albert Hirschman's model of exit, voice, and loyalty, explain that it is not the increase of wealth in a country per se which influences a democratization process, but rather the changes in socio-economic structures that come with the increase of wealth. These structural changes have been identified as one of the main reasons several European countries became democratic. When their socioeconomic structures shifted because modernization made the agricultural sector more efficient, greater investments of time and resources were directed to the manufacturing and service sectors. In England, for example, members of the gentry began investing more in commercial activities that allowed them to become economically more important for the state. These new kinds of productive activity came with new economic power: assets became more difficult for the state to count and hence more difficult to tax. Because of this, predation was no longer possible and the state had to negotiate with the new economic elites to extract revenue. A sustainable bargain had to be reached, because the state became more dependent on its citizens remaining loyal; with this, citizens now had leverage to be taken into account in the decision-making process for the country.
Adam Przeworski and Fernando Limongi argue that while economic development makes democracies less likely to turn authoritarian, there is insufficient evidence to conclude that development causes democratization (turning an authoritarian state into a democracy). Economic development can boost public support for authoritarian regimes in the short-to-medium term. Andrew J. Nathan argues that China is a problematic case for the thesis that economic development causes democratization. Michael Miller finds that development increases the likelihood of "democratization in regimes that are fragile and unstable, but makes this fragility less likely to begin with."
There is research to suggest that greater urbanization, through various pathways, contributes to democratization.
Numerous scholars and political thinkers have linked a large middle class to the emergence and sustenance of democracy, whereas others have challenged this relationship.
In "Non-Modernization" (2022), Daron Acemoglu and James A. Robinson argue that modernization theory cannot account for various paths of political development "because it posits a link between economics and politics that is not conditional on institutions and culture and that presumes a definite endpoint—for example, an 'end of history'."
A meta-analysis by Gerardo L. Munck of research on Lipset's argument shows that a majority of studies do not support the thesis that higher levels of economic development lead to more democracy.
A 2024 study linked industrialization to democratization, arguing that large-scale employment in manufacturing made mass mobilization easier to occur and harder to repress.
Capital mobility
Whereas theories of the causes of democratization such as economic development focus on the accumulation of capital, capital mobility concerns the movement of money across countries' borders through different financial instruments, and the corresponding restrictions. Over the past decades there have been multiple theories as to the relationship between capital mobility and democratization.
The "doomsday view" is that capital mobility is an inherent threat to underdeveloped democracies, worsening economic inequalities and favoring the interests of powerful elites and external actors over broader societal interests; it might also make a country dependent on money from outside and therefore vulnerable to the economic situation in other countries. Sylvia Maxfield, by contrast, argues that a greater demand for transparency in both the private and public sectors by some investors can contribute to a strengthening of democratic institutions and can encourage democratic consolidation.
A 2016 study found that preferential trade agreements can increase democratization of a country, especially in the case of trade with other democracies. A 2020 study found that increased trade between democracies reduces democratic backsliding, while trade between democracies and autocracies reduces democratization of the autocracies. Trade and capital mobility often involve international organizations, such as the International Monetary Fund (IMF), the World Bank, and the World Trade Organization (WTO), which can condition financial assistance or trade agreements on democratic reforms.
Classes, cleavages and alliances
Sociologist Barrington Moore Jr., in his influential Social Origins of Dictatorship and Democracy (1966), argues that the distribution of power among classes – the peasantry, the bourgeoisie, and the landed aristocracy – and the nature of alliances between classes determined whether democratic, authoritarian or communist revolutions occurred. Moore also argued there were at least "three routes to the modern world" – the liberal democratic, the fascist, and the communist – each deriving from the timing of industrialization and the social structure at the time of transition. Thus, Moore challenged modernization theory, by stressing that there was not one path to the modern world and that economic development did not always bring about democracy.
Many authors have questioned parts of Moore's arguments. Dietrich Rueschemeyer, Evelyne Stephens, and John D. Stephens, in Capitalist Development and Democracy (1992), raise questions about Moore's analysis of the role of the bourgeoisie in democratization. Eva Bellin argues that under certain circumstances the bourgeoisie and labor are more likely to favor democratization, but less so under other circumstances. Samuel Valenzuela argues that, counter to Moore's view, the landed elite supported democratization in Chile. A comprehensive assessment conducted by James Mahoney concludes that "Moore's specific hypotheses about democracy and authoritarianism receive only limited and highly conditional support."
A 2020 study linked democratization to the mechanization of agriculture: as landed elites became less reliant on the repression of agricultural workers, they became less hostile to democracy.
According to political scientist David Stasavage, representative government is "more likely to occur when a society is divided across multiple political cleavages." A 2021 study found that constitutions that emerge through pluralism (reflecting distinct segments of society) are more likely to induce liberal democracy (at least, in the short term).
Political-economic factors
Rulers' need for taxation
Robert Bates and Donald Lien, as well as David Stasavage, have argued that rulers' need for taxes gave asset-owning elites the bargaining power to demand a say on public policy, thus giving rise to democratic institutions. Montesquieu argued that the mobility of commerce meant that rulers had to bargain with merchants in order to tax them, otherwise they would leave the country or hide their commercial activities. Stasavage argues that the small size and backwardness of European states, as well as the weakness of European rulers, after the fall of the Roman Empire meant that European rulers had to obtain consent from their population to govern effectively.
According to Clark, Golder, and Golder, an application of Albert O. Hirschman's exit, voice, and loyalty model is that if individuals have plausible exit options, then a government may be more likely to democratize. James C. Scott argues that governments may find it difficult to claim sovereignty over a population when that population is in motion. Scott additionally asserts that exit may not solely include physical exit from the territory of a coercive state, but can include a number of adaptive responses to coercion that make it more difficult for states to claim sovereignty over a population. These responses can include planting crops that are more difficult for states to count, or tending livestock that are more mobile. In fact, the entire political arrangement of a state is a result of individuals adapting to the environment and making a choice as to whether or not to stay in a territory. If people are free to move, then the exit, voice, and loyalty model predicts that a state will have to be representative of that population and appease the populace in order to prevent them from leaving. If individuals have plausible exit options, they are better able to constrain a government's arbitrary behaviour through the threat of exit.
Inequality and democracy
Daron Acemoglu and James A. Robinson argued that the relationship between social equality and democratic transition is complicated: People have less incentive to revolt in an egalitarian society (for example, Singapore), so the likelihood of democratization is lower. In a highly unequal society (for example, South Africa under Apartheid), the redistribution of wealth and power in a democracy would be so harmful to elites that these would do everything to prevent democratization. Democratization is more likely to emerge somewhere in the middle, in countries whose elites offer concessions because (1) they consider the threat of a revolution credible and (2) the cost of the concessions is not too high. This expectation is in line with empirical research showing that democracy is more stable in egalitarian societies.
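The logic of this argument can be summarized, very roughly, as a cost comparison faced by elites. The following formalization is illustrative only; the symbols p, L, and C are not Acemoglu and Robinson's own notation:

```latex
% Illustrative sketch only; the notation is not Acemoglu and Robinson's.
% p = probability that a revolution succeeds, L = elites' loss if it does,
% C = elites' cost of democratic concessions (redistribution).
\[
  p \cdot L > C \quad \Longrightarrow \quad \text{elites concede (democratization)}
\]
% In a very equal society the threat is weak (p and L are small), so
% p*L < C and elites need not concede; in a very unequal society the
% concessions are ruinous (C is large), so elites repress instead.
% Democratization is therefore most likely at intermediate inequality.
```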
Other approaches to the relationship between inequality and democracy have been presented by Carles Boix, Stephan Haggard and Robert Kaufman, and Ben Ansell and David Samuels.
In their 2019 book The Narrow Corridor and a 2022 study in the American Political Science Review, Acemoglu and Robinson argue that the nature of the relationship between elites and society determines whether stable democracy emerges. When elites are overly dominant, despotic states emerge. When society is overly dominant, weak states emerge. When elites and society are evenly balanced, inclusive states emerge.
Natural resources
Research shows that oil wealth lowers levels of democracy and strengthens autocratic rule. According to Michael Ross, petroleum is the sole resource that has "been consistently correlated with less democracy and worse institutions" and is the "key variable in the vast majority of the studies" identifying some type of resource curse effect. A 2014 meta-analysis confirms the negative impact of oil wealth on democratization.
Thad Dunning proposes a plausible explanation for Ecuador's return to democracy that contradicts the conventional wisdom that natural resource rents encourage authoritarian governments. Dunning proposes that there are situations where natural resource rents, such as those acquired through oil, reduce the risk that distributive or social policies pose to the elite, because the state has sources of revenue other than elite wealth or income with which to finance such policies. In countries plagued with high inequality, which was the case in Ecuador in the 1970s, the result would be a higher likelihood of democratization. In 1972, a military coup had overthrown the government, in large part because of elite fears that redistribution would take place. That same year, oil became an increasing financial source for the country. Although the rents were used to finance the military, the eventual second oil boom of 1979 ran parallel to the country's re-democratization. Ecuador's re-democratization can then be attributed, as argued by Dunning, to the large increase in oil rents, which enabled not only a surge in public spending but also placated the fears of redistribution that had gripped elite circles. The exploitation of Ecuador's resource rents enabled the government to implement price and wage policies that benefited citizens at no cost to the elite, allowing for a smooth transition to and growth of democratic institutions.
The thesis that oil and other natural resources have a negative impact on democracy has been challenged by historian Stephen Haber and political scientist Victor Menaldo in a widely cited article in the American Political Science Review (2011). Haber and Menaldo argue that "natural resource reliance is not an exogenous variable" and find that when tests of the relationship between natural resources and democracy take this point into account "increases in resource reliance are not associated with authoritarianism."
Cultural factors
Values and religion
It is claimed by some that certain cultures are simply more conducive to democratic values than others. This view is likely to be ethnocentric. Typically, it is Western culture which is cited as "best suited" to democracy, with other cultures portrayed as containing values which make democracy difficult or undesirable. This argument is sometimes used by undemocratic regimes to justify their failure to implement democratic reforms. Today, however, there are many non-Western democracies. Examples include: India, Japan, Indonesia, Namibia, Botswana, Taiwan, and South Korea. Research finds that "Western-educated leaders significantly and substantively improve a country's democratization prospects".
Huntington presented influential but controversial arguments about Confucianism and Islam. Huntington held that "In practice Confucian or Confucian-influenced societies have been inhospitable to democracy." He also held that "Islamic doctrine ... contains elements that may be both congenial and uncongenial to democracy," but generally thought that Islam was an obstacle to democratization. In contrast, Alfred Stepan was more optimistic about the compatibility of different religions and democracy.
Steven Fish and Robert Barro have linked Islam to undemocratic outcomes. However, Michael Ross argues that the lack of democracies in some parts of the Muslim world has more to do with the adverse effects of the resource curse than Islam. Lisa Blaydes and Eric Chaney have linked the democratic divergence between the West and the Middle East to the reliance on mamluks (slave soldiers) by Muslim rulers, whereas European rulers had to rely on local elites for military forces, thus giving those elites bargaining power to push for representative government.
Robert Dahl argued, in On Democracy, that countries with a "democratic political culture" were more prone to democratization and democratic survival. He also argued that cultural homogeneity and smallness contribute to democratic survival. Other scholars have however challenged the notion that small states and homogeneity strengthen democracy.
A 2012 study found that areas in Africa with Protestant missionaries were more likely to become stable democracies. A 2020 study failed to replicate those findings.
Sirianne Dahlum and Carl Henrik Knutsen offer a test of the Ronald Inglehart and Christian Welzel revised version of modernization theory, which focuses on cultural traits triggered by economic development that are presumed to be conducive to democratization. They find "no empirical support" for the Inglehart and Welzel thesis and conclude that "self-expression values do not enhance democracy levels or democratization chances, and neither do they stabilize existing democracies."
Education
It has long been theorized that education promotes stable and democratic societies. Research shows that education leads to greater political tolerance, increases the likelihood of political participation and reduces inequality. One study finds "that increases in levels of education improve levels of democracy and that the democratizing effect of education is more intense in poor countries".
It is commonly claimed that democracy and democratization were important drivers of the expansion of primary education around the world. However, new evidence from historical education trends challenges this assertion. An analysis of historical student enrollment rates for 109 countries from 1820 to 2010 finds no support for the claim that democratization increased access to primary education around the world. It is true that transitions to democracy often coincided with an acceleration in the expansion of primary education, but the same acceleration was observed in countries that remained non-democratic.
Wider adoption of voting advice applications can lead to increased education on politics and increased voter turnout.
Social capital and civil society
Civil society refers to a collection of non-governmental organizations and institutions that advance the interests, priorities and will of citizens. Social capital refers to features of social life—networks, norms, and trust—that allow individuals to act together to pursue shared objectives.
Robert Putnam argues that certain characteristics make societies more likely to have cultures of civic engagement that lead to more participatory democracies. According to Putnam, communities with denser horizontal networks of civic association are able to better build the "norms of trust, reciprocity, and civic engagement" that lead to democratization and well-functioning participatory democracies. By contrasting communities in Northern Italy, which had dense horizontal networks, to communities in Southern Italy, which had more vertical networks and patron-client relations, Putnam asserts that the latter never built the culture of civic engagement that some deem as necessary for successful democratization.
Sheri Berman has rebutted Putnam's theory that civil society contributes to democratization, writing that in the case of the Weimar Republic, civil society facilitated the rise of the Nazi Party. According to Berman, Germany's democratization after World War I allowed for a renewed development in the country's civil society; however, Berman argues that this vibrant civil society eventually weakened democracy within Germany as it exacerbated existing social divisions due to the creation of exclusionary community organizations. Subsequent empirical research and theoretical analysis has lent support for Berman's argument. Yale University political scientist Daniel Mattingly argues civil society in China helps the authoritarian regime in China to cement control. Clark, M. Golder, and S. Golder also argue that despite many believing democratization requires a civic culture, empirical evidence produced by several reanalyses of past studies suggest this claim is only partially supported. Philippe C. Schmitter also asserts that the existence of civil society is not a prerequisite for the transition to democracy, but rather democratization is usually followed by the resurrection of civil society (even if it did not exist previously).
Research indicates that democracy protests are associated with democratization. According to a study by Freedom House, in 67 countries where dictatorships have fallen since 1972, nonviolent civic resistance was a strong influence over 70 percent of the time. In these transitions, changes were catalyzed not through foreign invasion, and only rarely through armed revolt or voluntary elite-driven reforms, but overwhelmingly by democratic civil society organizations utilizing nonviolent action and other forms of civil resistance, such as strikes, boycotts, civil disobedience, and mass protests. A 2016 study found that about a quarter of all cases of democracy protests between 1989 and 2011 led to democratization.
Theories based on political agents and choices
Elite-opposition negotiations and contingency
Scholars such as Dankwart A. Rustow, and Guillermo O'Donnell and Philippe C. Schmitter in their classic Transitions from Authoritarian Rule: Tentative Conclusions about Uncertain Democracies (1986), argued against the notion that there are structural "big" causes of democratization. These scholars instead emphasize how the democratization process occurs in a more contingent manner that depends on the characteristics and circumstances of the elites who ultimately oversee the shift from authoritarianism to democracy.
O'Donnell and Schmitter proposed a strategic choice approach to transitions to democracy that highlighted how they were driven by the decisions of different actors in response to a core set of dilemmas. The analysis centered on the interaction among four actors: the hard-liners and soft-liners who belonged to the incumbent authoritarian regime, and the moderate and radical oppositions against the regime. This book not only became the point of reference for a burgeoning academic literature on democratic transitions, it was also read widely by political activists engaged in actual struggles to achieve democracy.
Adam Przeworski, in Democracy and the Market (1991), offered the first analysis of the interaction between rulers and opposition in transitions to democracy using rudimentary game theory, and he emphasized the interdependence of political and economic transformations.
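To give a flavor of what such a game-theoretic treatment involves, the sketch below writes down a toy one-shot game between a ruler and an opposition and enumerates its pure-strategy Nash equilibria. The moves and payoff numbers are invented for illustration (they assume repression is costly enough that the ruler prefers a negotiated opening); this is not Przeworski's actual model.

```python
# Toy ruler-vs-opposition game with hypothetical payoffs; not Przeworski's model.
# payoffs[(ruler_move, opposition_move)] = (ruler_payoff, opposition_payoff)
payoffs = {
    ("repress",    "acquiesce"): (2, 0),
    ("repress",    "mobilize"):  (0, 1),   # conflict is costly for the ruler
    ("liberalize", "acquiesce"): (3, 2),   # a negotiated opening
    ("liberalize", "mobilize"):  (1, 1),
}
ruler_moves = ["repress", "liberalize"]
opp_moves = ["acquiesce", "mobilize"]

def is_nash(r, o):
    """A profile is a Nash equilibrium if neither player gains by deviating."""
    ru, ou = payoffs[(r, o)]
    ruler_ok = all(payoffs[(r2, o)][0] <= ru for r2 in ruler_moves)
    opp_ok = all(payoffs[(r, o2)][1] <= ou for o2 in opp_moves)
    return ruler_ok and opp_ok

print([(r, o) for r in ruler_moves for o in opp_moves if is_nash(r, o)])
# -> [('liberalize', 'acquiesce')]: with these payoffs the unique
#    equilibrium is a negotiated transition.
```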
Elite-driven democratization
Scholars have argued that processes of democratization may be elite-driven or driven by the authoritarian incumbents as a way for those elites to retain power amid popular demands for representative government. If the costs of repression are higher than the costs of giving away power, authoritarians may opt for democratization and inclusive institutions. According to a 2020 study, authoritarian-led democratization is more likely to lead to lasting democracy in cases when the party strength of the authoritarian incumbent is high. However, Michael Albertus and Victor Menaldo argue that democratizing rules implemented by outgoing authoritarians may distort democracy in favor of the outgoing authoritarian regime and its supporters, resulting in "bad" institutions that are hard to get rid of. According to Michael K. Miller, elite-driven democratization is particularly likely in the wake of major violent shocks (either domestic or international) which provide openings to opposition actors to the authoritarian regime. Dan Slater and Joseph Wong argue that dictators in Asia chose to implement democratic reforms when they were in positions of strength in order to retain and revitalize their power.
According to a study by political scientist Daniel Treisman, influential theories of democratization posit that autocrats "deliberately choose to share or surrender power. They do so to prevent revolution, motivate citizens to fight wars, incentivize governments to provide public goods, outbid elite rivals, or limit factional violence." His study shows that in many cases, "democratization occurred not because incumbent elites chose it but because, in trying to prevent it, they made mistakes that weakened their hold on power. Common mistakes include: calling elections or starting military conflicts, only to lose them; ignoring popular unrest and being overthrown; initiating limited reforms that get out of hand; and selecting a covert democrat as leader. These mistakes reflect well-known cognitive biases such as overconfidence and the illusion of control."
Sharun Mukand and Dani Rodrik dispute that elite-driven democratization produces liberal democracy. They argue that low levels of inequality and weak identity cleavages are necessary for liberal democracy to emerge. A 2020 study by several political scientists from German universities found that democratization through bottom-up peaceful protests led to higher levels of democracy and democratic stability than democratization prompted by elites.
The three dictatorship types – monarchic, civilian, and military – have different approaches to democratization, as a result of their individual goals. Monarchic and civilian dictatorships seek to remain in power indefinitely, through hereditary rule in the case of monarchs or through oppression in the case of civilian dictators. A military dictatorship seizes power to act as a caretaker government to replace what it considers a flawed civilian government. Military dictatorships are more likely to transition to democracy because, at the onset, they are meant to be stop-gap solutions while a new acceptable government forms.
Research suggests that the threat of civil conflict encourages regimes to make democratic concessions. A 2016 study found that drought-induced riots in Sub-Saharan Africa led regimes, fearing conflict, to make democratic concessions.
Scrambled constituencies
Mancur Olson theorizes that the process of democratization occurs when elites are unable to reconstitute an autocracy. Olson suggests that this occurs when constituencies or identity groups are mixed within a geographic region. He asserts that these mixed geographic constituencies require elites to form democratic and representative institutions to control the region and to limit the power of competing elite groups.
Death or ouster of dictator
One analysis found that "Compared with other forms of leadership turnover in autocracies—such as coups, elections, or term limits—which lead to regime collapse about half of the time, the death of a dictator is remarkably inconsequential. ... of the 79 dictators who have died in office (1946–2014)... in the vast majority (92%) of cases, the regime persists after the autocrat's death."
Women's suffrage
One of the critiques of Huntington's periodization is that it doesn't give enough weight to universal suffrage. Pamela Paxton argues that once women's suffrage is taken into account, the data reveal "a long, continuous democratization period from 1893–1958, with only war-related reversals."
International factors
War and national security
Jeffrey Herbst, in his paper "War and the State in Africa" (1990), explains how democratization in European states was achieved through political development fostered by war-making, and these "lessons from the case of Europe show that war is an important cause of state formation that is missing in Africa today." Herbst writes that war and the threat of invasion by neighbors caused European states to collect revenue more efficiently, forced leaders to improve administrative capabilities, and fostered state unification and a sense of national identity (a common, powerful association between the state and its citizens). Herbst writes that in Africa and elsewhere in the non-European world "states are developing in a fundamentally new environment" because they mostly "gained independence without having to resort to combat and have not faced a security threat since independence." Herbst notes that the strongest non-European states, South Korea and Taiwan, are "largely 'warfare' states that have been molded, in part, by the near constant threat of external aggression."
Elizabeth Kier has challenged claims that total war prompts democratization, showing that in the cases of the UK and Italy during World War I, the policies adopted by the Italian government prompted a fascist backlash, whereas UK government policies towards labor undermined broader democratization.
War and peace
Wars may contribute to the state-building that precedes a transition to democracy, but war is mainly a serious obstacle to democratization. While adherents of the democratic peace theory believe that democracy causes peace, the territorial peace theory makes the opposite claim that peace causes democracy. In fact, war and territorial threats to a country are likely to increase authoritarianism and lead to autocracy.
This is supported by historical evidence showing that in almost all cases, peace has come before democracy. A number of scholars have argued that there is little support for the hypothesis that democracy causes peace, but strong evidence for the opposite hypothesis that peace leads to democracy.
Christian Welzel's human empowerment theory posits that existential security leads to emancipative cultural values and support for a democratic political organization. This is in agreement with theories based on evolutionary psychology. The so-called regality theory finds that people develop a psychological preference for a strong leader and an authoritarian form of government in situations of war or perceived collective danger. On the other hand, people will support egalitarian values and a preference for democracy in situations of peace and safety. The consequence of this is that a society will develop in the direction of autocracy and an authoritarian government when people perceive collective danger, while the development in the democratic direction requires collective safety.
International institutions
A number of studies have found that international institutions have helped facilitate democratization. Scholars have also linked NATO expansion with playing a role in democratization. Global forces, such as the diffusion of democratic ideas and pressure from international financial institutions to democratize, have significantly affected democratization.
Promotion, foreign influence, and intervention
The European Union has contributed to the spread of democracy, in particular by encouraging democratic reforms in aspiring member states. Thomas Risse wrote in 2009 that "there is a consensus in the literature on Eastern Europe that the EU membership perspective had huge anchoring effects for the new democracies."
Steven Levitsky and Lucan Way have argued that close ties to the West increased the likelihood of democratization after the end of the Cold War, whereas states with weak ties to the West adopted competitive authoritarian regimes.
A 2002 study found that membership in regional organizations "is correlated with transitions to democracy during the period from 1950 to 1992."
A 2004 study found no evidence that foreign aid led to democratization.
Democracies have often been imposed by military intervention, for example in Japan and Germany after World War II. In other cases, decolonization sometimes facilitated the establishment of democracies that were soon replaced by authoritarian regimes. For example, Syria, after gaining independence from French mandatory control at the beginning of the Cold War, failed to consolidate its democracy, so it eventually collapsed and was replaced by a Ba'athist dictatorship.
Robert Dahl argued in On Democracy that foreign interventions contributed to democratic failures, citing Soviet interventions in Central and Eastern Europe and U.S. interventions in Latin America. However, the delegitimization of empires contributed to the emergence of democracy as former colonies gained independence and implemented democracy.
Geographic factors
Some scholars link the emergence and sustenance of democracies to areas with access to the sea, which tends to increase the mobility of people, goods, capital, and ideas.
Historical factors
Historical legacies
In seeking to explain why North America developed stable democracies and Latin America did not, Seymour Martin Lipset, in The Democratic Century (2004), holds that the reason is that the initial patterns of colonization, the subsequent processes of economic incorporation of the new colonies, and the wars of independence differed. The divergent histories of Britain and Iberia are seen as creating different cultural legacies that affected the prospects of democracy. A related argument is presented by James A. Robinson in "Critical Junctures and Developmental Paths" (2022).
Sequencing and causality
Scholars have discussed whether the order in which things happen helps or hinders the process of democratization. An early discussion occurred in the 1960s and 1970s. Dankwart Rustow argued that "'the most effective sequence' is the pursuit of national unity, government authority, and political equality, in that order." Eric Nordlinger and Samuel Huntington stressed "the importance of developing effective governmental institutions before the emergence of mass participation in politics." Robert Dahl, in Polyarchy: Participation and Opposition (1971), held that the "commonest sequence among the older and more stable polyarchies has been some approximation of the ... path [in which] competitive politics preceded expansion in participation."
In the 2010s, the discussion focused on the impact of the sequencing between state building and democratization. Francis Fukuyama, in Political Order and Political Decay (2014), echoes Huntington's "state-first" argument and holds that those "countries in which democracy preceded modern state-building have had much greater problems achieving high-quality governance." This view has been supported by Sheri Berman, who offers a sweeping overview of European history and concludes that "sequencing matters" and that "without strong states...liberal democracy is difficult if not impossible to achieve."
However, this state-first thesis has been challenged. Relying on a comparison of Denmark and Greece, and quantitative research on 180 countries across 1789–2019, Haakon Gjerløw, Carl Henrik Knutsen, Tore Wig, and Matthew C. Wilson, in One Road to Riches? (2022), "find little evidence to support the stateness-first argument." Based on a comparison of European and Latin American countries, Sebastián Mazzuca and Gerardo Munck, in A Middle-Quality Institutional Trap (2021), argue that counter to the state-first thesis, the "starting point of political developments is less important than whether the State–democracy relationship is a virtuous cycle, triggering causal mechanisms that reinforce each other."
Examining sequences of democratization across many countries, Morrison et al. found that elections were the most frequent first element of the sequence, but that this ordering does not necessarily predict successful democratization.
The democratic peace theory claims that democracy causes peace, while the territorial peace theory claims that peace causes democracy.
Further reading
Key works
Acemoglu, Daron, and James A. Robinson. 2006. Economic Origins of Dictatorship and Democracy. New York, NY: Cambridge University Press.
Albertus, Michael and Victor Menaldo. 2018. Authoritarianism and the Elite Origins of Democracy. New York: Cambridge University Press.
Berman, Sheri. 2019. Democracy and Dictatorship in Europe: From the Ancien Régime to the Present Day. New York: Oxford University Press.
Boix, Carles. 2003. Democracy and Redistribution. New York: Cambridge University Press.
Brancati, Dawn. 2016. Democracy Protests: Origins, Features and Significance. New York: Cambridge University Press.
Carothers, Thomas. 1999. Aiding Democracy Abroad: The Learning Curve. Washington, DC: Carnegie Endowment for International Peace.
Collier, Ruth Berins. 1999. Paths Toward Democracy: Working Class and Elites in Western Europe and South America. New York: Cambridge University Press.
Coppedge, Michael, Amanda Edgell, Carl Henrik Knutsen, and Staffan I. Lindberg (eds.). 2022. Why Democracies Develop and Decline. New York, NY: Cambridge University Press.
Fukuyama, Francis. 2014. Political Order and Political Decay: From the Industrial Revolution to the Globalization of Democracy. New York: Farrar, Straus and Giroux.
Hadenius, Axel. 2001. Institutions and Democratic Citizenship. Oxford: Oxford University Press.
Haggard, Stephan and Robert Kaufman. 2016. Dictators and Democrats: Elites, Masses, and Regime Change. Princeton, NJ: Princeton University Press.
Inglehart, Ronald and Christian Welzel. 2005. Modernization, Cultural Change and Democracy: The Human Development Sequence. New York: Cambridge University Press.
Levitsky, Steven, and Lucan A. Way. 2010. Competitive Authoritarianism: Hybrid Regimes After the Cold War. New York, NY: Cambridge University Press.
Linz, Juan J., and Alfred Stepan. 1996. Problems of Democratic Transition and Consolidation: Southern Europe, South America and Post-Communist Europe. Baltimore, MD: The Johns Hopkins University Press.
Lipset, Seymour Martin. 1959. "Some Social Requisites of Democracy: Economic Development and Political Legitimacy." American Political Science Review 53(1): 69–105.
Mainwaring, Scott, and Aníbal Pérez-Liñán. 2014. Democracies and Dictatorships in Latin America: Emergence, Survival, and Fall. New York: Cambridge University Press.
Møller, Jørgen and Svend-Erik Skaaning (eds.). 2016. The State-Democracy Nexus: Conceptual Distinctions, Theoretical Perspectives, and Comparative Approaches. London: Routledge.
O'Donnell, Guillermo, and Philippe C. Schmitter. 1986. Transitions from Authoritarian Rule: Tentative Conclusions about Uncertain Democracies. Baltimore, MD: The Johns Hopkins University Press.
Przeworski, Adam. 1991. Democracy and the Market: Political and Economic Reforms in Eastern Europe and Latin America. New York, NY: Cambridge University Press.
Przeworski, Adam, Michael E. Alvarez, José Antonio Cheibub, and Fernando Limongi. 2000. Democracy and Development: Political Institutions and Well-Being in the World, 1950–1990. New York, NY: Cambridge University Press.
Rosenfeld, Bryn. 2020. The Autocratic Middle Class: How State Dependency Reduces the Demand for Democracy. Princeton, NJ: Princeton University Press.
Schaffer, Frederic C. 1998. Democracy in Translation: Understanding Politics in an Unfamiliar Culture. Ithaca, NY: Cornell University Press.
Teele, Dawn Langan. 2018. Forging the Franchise: The Political Origins of the Women's Vote. Princeton, NJ: Princeton University Press.
Teorell, Jan. 2010. Determinants of Democratization: Explaining Regime Change in the World, 1972–2006. New York, NY: Cambridge University Press.
Tilly, Charles. 2004. Contention and Democracy in Europe, 1650–2000. New York: Cambridge University Press.
Tilly, Charles. 2007. Democracy. New York: Cambridge University Press.
Vanhanen, Tatu. 2003. Democratization: A Comparative Analysis of 170 Countries. Routledge.
Welzel, Christian. 2013. Freedom Rising: Human Empowerment and the Quest for Emancipation. New York: Cambridge University Press.
Weyland, Kurt. 2014. Making Waves: Democratic Contention in Europe and Latin America since the Revolutions of 1848. New York: Cambridge University Press.
Zakaria, Fareed. 2003. The Future of Freedom: Illiberal Democracy at Home and Abroad. New York: W.W. Norton.
Ziblatt, Daniel. 2017. Conservative Parties and the Birth of Democracy. New York: Cambridge University Press.
Overviews of the research
Bunce, Valerie. 2000. "Comparative Democratization: Big and Bounded Generalizations." Comparative Political Studies 33(6–7): 703–34.
Cheibub, José Antonio, and James Raymond Vreeland. 2018. "Modernization Theory: Does Economic Development Cause Democratization?" pp. 3–21, in Carol Lancaster and Nicolas van de Walle (eds.), Oxford Handbook of the Politics of Development. New York, NY: Oxford University Press.
Coppedge, Michael. 2012. Democratization and Research Methods. New York, NY: Cambridge University Press.
Geddes, Barbara. 1999. "What Do We Know About Democratization After Twenty Years?" Annual Review of Political Science 2:1, 115–144.
Mazzuca, Sebastián. 2010. "Macrofoundations of Regime Change: Democracy, State Formation, and Capitalist Development." Comparative Politics 43(1): 1–19.
Møller, Jørgen, and Svend-Erik Skaaning. 2013. Democracy and Democratization in Comparative Perspective: Conceptions, Conjunctures, Causes and Consequences. London, UK: Routledge.
Munck, Gerardo L. 2015. "Democratic Transitions," pp. 97–100, in James D. Wright (ed.), International Encyclopedia of the Social and Behavioral Sciences 2nd edn., Vol. 6. Oxford, UK: Elsevier Science.
Potter, David. 1997. "Explaining Democratization," pp. 1–40, in David Potter, David Goldblatt, Margaret Kiloh, and Paul Lewis (eds.), Democratization. Cambridge, UK: Polity Press and The Open University.
Welzel, Christian. 2009. "Theories of Democratization", pp. 74–91, in Christian W. Haerpfer, Patrick Bernhagen, Ronald F. Inglehart, and Christian Welzel (eds.), Democratization. Oxford, UK: Oxford University Press.
Wucherpfennig, Julian, and Franziska Deutsch. 2009. "Modernization and Democracy: Theories and Evidence Revisited." Living Reviews in Democracy 1: 1–9.
External links
International IDEA (International Institute for Democracy and Electoral Assistance)
Muno, Wolfgang. 2012. "Democratization". InterAmerican Wiki: Terms – Concepts – Critical Perspectives.
Podcast: Democracy Paradox, hundreds of interviews with democracy experts around the world
Historical negationism
Historical negationism, also called historical denialism, is falsification or distortion of the historical record. This is not the same as historical revisionism, a broader term that extends to newly evidenced, fairly reasoned academic reinterpretations of history. In attempting to revise and influence the past, historical negationism acts as illegitimate historical revisionism by using techniques inadmissible in proper historical discourse, such as presenting known forged documents as genuine, inventing ingenious but implausible reasons for distrusting genuine documents, attributing conclusions to books and sources that report the opposite, manipulating statistical series to support the given point of view, and deliberately mistranslating traditional or modern texts.
Some countries, such as Germany, have criminalized the negationist revision of certain historical events, while others take a more cautious position for various reasons, such as protection of free speech. Other jurisdictions have in the past mandated negationist views; in the US state of California, for example, it is claimed that some schoolchildren have been explicitly prevented from learning about the California genocide. Notable examples of negationism include denials of the Holocaust, the Nakba, the Holodomor, the Armenian genocide, the Lost Cause of the Confederacy, and the clean Wehrmacht myth. In literature, it has been imaginatively depicted in some works of fiction, such as Nineteen Eighty-Four by George Orwell. In modern times, negationism may spread via political or religious agendas through state media, mainstream media, and new media such as the Internet.
Origin of the term
The term negationism (négationnisme) was first coined by the French historian Henry Rousso in his 1987 book The Vichy Syndrome which looked at the French popular memory of Vichy France and the French Resistance. Rousso posited that it was necessary to distinguish between legitimate historical revisionism in Holocaust studies and politically motivated denial of the Holocaust, which he termed negationism.
Purposes
Usually, the purpose of historical negationism is to achieve a national or political aim by transferring war guilt, demonizing an enemy, providing an illusion of victory, or preserving a friendship.
Ideological influence
The principal function of negationist history is to exert ideological and political influence.
History is a social resource that contributes to shaping national identity, culture, and the public memory. Through the study of history, people are imbued with a particular cultural identity; therefore, by negatively revising history, the negationist can craft a specific, ideological identity. Because historians are credited as people who single-mindedly pursue truth by way of fact, negationist historians capitalize on the historian's professional credibility and present their pseudohistory as true scholarship. This borrowed credibility makes the ideas of the negationist historian more readily accepted in the public mind. Professional historians, however, recognize the revisionist practice of historical negationism as the work of self-styled "truth-seekers" who find different truths in the historical record to fit their political, social, and ideological contexts.
Political influence
History provides insight into past political policies and consequences, and thus assists people in extrapolating political implications for contemporary society. Historical negationism is applied to cultivate a specific political myth, sometimes with official consent from the government, whereby self-taught, amateur, and dissident academic historians either manipulate or misrepresent historical accounts to achieve political ends. For example, after the late 1930s in the Soviet Union, the ideology of the Communist Party of the Soviet Union and historiography in the Soviet Union treated reality and the party line as the same intellectual entity, especially with regard to the Russian Civil War and peasant rebellions; Soviet historical negationism advanced a specific political and ideological agenda about Russia and its place in world history.
Techniques
Historical negationism applies the techniques of research, quotation, and presentation to deceive the reader and to deny the historical record. In support of the "revised history" perspective, the negationist historian uses false documents as genuine sources, presents specious reasons to distrust genuine documents, exploits published opinions by quoting out of historical context, manipulates statistics, and mistranslates texts in other languages. The revision techniques of historical negationism operate in the intellectual space of public debate for the advancement of a given interpretation of history and the cultural perspective of the "revised history". As a document, the revised history is used to negate the validity of the factual, documentary record, and so reframe explanations and perceptions of the discussed historical event, to deceive the reader, the listener, and the viewer; therefore, historical negationism functions as a technique of propaganda. Rather than submit their works for peer review, negationist historians rewrite history and use logical fallacies to construct arguments that will obtain the desired results, a "revised history" that supports an agenda – political, ideological, religious, etc.
In the practice of historiography, the British historian Richard J. Evans describes the technical differences between professional historians and negationist historians: "Reputable and professional historians do not suppress parts of quotations from documents that go against their own case, but take them into account, and, if necessary, amend their own case, accordingly. They do not present, as genuine, documents which they know to be forged, just because these forgeries happen to back up what they are saying. They do not invent ingenious, but implausible, and utterly unsupported reasons for distrusting genuine documents, because these documents run counter to their arguments; again, they amend their arguments, if this is the case, or, indeed, abandon them altogether. They do not consciously attribute their own conclusions to books and other sources, which, in fact, on closer inspection, actually say the opposite. They do not eagerly seek out the highest possible figures in a series of statistics, independently of their reliability, or otherwise, simply because they want, for whatever reason, to maximize the figure in question, but rather, they assess all the available figures, as impartially as possible, to arrive at a number that will withstand the critical scrutiny of others. They do not knowingly mistranslate sources in foreign languages to make them more serviceable to themselves. They do not willfully invent words, phrases, quotations, incidents and events, for which there is no historical evidence, to make their arguments more plausible."
Deception
Deception includes falsifying information, obscuring the truth, and lying to manipulate public opinion about the historical event discussed in the revised history. The negationist historian applies the techniques of deception to achieve either a political or an ideological goal, or both. The field of history distinguishes between history books based upon credible, verifiable sources that were peer-reviewed before publication and deceptive history books based upon unreliable sources that were not submitted for peer review. The distinction among types of history books rests upon the research techniques used in writing a history. Verifiability, accuracy, and openness to criticism are central tenets of historical scholarship. When these techniques are sidestepped, the presented historical information might be deliberately deceptive, a "revised history".
Denial
Denial is defensively protecting information from being shared with other historians and claiming that facts are untrue, especially denial of the war crimes and crimes against humanity perpetrated in the course of World War II (1939–1945) and the Holocaust (1933–1945). The negationist historian protects the historical-revisionism project by blame shifting, censorship, distraction, and media manipulation; occasionally, denial by protection includes risk management for the physical security of revisionist sources.
Relativization and trivialization
Relativization is the practice of comparing one historical atrocity to other crimes, interpreting it through moral judgement, in order to alter public perception of the first atrocity. Although such comparisons often occur in negationist history, making them does not usually reflect an intention to revise the historical facts themselves, but rather an opinion of moral judgement.
The Holocaust and Nazism: The historian Deborah Lipstadt says that the concept of "comparable Allied wrongs", such as the expulsion of Germans after World War II from Nazi-colonized lands and the formal Allied war crimes, is at the centre of, and is a continually repeated theme of, contemporary Holocaust denial, and that such relativization presents "immoral equivalencies".
Some proponents of the Lost Cause of the Confederacy cite historical examples of non-chattel slavery in discussions of the role of the Southern slave system; insofar as this serves that ideological point, however, it obscures and downplays the specificities of the American slave system, both its place in history and how it compares with other such systems. For example, court decisions and statutes mandated multi-generational slavery, unlike many other slave systems, and further held that a freedman could never become a citizen of the United States. These measures of perpetuation and racist characterization, together with further restrictions on life outside slavery in support of the institution, distinguished the system from all, or nearly all, other historical slave systems.
Connected to the Lost Cause of the Confederacy is the Irish slaves myth, a pseudo-historical narrative which conflates the experiences of Irish indentured servants and enslaved Africans in the Americas. This distortion, historically promoted by Irish nationalists such as John Mitchel, has in the modern day been promoted by white supremacists in the United States to negate the mistreatment experienced by African Americans (such as racism and segregation) and to oppose slavery reparations.
Examples
Book burning
Repositories of literature have been targeted throughout history (e.g., the Grand Library of Baghdad, and the burning of liturgical and historical books of the St. Thomas Christians by Aleixo de Menezes, the archbishop of Goa), including recently in the 1981 burning of the Jaffna library and the destruction of Iraqi libraries by ISIS during the fall of Mosul in 2014. Similarly, British officials destroyed documents in Operation Legacy to prevent records on colonial rule from falling into the hands of countries declaring independence from Britain, and to avoid scrutiny of the British state.
Chinese book burning
The Burning of books and burying of scholars, or "Fires of Qin", refers to the burning of writings and the slaughter of scholars during the Qin dynasty of ancient China, between 213 and 210 BC. "Books" at this point refers to writings on bamboo strips, which were then bound together. The exact extent of the damage is hard to assess; technological books were to be spared, and even the "objectionable" books, poetry and philosophy in particular, were preserved in imperial archives and allowed to be kept by official scholars.
United States history
Confederate revisionism
The historical negationism of American Civil War revisionists and Neo-Confederates claims that the Confederate States (1861–1865) were the defenders rather than the instigators of the American Civil War, and that the Confederacy's motivation for secession from the United States was the maintenance of the Southern states' rights and limited government, rather than the preservation and expansion of chattel slavery.
Regarding Neo-Confederate revisionism of the U.S. Civil War, the historian Brooks D. Simpson says: "This is an active attempt to reshape historical memory, an effort by white Southerners to find historical justifications for present-day actions. The neo–Confederate movement's ideologues have grasped that if they control how people remember the past, they'll control how people approach the present and the future. Ultimately, this is a very conscious war for memory and heritage. It's a quest for legitimacy, the eternal quest for justification."
In the early 20th century, Mildred Rutherford, the historian general of the United Daughters of the Confederacy (UDC), led the attack against American history textbooks that did not present the "Lost Cause of the Confederacy" version of the history of the U.S. Civil War. To that pedagogical end, Rutherford assembled a "massive collection" of documents that included "essay contests on the glory of the Ku Klux Klan and personal tributes to faithful slaves". About the historical negationism of the United Daughters of the Confederacy, the historian David Blight says: "All UDC members and leaders were not as virulently racist as Rutherford, but all, in the name of a reconciled nation, participated in an enterprise that deeply influenced the white supremacist vision of Civil War memory."
California genocide
Between 1846 and 1873, following the conquest of California by the United States, the region's Indigenous Californian population plummeted from around 150,000 to around 30,000 due to disease, famine, forced removals, slavery, and massacres. Many historians refer to the massacres as the California genocide. Between 9,500 and 16,000 California Natives were killed by both government forces and white settlers in massacres during this period. Despite well-documented evidence of the widespread massacres and atrocities, the public school curriculum and history textbooks approved by the California Department of Education ignore the history of this genocide.
According to author Clifford Trafzer, although many historians have pushed for recognition of the genocide in public school curricula, government-approved textbooks omit mention of the genocide because of the dominance of conservative publishing companies with an ideological impetus to deny the genocide, the fear of publishing companies being branded as un-American for discussing it, and the unwillingness of state and federal government officials to acknowledge the genocide due to the possibility of having to pay reparations to indigenous communities affected by it.
War crimes
Japanese war crimes
The post-war minimization of the war crimes of Japanese imperialism is an example of "illegitimate" historical revisionism; some contemporary Japanese revisionists, such as Yūko Iwanami (granddaughter of General Hideki Tojo), propose that Japan's invasion of China, and World War II itself, were justified reactions to the Western imperialism of the time. On 2 March 2007, Japanese prime minister Shinzō Abe denied that the military had forced women into sexual slavery during the war, saying, "The fact is, there is no evidence to prove there was coercion". Before he spoke, some Liberal Democratic Party legislators also sought to revise Yōhei Kōno's 1993 apology to former comfort women; likewise, there was the controversial negation of the six-week Nanking Massacre of 1937–1938.
Shinzō Abe was general secretary of a group of parliament members concerned with history education that is associated with the Japanese Society for History Textbook Reform, and was a special advisor to Nippon Kaigi; both are openly revisionist groups denying, downplaying, or justifying various Japanese war crimes. Tsuneo Watanabe, editor-in-chief of the conservative Yomiuri Shimbun, criticized the Yasukuni Shrine as a bastion of revisionism: "The Yasukuni Shrine runs a museum where they show items in order to encourage and worship militarism. It's wrong for the prime minister to visit such a place". Other critics note that men who would today be perceived as "Korean" or "Chinese" are enshrined there for the military actions they carried out as Japanese Imperial subjects.
Hiroshima and Nagasaki bombings
The Hibakusha ("explosion-affected people") of Hiroshima and Nagasaki seek compensation from their government and criticize it for failing to "accept responsibility for having instigated and then prolonged an aggressive war long after Japan's defeat was apparent, resulting in a heavy toll in Japanese, Asian and American lives". E. B. Sledge expressed concern that such revisionism, in his words "mellowing", would allow the harsh facts of the history that led to the bombings to be forgotten. Historians Hill and Koshiro have stated that attempts to minimize the importance of the bombings as "righteous revenge and salvation" would be revisionism, and that while the Japanese should recognize that their atrocities led to the bombings, Americans also have to accept the fact that their own actions "caused massive destruction and suffering that has lasted for fifty years".
Croatian war crimes in World War II
Some Croats, including some high-ranking officials and political leaders during the 1990s and far-right organization members, have attempted to minimize the magnitude of the genocide perpetrated against Serbs and other ethnic minorities in the Independent State of Croatia, the World War II puppet state of Nazi Germany. By 1989, the future President of Croatia Franjo Tuđman (who had been a Partisan during World War II) had embraced Croatian nationalism and published Horrors of War: Historical Reality and Philosophy, in which he questioned the official number of victims killed by the Ustaše during the Second World War, particularly at the Jasenovac concentration camp. Yugoslav and Serbian historiography had long exaggerated the number of victims at the camp. Tuđman criticized the long-standing figures, but also described the camp as a "work camp", giving an estimate of between 30,000 and 40,000 deaths. His government's toleration of Ustaša symbols, and its frequent public dismissal of Ustaša crimes, strained relations with Israel.
Croatia's far-right often advocates the false theory that Jasenovac was a "labour camp" where mass murder did not take place. In 2017, two videos from 1992 were made public in which former Croatian president Stjepan Mesić stated that Jasenovac was not a death camp. The far-right NGO "The Society for Research of the Threefold Jasenovac Camp" also advocates this disproven theory, in addition to claiming that the camp was used by the Yugoslav authorities following the war to imprison Ustasha members and regular Home Guard army troops until 1948, and then alleged Stalinists until 1951. Its members include the journalist Igor Vukić, who wrote his own book advocating the theory, the Catholic priest Stjepan Razum, and the academic Josip Pečarić. The ideas promoted by its members have been amplified by mainstream media interviews and book tours. Vukić's latest book, The Jasenovac Lie Revealed, prompted the Simon Wiesenthal Center to urge Croatian authorities to ban such works, noting that they "would immediately be banned in Germany and Austria and rightfully so". In 2016, Croatian filmmaker Jakov Sedlar released a documentary, Jasenovac – The Truth, which advocated the same theories, labelling the camp a "collection and labour camp". The film contained alleged falsifications and forgeries, in addition to denial of crimes and hate speech towards politicians and journalists.
Serbian war crimes in World War II
Among far-right and nationalist groups, denial and revisionism of Serbian war crimes are carried out by a number of Serbian historians through the downplaying of Milan Nedić's and Dimitrije Ljotić's roles in the extermination of Serbia's Jews in concentration camps in the German-occupied Territory of the Military Commander in Serbia. Serbian collaborationist armed forces were involved, either directly or indirectly, in the mass killings of Jews, as well as of Roma and those Serbs who sided with any anti-German resistance, and in the killing of many Croats and Muslims. Since the end of the war, Serbian collaboration in the Holocaust has been the subject of historical revisionism by Serbian leaders. In 1993, the Serbian Academy of Sciences and Arts listed Nedić among The 100 most prominent Serbs. There is also denial of Chetnik collaboration with Axis forces and of crimes committed during World War II. The Serbian historian Jelena Djureinovic states in her book The Politics of Memory of the Second World War in Contemporary Serbia: Collaboration, Resistance and Retribution that "during those years, the WWII nationalist Chetniks have been recast as an anti-fascist movement equivalent to Tito's Partisans, and as victims of communism". The glorification of the Chetnik movement has now become the central theme of Serbia's WWII memory politics. Chetnik leaders convicted under communist rule of collaboration with the Nazis have been rehabilitated by Serbian courts, and television programmes have contributed to spreading a positive image of the movement, "distorting the real picture of what happened during WWII".
Serbian war crimes in the Yugoslav wars
A number of far-right and nationalist authors and political activists have publicly disagreed with mainstream views of Serbian war crimes in the Yugoslav wars of 1991–1999. High-ranking Serbian officials and political leaders who have categorically claimed that no genocide against Bosnian Muslims took place include former president of Serbia Tomislav Nikolić, Bosnian Serb leader Milorad Dodik, Serbian Minister of Defence Aleksandar Vulin, and Serbian far-right leader Vojislav Šešelj. Among the points of contention are whether the victims of massacres such as the Račak massacre and the Srebrenica massacre were unarmed civilians or armed resistance fighters, whether death and rape tolls were inflated, and whether prison camps such as the Sremska Mitrovica camp were sites of mass war crimes. These authors are called "revisionists" by scholars and organizations such as the ICTY.
The Report about Case Srebrenica by Darko Trifunovic, commissioned by the government of the Republika Srpska, was described by the International Criminal Tribunal for the former Yugoslavia as "one of the worst examples of revisionism in relation to the mass executions of Bosnian Muslims committed in Srebrenica in July 1995". Outrage and condemnation by a wide variety of Balkan and international figures eventually forced the Republika Srpska to disown the report. In 2017, legislation banning the teaching of the Srebrenica genocide and the Sarajevo siege in schools was introduced in Republika Srpska, initiated by President Milorad Dodik and his SNSD party, who stated that it was "impossible to use here the textbooks ... which say the Serbs have committed genocide and kept Sarajevo under siege. This is not correct and this will not be taught here". In 2019, Republika Srpska authorities appointed the Israeli historian Gideon Greif – who has worked at Yad Vashem for more than three decades – to head their own revisionist commission to "determine the truth" about Srebrenica.
Massacres of Poles in Volhynia and Eastern Galicia
The issue of the Volyn massacres was largely absent from Ukrainian scholarly literature for many years, and until very recently, Ukrainian historiography did not undertake any objective research of the events in Volyn. Until 1991, any independent Ukrainian historical research was only possible abroad, mainly in the US and the Canadian diaspora. Despite publishing a number of works devoted to the history of the UPA, the Ukrainian emigration researchers (with only a few exceptions) remained completely silent about the Volyn events for many years. Until very recently, much of the remaining documentation was closed in Ukrainian state archives, unavailable to researchers. As a result, Ukrainian historiography lacks broader reliable research of the events, and the presence of the issue in Ukrainian publications is still very limited. The young generation of Ukrainian historians is often infected with Ukrainocentrism, and often borrows stereotypes and myths about Poland and Poles from the biased publications of the Ukrainian diaspora.
In September 2016, after Poland's Sejm had passed a resolution declaring 11 July a National Day of Remembrance of the victims of the Genocide of the Citizens of the Polish Republic committed by Ukrainian Nationalists and formally called the Massacres of Poles in Volhynia and Eastern Galicia a genocide, the Verkhovna Rada of Ukraine passed a resolution condemning "the one-sided political assessment of the historical events in Poland", rejecting the term "genocide".
Indonesian mass killings of 1965–66
Discussion of the killings was taboo in Indonesia and, if mentioned at all, they were usually called peristiwa enam lima, the incident of '65. Inside and outside Indonesia, public discussion of the killings increased during the 1990s and especially after 1998, when the New Order government collapsed. Jailed and exiled members of the Sukarno regime, as well as ordinary people, told their stories in increasing numbers. Foreign researchers began to publish more on the topic with the end of the military regime, whose doctrine had coerced such research attempts into futility.
The killings are skipped over in most Indonesian histories, have been scarcely examined by Indonesians, and have received comparatively little international attention. Indonesian textbooks typically depict the killings as a "patriotic campaign" that resulted in fewer than 80,000 deaths. In 2004, the textbooks were briefly changed to include the events, but this new curriculum was discontinued in 2006 following protests from the military and Islamic groups. The textbooks which mentioned the mass killings were subsequently burnt by order of Indonesia's Attorney General. John Roosa's Pretext for Mass Murder (2006) was initially banned by the Attorney General's Office. The Indonesian parliament set up a truth and reconciliation commission to analyse the killings, but it was suspended by the Indonesian High Court. An academic conference regarding the killings was held in Singapore in 2009. A hesitant search for mass graves by survivors and family members began after 1998, although little has been found. Over three decades later, great enmity remains in Indonesian society over the events.
Turkey and the Armenian genocide
Turkish laws such as Article 301, which states that "a person who publicly insults Turkishness, or the Republic or [the] Turkish Grand National Assembly of Turkey, shall be punishable by imprisonment", were used to criminally charge the writer Orhan Pamuk with disrespecting Turkey for saying that "Thirty thousand Kurds, and a million Armenians, were killed in these lands, and nobody, but me, dares to talk about it". The controversy occurred as Turkey was first vying for membership in the European Union (EU), where the suppression of dissenters is frowned upon. Article 301 was originally part of penal-law reforms meant to modernize Turkey to European Union standards, as part of negotiating Turkey's accession to the EU. In 2006, the charges were dropped due to pressure from the European Union and the United States on the Turkish government.
On 7 February 2006, five journalists were tried for insulting the judicial institutions of the State, and for aiming to prejudice a court case (per Article 288 of the Turkish penal code). The reporters were on trial for criticizing the court-ordered closing of a conference in Istanbul regarding the Armenian genocide during the time of the Ottoman Empire. The conference continued elsewhere, transferring locations from a state to a private university. The trial continued until 11 April 2006, when four of the reporters were acquitted. The case against the fifth journalist, Murat Belge, proceeded until 8 June 2006, when he was also acquitted. The purpose of the conference was to critically analyse the official Turkish view of the Armenian genocide in 1915; a taboo subject in Turkey. The trial proved to be a test case between Turkey and the European Union; the EU insisted that Turkey should allow increased freedom of expression rights, as a condition to membership.
South Korean war crimes in Vietnam
At the request of the United States, South Korea under Park Chung Hee sent approximately 320,000 South Korean troops to fight alongside the United States and South Vietnam during the Vietnam War. Various civilian groups have accused the South Korean military of many "My Lai-style massacres", while the Korean Ministry of Defense has denied all such accusations. Korean forces are alleged to have perpetrated the Binh Tai, Bình An/Tây Vinh, Bình Hòa, and Hà My massacres, and several other massacres across Vietnam, killing as many as 9,000 Vietnamese civilians.
In 2023, a South Korean court ruled in favour of a Vietnamese victim of South Korean atrocities during the war and ordered the South Korean government to compensate the surviving victim. In response, the South Korean government repeated its earlier denials of the atrocities and later announced its appeal of the decision. This strained relations with Vietnam, as a spokesperson for Vietnam's foreign ministry called the decision "extremely regrettable".
Iran
The Islamic Republic of Iran uses historical negationism against religious minorities in order to maintain the legitimacy and relevancy of the regime. One example is the regime's approach to the Baháʼí community: in 2008, an erroneous and misleading biography of the Báb was presented to all primary school children.
In his official 2013 Nowruz address, Supreme Leader of Iran Grand Ayatollah Ali Khamenei questioned the veracity of the Holocaust, remarking that "The Holocaust is an event whose reality is uncertain and if it has happened, it's uncertain how it has happened." This was consistent with Khamenei's previous comments regarding the Holocaust.
Soviet and Russian history
In his book The Stalin School of Falsification, Leon Trotsky cited a range of historical documents, such as private letters, telegrams, party speeches, meeting minutes, and suppressed texts such as Lenin's Testament, to argue that the Stalinist faction routinely distorted political events, forged a theoretical basis for irreconcilable concepts such as the notion of "Socialism in One Country", and misrepresented the views of opponents. He also argued that the Stalinist regime employed an array of professional historians as well as economists to justify policy manoeuvring and to safeguard its own set of material interests.
During the existence of the Russian Soviet Federative Socialist Republic (1917–1991) and the Soviet Union (1922–1991), the Communist Party of the Soviet Union (CPSU) attempted to ideologically and politically control the writing of both academic and popular history. These attempts were most successful in the 1934–1952 period. According to Klaus Mehnert, writing in 1952, the Soviet government attempted to control academic historiography (the writing of history by academic historians) to promote ideological and ethno-racial imperialism by Russians. During the 1928–1956 period, modern and contemporary history was generally composed according to the wishes of the CPSU, not the requirements of accepted historiographic method.
During and after the rule of Nikita Khrushchev (1956–1964), Soviet historiographic practice was more complicated. In this period, Soviet historiography was characterized by complex competition between Stalinist and anti-Stalinist Marxist historians. To avoid the professional hazard of politicized history, some historians chose pre-modern, medieval history or classical history, where ideological demands were relatively relaxed and conversation with other historians in the field could be fostered. Prescribed ideology could still introduce biases in historians' work, but not all of Soviet historiography was affected. Control over party history and the legal status of individual ex-party members played a large role in dictating the ideological diversity and thus the faction in power within the CPSU. The official History of the Communist Party of the Soviet Union (Bolsheviks) was revised to delete references to leaders purged from the party, especially during the rule of Joseph Stalin (1922–1953).
In the historiography of the Cold War, a controversy over negationist historical revisionism exists, where numerous revisionist scholars in the West have been accused of whitewashing the crimes of Stalinism, overlooking the Katyn massacre in Poland, disregarding the validity of the Venona Project messages with regards to Soviet espionage in the United States, as well as the denial of the Holodomor of 1932–1933.
In 2009, Russia established the Presidential Commission of the Russian Federation to Counter Attempts to Falsify History to the Detriment of Russia's Interests to "defend Russia against falsifiers of history". Some critics, like Heorhiy Kasyanov from the National Academy of Sciences of Ukraine, said the Kremlin was trying to whitewash Soviet history in order to justify its denial of human rights: "It's part of the Russian Federation's policy to create an ideological foundation for what is happening in Russia right now." The historian and author Orlando Figes, a professor at the University of London, who viewed the new commission as part of a clampdown on historical scholarship, stated: "They're idiots if they think they can change the discussion of Soviet history internationally, but they can make it hard for Russian historians to teach and publish. It's like we're back to the old days." The commission was disestablished in 2012.
Azerbaijan
In relation to Armenia
Many scholars, among them Victor Schnirelmann, Willem Floor, Robert Hewsen, and George Bournoutian, state that in Soviet and post-Soviet Azerbaijan, there has been a practice since the 1960s of revising primary sources on the South Caucasus in which any mention of Armenians is removed. In the revised texts, "Armenian" is either simply removed or replaced by "Albanian"; there are many other examples of such falsifications, all of which have the purpose of creating the impression that historically Armenians were not present in this territory. Willem M. Floor and Hasan Javadi, in the English edition of The Heavenly Rose-Garden: A History of Shirvan & Daghestan by Abbasgulu Bakikhanov, specifically point out instances of distortions and falsifications made by Ziya Bunyadov in his Russian translation of this book. According to Bournoutian and Hewsen, these distortions are widespread in such works; they thus advise readers in general to avoid books produced in Azerbaijan in Soviet and post-Soviet times if those books do not contain facsimile copies of the original sources. Schnirelmann holds that this practice is carried out in Azerbaijan under state order. Philip L. Kohl cites a theory advanced by the Azerbaijani archaeologist Akhundov about the Albanian origin of khachkars as an example of a patently false cultural-origin myth.
The Armenian cemetery in Julfa, a cemetery near the town of Julfa, in the Nakhchivan exclave of Azerbaijan originally housed around 10,000 funerary monuments. The tombstones consisted mainly of thousands of khachkars, uniquely decorated cross-stones characteristic of medieval Christian Armenian art. The cemetery was still standing in the late 1990s, when the government of Azerbaijan began a systematic campaign to destroy the monuments. After studying and comparing satellite photos of Julfa taken in 2003 and 2009, the American Association for the Advancement of Science came to the conclusion in December 2010 that the cemetery was demolished and levelled. After the director of the Hermitage Museum Mikhail Piotrovsky expressed his protest about the destruction of Armenian khachkars in Julfa, he was accused by Azerbaijanis of supporting the "total falsification of the history and culture of Azerbaijan". Several appeals were filed by both Armenian and international organizations, condemning the Azerbaijani government and calling on it to desist from such activity. In 2006, Azerbaijan barred European Parliament members from investigating the claims, charging them with a "biased and hysterical approach" to the issue and stating that it would only accept a delegation if it visited Armenian-occupied territory as well. In the spring of 2006, a journalist from the Institute for War and Peace Reporting who visited the area reported that no visible traces of the cemetery remained. In the same year, photographs taken from Iran showed that the cemetery site had been turned into a military shooting range. The destruction of the cemetery has been widely described by Armenian sources, and some non-Armenian sources, as an act of "cultural genocide."
In Azerbaijan, the Armenian genocide is officially denied and is considered a hoax. According to the state ideology of Azerbaijan, a genocide of Azerbaijanis, carried out by Armenians and Russians, took place starting from 1813. Yagub Mahmudov, institute director at the Azerbaijan National Academy of Sciences, has claimed that Armenians first appeared in Karabakh in 1828. Azerbaijani academics and politicians have claimed that foreign historians falsify the history of Azerbaijan, and criticism was directed towards a Russian documentary about the regions of Karabakh and Nakhchivan and the historical Armenian presence in these areas. According to Mahmudov, prior to 1918 "there was never an Armenian state in the South Caucasus", and Ilham Aliyev's statement that "Irevan is our [Azerbaijan's] historic land, and we, Azerbaijanis must return to these historic lands" was based on "historical facts" and "historical reality". Mahmudov also stated that the claim that Armenians are the most ancient people in the region is based on propaganda, and said that Armenians are non-natives of the region, having only arrived in the area after Russian victories over Iran and the Ottoman Empire in the first half of the 19th century. He also said: "The Azerbaijani soldier should know that the land under the feet of provocative Armenians is Azerbaijani land. The enemy can never defeat Azerbaijanis on Azerbaijani soil. Those who rule the Armenian state today must fundamentally change their political course. The Armenians cannot defeat us by sitting in our historic city of Irevan."
In relation to Iran
Historical falsifications in Azerbaijan relating to Iran and its history are "backed by state and state backed non-governmental organizational bodies", ranging "from elementary school all the way to the highest level of universities". As a result of the two Russo-Iranian Wars of the 19th century, the border between present-day Iran and the Republic of Azerbaijan was formed. Although there had not been a historical Azerbaijani state to speak of, the demarcation, set at the Aras river, left significant numbers of what were later coined "Azerbaijanis" north of the Aras river. During the existence of the Azerbaijan SSR, as a result of Soviet-era historical revisionism and myth-building, the notion of a "northern" and a "southern" Azerbaijan was formulated and spread throughout the Soviet Union. During the Soviet nation-building campaign, any event, past or present, that had ever occurred in what is the present-day Azerbaijan Republic or Iranian Azerbaijan was rebranded as a phenomenon of "Azerbaijani culture". Any Iranian ruler or poet who had lived in the area was assigned to the newly rebranded identity of the Transcaucasian Turkophones, in other words "Azerbaijanis".
According to Michael P. Croissant: "It was charged that the 'two Azerbaijans', once united, were separated artificially by a conspiracy between imperial Russia and Iran". This notion, based on illegitimate historical revisionism, suited Soviet political purposes well (grounded in "anti-imperialism"), and became the basis for irredentism among Azerbaijani nationalists in the last years of the Soviet Union, shortly before the establishment of the Azerbaijan Republic in 1991.
In Azerbaijan, periods and aspects of Iranian history are usually claimed as an "Azerbaijani" product in a distortion of history, and historic Iranian figures, such as the Persian poet Nizami Ganjavi, are called "Azerbaijanis", contrary to universally acknowledged fact. In the Azerbaijan SSR, forgeries such as an alleged "Turkish divan" and falsified verses were published in order to "Turkify" Nizami Ganjavi. Although this type of irredentism was initially the result of the nation-building policy of the Soviets, it became an instrument for "biased, pseudo-academic approaches and political speculations" in the nationalistic aspirations of the young Azerbaijan Republic. In the modern Azerbaijan Republic, historiography is written with the aim of retroactively Turkifying many of the peoples and kingdoms that existed prior to the arrival of Turks in the region, including the Iranian Medes.
North Korea and the Korean War
Since the start of the Korean War (1950–1953), the government of North Korea has consistently denied that the Democratic People's Republic of Korea (DPRK) launched the attack with which it began the war for the Communist unification of Korea. The historiography of the DPRK maintains that the war was provoked by South Korea, at the instigation of the United States: "On June 17, Juche 39 [1950] the then U.S. President [Harry S.] Truman sent [John Foster] Dulles as his special envoy to South Korea to examine the anti-North war scenario and give an order to start the attack. On June 18, Dulles inspected the 38th parallel and the war preparations of the 'ROK Army' units. That day he told Syngman Rhee to start the attack on North Korea with the counter-propaganda that North Korea first 'invaded' the south."
Further North Korean pronouncements included the claim that the U.S. needed the peninsula of Korea as "a bridgehead, for invading the Asian continent, and as a strategic base, from which to fight against national-liberation movements and socialism, and, ultimately, to attain world supremacy." Likewise, the DPRK denied the war crimes committed by the Korean People's Army in the course of the war; nonetheless, in the 1951–1952 period, the Workers' Party of Korea (WPK) privately admitted to the "excesses" of their earlier campaign against North Korean citizens who had collaborated with the enemy – either actually or allegedly – during the US–South Korean occupation of North Korea. Later, the WPK blamed every wartime atrocity upon the U.S. Armed Forces, e.g. the Sinchon Massacre (17 October – 7 December 1950) occurred during the retreat of the DPRK government from Hwanghae Province, in the south-west of North Korea.
The campaign against "collaborators" was attributed to political and ideological manipulations by the U.S.; the high-ranking leader Pak Chang-ok said that the American enemy had "started to use a new method, namely, it donned a leftist garb, which considerably influenced the inexperienced cadres of the Party and government organs." Kathryn Weathersby's Soviet Aims in Korea and the Origins of the Korean War, 1945–1950: New Evidence from Russian Archives (1993) confirmed that the Korean War was launched by order of Kim Il Sung (1912–1994); and also refuted the DPRK's allegations of biological warfare in the Korean War. The Korean Central News Agency dismissed the historical record of Soviet documents as "sheer forgery".
Holocaust denial
Holocaust deniers usually reject the term Holocaust denier as an inaccurate description of their historical point of view, instead preferring the term Holocaust revisionist; nonetheless, scholars prefer "Holocaust denier" to differentiate deniers from legitimate historical revisionists, whose goal is to accurately analyse historical evidence with established methods. Historian Alan Berger reports that Holocaust deniers argue in support of a preconceived theory – that the Holocaust either did not occur or was mostly a hoax – by ignoring extensive historical evidence to the contrary.
When the author David Irving lost his English libel case against Deborah Lipstadt, and her publisher, Penguin Books, and thus was publicly discredited and identified as a Holocaust denier, the trial judge, Justice Charles Gray, concluded that "Irving has, for his own ideological reasons, persistently and deliberately misrepresented and manipulated historical evidence; that, for the same reasons, he has portrayed Hitler in an unwarrantedly favorable light, principally in relation to his attitude towards, and responsibility for, the treatment of the Jews; that he is an active Holocaust denier; that he is anti-semitic and racist, and that he associates with right-wing extremists who promote neo-Nazism."
On 20 February 2006, Irving was found guilty and sentenced to three years' imprisonment for Holocaust denial under Austria's 1947 law banning Nazi revivalism and criminalizing the "public denial, belittling or justification of National Socialist crimes". Besides Austria, eleven other countries – including Belgium, France, Germany, Lithuania, Poland, and Switzerland – have criminalized Holocaust denial as punishable with imprisonment.
North Macedonia
According to Eugene N. Borza, the Macedonians are in search of their past to legitimize their unsure present, in the disorder of Balkan politics. Ivaylo Dichev claims that Macedonian historiography has the impossible task of filling the huge gaps between the ancient kingdom of Macedon, which collapsed in the 2nd century BC, the 10th–11th-century state of the Cometopuli, and the Yugoslav Macedonia established in the middle of the 20th century.
According to Ulf Brunnbauer, modern Macedonian historiography is highly politicized, because the Macedonian nation-building process is still in development. The recent nation-building project imposes the idea of a "Macedonian nation" with unbroken continuity from antiquity (the Ancient Macedonians) to modern times, which has been criticized by some domestic and foreign scholars for ahistorically projecting modern ethnic distinctions into the past. In this way, generations of students were educated in pseudohistory.
Historiography in Africa
Rwandan genocide denial has proliferated in multiple contexts despite the fact that the mass killings took place amid widespread news coverage and later received detailed study during the International Criminal Tribunal for Rwanda (ICTR). Perpetrators of violent attacks against civilians in Rwanda, known as the "génocidaires", have been an element of this controversy. Detailed evidence of the planning, financing, and progress of the war crimes has been unearthed, yet campaigns of denial endure, given the influence of extremist ideologies surrounding ethnicity and race.
In May 2020, the Los Angeles Review of Books interviewed the legal advocate and writer Linda Melvern, who had assisted with ICTR-related prosecutions, on the topic. She concluded that the "pernicious influence" of the Hutu Power faction that enacted the widespread murders "lives on in rumor, stereotype, lies, and propaganda." She also remarked that the "movement's campaign of genocide denial has confused many, recruited some, and shielded others", such that with "the use of seemingly sound research methods, the génocidaires pose a threat, especially to those who might not be aware of the historical facts."
In the 21st century, increased international debate and discussion have only partially prevented efforts to obfuscate the facts surrounding the Sudanese genocide. In March 2010, Omer Ismail and John Prendergast wrote for the Christian Science Monitor warning of multiple distortions of reality with lasting implications, given the actions of the then Khartoum-based government. Specifically, they alleged that the state had "systematically denied access to the United Nations/African Union observer mission [personnel] to investigate attacks on civilians, so many of these attacks go unreported and the culpability remains mysterious."
Historical negationism constitutes a crime de jure within the territories of multiple African nations. For example, denying the Rwandan genocide has led to prosecutions in Rwanda. However, the negative social effects of disinformation and misinformation have in some cases expanded through modern media.
In textbooks
Japan
The history textbook controversy centres upon the secondary-school history textbook Atarashii Rekishi Kyōkasho ("New History Textbook"), said to minimize the nature of Japanese militarism in the First Sino-Japanese War (1894–1895), the annexation of Korea in 1910, the Second Sino-Japanese War (1937–1945), and the Pacific Theater of World War II (1941–1945). The conservative Japanese Society for History Textbook Reform commissioned the Atarashii Rekishi Kyōkasho with the purpose of presenting a traditional, nationalist view of that period of Japanese history. The Ministry of Education vets all history textbooks, especially those containing references to imperialist atrocities, under a special provision in the textbook examination rules meant to avoid inflaming controversy with neighbouring countries; nevertheless, the Atarashii Rekishi Kyōkasho de-emphasizes aggressive Japanese Imperial wartime behaviour and the matter of Chinese and Korean comfort women. The textbook refers to the Nanking Massacre only as the Nanking Incident, mentioning that there were civilian casualties without delving into specifics, and mentions it again in relation to the Tokyo tribunal, stating that there are multiple opinions about the topic and that controversy continues to this day (see Nanking massacre denial). In 2007, the Ministry of Education attempted to revise textbooks regarding the Battle of Okinawa, lessening the involvement of the Imperial Japanese Army in Okinawan civilian mass suicides.
Pakistan
Allegations of historical revisionism have been made regarding Pakistani textbooks, namely that they are laced with Indophobic, Hindu-hating, and Islamist bias. Pakistan's use of officially published textbooks has been criticized for using schools to more subtly foster religious extremism, whitewashing Muslim conquests on the Indian subcontinent, and promoting "expansive pan-Islamic imaginings" that "detect the beginnings of Pakistan in the birth of Islam on the Arabian peninsula". Since 2001, the Pakistani government has stated that curriculum reforms have been underway at the Ministry of Education.
South Korea
On 12 October 2015, South Korea's government announced controversial plans to put the history textbooks used in secondary schools under state control, despite concerns among the public and academics that the decision was made to glorify the history of those who served the Imperial Japanese government (Chinilpa) and of the authoritarian dictatorships that ruled South Korea during the 1960s–1980s. "This was an inevitable choice in order to straighten out historical errors and end the social dispute caused by ideological bias in the textbooks," education minister Hwang Woo-yea said on 12 October 2015. According to the government's plan, the current history textbooks of South Korea would be replaced by a single textbook written by a panel of government-appointed historians; the new series of publications would be issued under the title The Correct Textbook of History and distributed to public and private primary and secondary schools from 2017 onwards.
The move sparked fierce criticism from academics, who argue that the system can be used to distort history and glorify those who served the Imperial Japanese government (Chinilpa) and the authoritarian dictatorships. Moreover, 466 organizations, including the Korean Teachers and Education Workers Union, formed the History Act Network in solidarity and staged protests, arguing: "The government's decision allows the state too much control and power and, therefore, it is against political neutrality that is certainly the fundamental principle of education." Many South Korean historians condemned the publisher Kyohaksa for a text that glorified the Chinilpa and the authoritarian dictatorship from a far-right political perspective. On the other hand, New Right supporters welcomed the textbook, saying that "the new textbook finally describes historical truths contrary to the history textbooks published by left-wing publishers", and the textbook issue intensified into a case of ideological conflict. In Korean history, textbooks were once put under state control during the authoritarian regime of Park Chung Hee (1963–1979), the father of former President Park Geun-hye, and were used as a means of sustaining the Yushin Regime, also known as the Yushin Dictatorship; however, the system drew continuous criticism, especially from the 1980s, when South Korea experienced dramatic democratic development. In 2003, textbook reform began when textbooks on Korean modern and contemporary history were published for the first time through the Textbook Screening System, which allows textbooks to be published not by a single government body but by many different companies.
Turkey
Education in Turkey is centralized, and its policy, administration, and content are each determined by the Turkish government. Textbooks taught in schools are either prepared directly by the Ministry of National Education (MEB) or must be approved by its Instruction and Education Board. In practice, this means that the Turkish government is directly responsible for which textbooks are used in schools across Turkey. In 2014, Taner Akçam, writing for the Armenian Weekly, discussed 2014–2015 Turkish elementary and middle school textbooks that the MEB had made available on the internet. He found that Turkish history textbooks describe Armenians as people "who are incited by foreigners, who aim to break apart the state and the country, and who murdered Turks and Muslims." The Armenian genocide is referred to as the "Armenian matter", and is described as a lie perpetrated to further the perceived hidden agenda of Armenians. Recognition of the Armenian genocide is defined as the "biggest threat to Turkish national security".
Akçam summarized one textbook that claims the Armenians had sided with the Russians during the war. The 1909 Adana massacre, in which as many as 20,000–30,000 Armenians were massacred, is identified as "The Rebellion of Armenians of Adana". According to the book, the Armenian Hnchak and Dashnak organizations instituted rebellions in many parts of Anatolia, and "didn't hesitate to kill Armenians who would not join them," issuing instructions that "if you want to survive you have to kill your neighbor first." Among the claims highlighted by Akçam: "[The Armenians murdered] many people living in villages, even children, by attacking Turkish villages, which had become defenseless because all the Turkish men were fighting on the war fronts. ... They stabbed the Ottoman forces in the back. They created obstacles for the operations of the Ottoman units by cutting off their supply routes and destroying bridges and roads. ... They spied for Russia and by rebelling in the cities where they were located, they eased the way for the Russian invasion. ... Since the Armenians who engaged in massacres in collaboration with the Russians created a dangerous situation, this law required the migration of [Armenian people] from the towns they were living in to Syria, a safe Ottoman territory. ... Despite being in the midst of war, the Ottoman state took precautions and measures when it came to the Armenians who were migrating. Their tax payments were postponed, they were permitted to take any personal property they wished, government officials were assigned to ensure that they were protected from attacks during the journey and that their needs were met, police stations were established to ensure that their lives and properties were secure."
Similar revisionist claims found by Akçam in other textbooks included that Armenian "back-stabbing" was the reason the Ottomans lost the Russo-Turkish War of 1877–78 (similar to the post-war German stab-in-the-back myth), that the Hamidian massacres never happened, that the Armenians were armed by the Russians in late World War I to fight the Ottomans (in reality they had already been nearly annihilated in the area by that point), that Armenians killed 600,000 Turks during said war, that the deportations were meant to save Armenians from other violent Armenian gangs, and that deported Armenians were later allowed to retrieve their possessions and return to Turkey unharmed. As of 2015, Turkish textbooks continue to refer to Armenians as "traitors," deny the genocide, and assert that the Ottoman Turks "took necessary measures to counter Armenian separatism". Students are taught that Armenians were forcibly relocated to defend Turkish nationals from attacks, and Armenians are described as "dishonorable and treacherous".
Yugoslavia
Throughout the post-war era, though Tito denounced nationalist sentiments in historiography, those trends continued, with Croat and Serbian academics at times accusing one another of misrepresenting each other's histories, especially in relation to the Croat-Nazi alliance. Communist historiography was challenged in the 1980s, and a rehabilitation of Serbian nationalism by Serbian historians began. Historians and other members of the intelligentsia belonging to the Serbian Academy of Sciences and Arts (SANU) and the Writers Association played a significant role in articulating the new historical narrative. The process of writing a "new Serbian history" proceeded in parallel with the emerging ethno-nationalist mobilization of Serbs aimed at reorganizing the Yugoslav federation. Borrowing ideas and concepts from Holocaust historiography, Serbian historians alongside church leaders applied them to World War Two Yugoslavia, equating the Serbs with Jews and the Croats with Nazi Germans.
The Chetniks, along with the Ustashe, were vilified by Tito-era historiography within Yugoslavia. In the 1980s, Serbian historians initiated a re-examination of the narrative of how World War Two was told in Yugoslavia, a process accompanied by the rehabilitation of Chetnik leader Draža Mihailović. Towards the end of the 1990s, monographs relating to Mihailović and the Chetnik movement were produced by some younger historians who were ideologically close to it. Preoccupied with the era, Serbian historians have sought to vindicate the history of the Chetniks by portraying them as righteous freedom fighters battling the Nazis, while removing from history books the ambiguous alliances with the Italians and Germans; the crimes committed by Chetniks against Croats and Muslims, by contrast, are overall "cloaked in silence" in Serbian historiography. During the Milošević era, Serbian history was falsified to obscure the role that Serbian collaborators Milan Nedić and Dimitrije Ljotić played in cleansing Serbia's Jewish community, killing them in the country or deporting them to Eastern European concentration camps.
In the 1990s, following massive Western media coverage of the Yugoslav Wars, there was a rise in publications addressing historical revisionism in the former Yugoslavia. One of the most prominent authors in this field in the 1990s, writing on the newly emerged republics, is Noel Malcolm, whose works Bosnia: A Short History (1994) and Kosovo: A Short History (1998) saw robust debate among historians upon their release; the merits of the latter book were the subject of an extended debate in Foreign Affairs. Critics said that the book was "marred by his sympathies for its ethnic Albanian separatists, anti-Serbian bias, and illusions about the Balkans". In late 1999, Thomas Emmert of the history faculty of Gustavus Adolphus College in Minnesota reviewed the book in the Journal of Southern Europe and the Balkans Online and, while praising aspects of it, asserted that it was "shaped by the author's overriding determination to challenge Serbian myths", that Malcolm was "partisan", and that the book made a "transparent attempt to prove that the main Serbian myths are false". In 2006, a study by Frederick Anscombe examined issues surrounding scholarship on Kosovo, including Malcolm's Kosovo: A Short History. Anscombe noted that Malcolm offered a "detailed critique of the competing versions of Kosovo's history" and that his work marked a "remarkable reversal" of Western historians' previous acceptance of the "Serbian account" of the migration of the Serbs (1690) from Kosovo. Malcolm has been criticized for being "anti-Serbian" and as selective with the sources as the Serbs, while other, more restrained critics note that "his arguments are unconvincing". Anscombe observed that Malcolm, like the Serbian and Yugoslav historians who have ignored his conclusions, sidelines and is unwilling to consider indigenous evidence, such as that from the Ottoman archive, when composing national history.
French law recognizing colonialism's positive value
On 23 February 2005, the Union for a Popular Movement conservative majority at the French National Assembly passed a law compelling history textbooks and teachers to "acknowledge and recognize in particular the positive role of the French presence abroad, especially in North Africa". It was criticized by historians and teachers, among them Pierre Vidal-Naquet, who refused to recognize the French Parliament's right to influence the way history is written (despite the French Holocaust denial laws; see Loi Gayssot). The law was also challenged by left-wing parties and the former French colonies; critics argued that it was tantamount to refusing to acknowledge the racism inherent in French colonialism, and that the law itself is a form of historical revisionism.
Marcos martial law negationism in the Philippines
In the Philippines, the most prominent examples of historical negationism are linked to the Marcos family dynasty, particularly Imelda Marcos, Bongbong Marcos, and Imee Marcos. They have been accused of denying or trivializing the human rights violations committed during martial law and the plunder of the Philippines' coffers while Ferdinand Marcos was president.
Denial of the Muslim conquest of the Iberian peninsula
A spin-off of the vision of history espoused by the "inclusive Spanish nationalism" built in opposition to the National-Catholic brand of Spanish nationalism, this thesis was first formulated by Ignacio Olagüe (a dilettante historian connected to early Spanish fascism), particularly in his 1974 work La revolución islámica en Occidente ("The Islamic revolution in the West"). Olagüe argued that it was impossible for Arabs to have invaded Hispania in 711 since they had not yet established their dominance over the neighbouring part of North Africa. Instead, Olagüe held that the events of 711 could be explained as skirmishes involving allied North African troops within the context of a civil war pitting Catholic Goths led by Roderic against Goths adhering to some form of Arianism and a largely non-trinitarian Spanish population, including Nestorians, Gnostics, and Manichaeans. The negationist postulates of Olagüe were later adopted by certain sectors within Andalusian nationalism. These ideas were resurrected in the early 21st century by the Arabist Emilio González Ferrín.
Australia
The Indigenous Australian population plummeted during the Australian frontier wars. Aboriginal people were regarded as lacking any concept of property or land rights; consequently, Australia was considered terra nullius. Massacres and mass poisonings were carried out against Indigenous people, and Indigenous children were removed from their families in what is known as the Stolen Generations.
Nakba
Nakba denial is a form of historical denialism pertaining to the 1948 Palestinian expulsion and flight and its accompanying effects, which Palestinians refer to collectively as the "Nakba". Underlying assumptions of Nakba denial cited by scholars can include the denial of historically documented violence against Palestinians, the denial of a distinct Palestinian identity, the idea that Palestine was barren land, and the notion that Palestinian dispossession was part of mutual transfers between Arabs and Jews justified by war.
Ramifications and judicature
Sixteen European countries, as well as Canada and Israel, have criminalized historical negationism of the Holocaust. The Council of Europe defines it as the "denial, gross minimisation, approval or justification of genocide or crimes against humanity" (Article 6, Additional Protocol to the Convention on Cybercrime).
International law
Some council-member states proposed an additional protocol to the Council of Europe Cybercrime Convention addressing materials and "acts of racist or xenophobic nature committed through computer networks". It was negotiated from late 2001 to early 2002, and on 7 November 2002 the Council of Europe Committee of Ministers adopted the protocol's final text, titled Additional Protocol to the Convention on Cyber-crime, Concerning the Criminalization of Acts of a Racist and Xenophobic Nature Committed through Computer Systems ("Protocol"). It opened for signature on 28 January 2003 and entered into force on 1 March 2006; as of 30 November 2011, 20 states had signed and ratified the Protocol, and 15 others had signed but not yet ratified it (including Canada and South Africa).
The Protocol requires participating states to criminalize the dissemination of racist and xenophobic material, and of racist and xenophobic threats and insults, through computer networks such as the Internet. Article 6, Section 1 of the Protocol specifically covers Holocaust denial and other genocides recognized as such by international courts established since 1945 by relevant international legal instruments. Section 2 of Article 6 allows a Party to the Protocol, at its discretion, to prosecute violators only if the crime is committed with the intent to incite hatred, discrimination, or violence, or to use a reservation allowing the Party not to apply Article 6, either partly or entirely. The Council of Europe's Explanatory Report of the Protocol states that the "European Court of Human Rights has made it clear that the denial or revision of 'clearly established historical facts – such as the Holocaust – ... would be removed from the protection of Article 10 by Article 17' of the European Convention on Human Rights" (see the Lehideux and Isorni judgement of 23 September 1998).
Two of the English-speaking states in Europe, Ireland and the United Kingdom, have not signed the additional protocol (the third, Malta, signed on 28 January 2003 but has not yet ratified it). On 8 July 2005, Canada became the only non-European state to sign the convention, joined by South Africa in April 2008. The United States government does not believe that the final version of the Protocol is consistent with First Amendment rights under the United States Constitution and has informed the Council of Europe that it will not become a party to the Protocol.
Domestic law
There are domestic laws against negationism and hate speech (which may encompass negationism) in several countries, including:
Austria (Article I §3 Verbotsgesetz 1947 with its 1992 updates and added paragraph §3h).
Belgium (Belgian Holocaust denial law).
Czech Republic.
France (Gayssot Act).
Germany (§130(3) of the penal code).
Hungary.
Israel.
Lithuania.
Luxembourg.
Poland (Article 55 of the law establishing the Institute of National Remembrance 1998).
Portugal.
Romania.
Slovakia.
Switzerland (Article 261bis of the Penal Code).
Additionally, the Netherlands considers Holocaust denial a hate crime, which is a punishable offence. Wider uses of domestic law include the 1990 French Gayssot Act, which prohibits any "racist, anti-Semitic or xenophobic" speech, while the Czech Republic and Ukraine have criminalized the denial and minimization of Communist-era crimes.
In fiction
In the novel Nineteen Eighty-Four (1949) by George Orwell, the government of Oceania continually revises historical records to concord with the contemporary political explanations of The Party. When Oceania is at war with Eurasia, the public records (newspapers, cinema, television) indicate that Oceania has always been at war with Eurasia; yet, when Eurasia and Oceania are no longer fighting each other, the historical records are subjected to negationism, and the populace are brainwashed to believe that Oceania and Eurasia have always been allies against Eastasia. The protagonist of the story, Winston Smith, is an editor in the Ministry of Truth, responsible for effecting the continual historical revisionism that negates the contradictions of the past upon the contemporary world of Oceania.
To cope with the psychological stresses of life during wartime, Smith begins a diary, in which he reflects on the Party slogan "Who controls the past controls the future: who controls the present controls the past", and so illustrates the principal, ideological purpose of historical negationism.
Franz Kurowski was an extremely prolific right-wing German writer who dedicated his career first to producing Nazi military propaganda and then to post-war military pulp fiction and revisionist histories of World War II. His revisionist works claimed that the Wehrmacht behaved humanely and was innocent of war crimes, glorified war as a desirable state, and fabricated eyewitness reports of atrocities allegedly committed by the Allies, portraying Bomber Command's air raids on Cologne and Dresden as a planned genocide of the civilian population.
See also
Academic integrity
Alternative facts
Ash heap of history
Big lie
Black legend
Cognitive dissonance
Damnatio memoriae
Doublethink
Dunning School (United States)
History wars (Australia)
History wars (Canada)
Information warfare
Memory hole
National memory
Selective omission – the biased tabooing of some elements of a collective memory
Cases of denialism
1776 Commission
Anti-Katyn
Denial of atrocities against indigenous peoples
Denial of the Holodomor
Genocide denial – lists a number of particular cases
Holocaust denial
Myth of the clean Wehrmacht
Temple denial
Waffen-SS in popular culture
White Legend
Notes
References
Sources
Further reading
Shourie, Arun (2014). Eminent Historians: Their Technology, Their Line, Their Fraud. HarperCollins.
Shourie, Arun; Goel, Sita Ram; Narain, Harsh; Dubashi, J.; Swarup, Ram (1990). Hindu Temples – What Happened to Them, Vol. I (A Preliminary Survey).
Sisson, Jonathan (2010). "A Conceptual Framework for Dealing with the Past". Politorbis No. 50 (3/2010).
External links
Untruth in the Classroom, 1994
Why "revisionism" isn't
Mad Revisionist: A parody site on historical revisionism
Expert Witness Report by Richard J. Evans FBA presented at the trial "Irving vs. (1) Lipstadt and (2) Penguin Books"
Revisionist History – a satirical look at historical revisionism
A 43-page academic paper on revisionism concerning the Amerindians, in The Arizona Journal of International and Comparative Law, Vol. 18, No. 3
Nizkor Project Web site answering Holocaust deniers
Pseudohistory
Anthropology
Anthropology is the scientific study of humanity, concerned with human behavior, human biology, cultures, societies, and linguistics, in both the present and past, including archaic humans. Social anthropology studies patterns of behavior, while cultural anthropology studies cultural meaning, including norms and values. The term sociocultural anthropology is commonly used today. Linguistic anthropology studies how language influences social life. Biological or physical anthropology studies the biological development of humans.
Archaeology, often termed the "anthropology of the past", studies human activity through investigation of physical evidence. It is considered a branch of anthropology in North America and Asia, while in Europe archaeology is viewed as a discipline in its own right or grouped under related disciplines such as history and palaeontology.
Etymology
The abstract noun anthropology is first attested in reference to history. Its present use first appeared in Renaissance Germany in the works of Magnus Hundt and Otto Casmann. Their Neo-Latin term derived from the combining forms of the Greek words ánthrōpos ("human") and lógos ("study"). Its adjectival form appeared in the works of Aristotle. It began to be used in English, possibly via French anthropologie, by the early 18th century.
Origin and development of the term
Through the 19th century
In 1647, the Bartholins, early scholars of the University of Copenhagen, defined anthropology as follows:
Sporadic use of the term for some of the subject matter occurred subsequently, such as the use by Étienne Serres in 1839 to describe the natural history, or paleontology, of man, based on comparative anatomy, and the creation of a chair in anthropology and ethnography in 1850 at the French National Museum of Natural History by Jean Louis Armand de Quatrefages de Bréau. Various short-lived organizations of anthropologists had already been formed. The Société Ethnologique de Paris, the first to use the term ethnology, was formed in 1839 and focused on methodically studying human races. After the death of its founder, William Frédéric Edwards, in 1842, it gradually declined in activity until it eventually dissolved in 1862.
Meanwhile, the Ethnological Society of New York, currently the American Ethnological Society, was founded on its model in 1842, as well as the Ethnological Society of London in 1843, a break-away group of the Aborigines' Protection Society. These anthropologists of the times were liberal, anti-slavery, and pro-human-rights activists. They maintained international connections.
Anthropology and many other current fields are the intellectual results of the comparative methods developed in the earlier 19th century. Theorists in diverse fields such as anatomy, linguistics, and ethnology, started making feature-by-feature comparisons of their subject matters, and were beginning to suspect that similarities between animals, languages, and folkways were the result of processes or laws unknown to them then. For them, the publication of Charles Darwin's On the Origin of Species was the epiphany of everything they had begun to suspect. Darwin himself arrived at his conclusions through comparison of species he had seen in agronomy and in the wild.
Darwin and Wallace unveiled evolution in the late 1850s. There was an immediate rush to bring it into the social sciences. Paul Broca in Paris was in the process of breaking away from the Société de biologie to form the first of the explicitly anthropological societies, the Société d'Anthropologie de Paris, meeting for the first time in Paris in 1859. When he read Darwin, he became an immediate convert to Transformisme, as the French called evolutionism. His definition now became "the study of the human group, considered as a whole, in its details, and in relation to the rest of nature".
Broca, being what today would be called a neurosurgeon, had taken an interest in the pathology of speech. He wanted to localize the difference between man and the other animals, which appeared to reside in speech. He discovered the speech center of the human brain, today called Broca's area after him. His interest was mainly in biological anthropology, but a German philosopher specializing in psychology, Theodor Waitz, took up the theme of general and social anthropology in his six-volume work, entitled Die Anthropologie der Naturvölker, 1859–1864. The title was soon translated as "The Anthropology of Primitive Peoples". The last two volumes were published posthumously.
Waitz defined anthropology as "the science of the nature of man". Following Broca's lead, Waitz points out that anthropology is a new field, which would gather material from other fields, but would differ from them in the use of comparative anatomy, physiology, and psychology to differentiate man from "the animals nearest to him". He stresses that the data of comparison must be empirical, gathered by experimentation. The history of civilization, as well as ethnology, are to be brought into the comparison. It is to be presumed fundamentally that the species, man, is a unity, and that "the same laws of thought are applicable to all men".
Waitz was influential among British ethnologists. In 1863, the explorer Richard Francis Burton and the speech therapist James Hunt broke away from the Ethnological Society of London to form the Anthropological Society of London, which henceforward would follow the path of the new anthropology rather than just ethnology. It was the second society dedicated to general anthropology in existence. Representatives from the French Société were present, though not Broca. In his keynote address, printed in the first volume of its new publication, The Anthropological Review, Hunt stressed the work of Waitz, adopting his definitions as a standard. Among the first associates were the young Edward Burnett Tylor, inventor of cultural anthropology, and his brother Alfred Tylor, a geologist. Previously Edward had referred to himself as an ethnologist; subsequently, an anthropologist.
Similar organizations in other countries followed: the Anthropological Society of Madrid (1865), the Anthropological Society of Vienna (1870), the Italian Society of Anthropology and Ethnology (1871), the American Anthropological Association (1902), and many others subsequently. The majority of these were evolutionists. One notable exception was the Berlin Society for Anthropology, Ethnology, and Prehistory (1869), founded by Rudolph Virchow, known for his vituperative attacks on the evolutionists. Not religious himself, he insisted that Darwin's conclusions lacked empirical foundation.
During the last three decades of the 19th century, a proliferation of anthropological societies and associations occurred, most independent, most publishing their own journals, and all international in membership and association. The major theorists belonged to these organizations. They supported the gradual osmosis of anthropology curricula into the major institutions of higher learning. By 1898, 48 educational institutions in 13 countries had some curriculum in anthropology. None of the 75 faculty members were under a department named anthropology.
20th and 21st centuries
Anthropology as a specialized field of academic study developed much through the end of the 19th century. Then it rapidly expanded beginning in the early 20th century to the point where many of the world's higher educational institutions typically included anthropology departments. Thousands of anthropology departments have come into existence, and anthropology has also diversified from a few major subdivisions to dozens more. Practical anthropology, the use of anthropological knowledge and technique to solve specific problems, has arrived; for example, the presence of buried victims might stimulate the use of a forensic archaeologist to recreate the final scene. The organization has also reached a global level. For example, the World Council of Anthropological Associations (WCAA), "a network of national, regional and international associations that aims to promote worldwide communication and cooperation in anthropology", currently contains members from about three dozen nations.
Since the work of Franz Boas and Bronisław Malinowski in the late 19th and early 20th centuries, social anthropology in Great Britain and cultural anthropology in the US have been distinguished from other social sciences by their emphasis on cross-cultural comparisons, long-term in-depth examination of context, and the importance they place on participant-observation or experiential immersion in the area of research. Cultural anthropology, in particular, has emphasized cultural relativism, holism, and the use of findings to frame cultural critiques. This has been particularly prominent in the United States, from Boas' arguments against 19th-century racial ideology, through Margaret Mead's advocacy for gender equality and sexual liberation, to current criticisms of post-colonial oppression and promotion of multiculturalism. Ethnography is one of its primary research designs as well as the text that is generated from anthropological fieldwork.
In Great Britain and the Commonwealth countries, the British tradition of social anthropology tends to dominate. In the United States, anthropology has traditionally been divided into the four-field approach developed by Franz Boas in the early 20th century: biological or physical anthropology; social, cultural, or sociocultural anthropology; archaeological anthropology; and linguistic anthropology. These fields frequently overlap but tend to use different methodologies and techniques.
European countries with overseas colonies tended to practice more ethnology (a term coined and defined by Adam F. Kollár in 1783). It is sometimes referred to as sociocultural anthropology in the parts of the world that were influenced by the European tradition.
Fields
Anthropology is a global discipline involving the humanities, social sciences, and natural sciences. Anthropology builds upon knowledge from the natural sciences, including discoveries about the origin and evolution of Homo sapiens, human physical traits, human behavior, the variations among different groups of humans, and how the evolutionary past of Homo sapiens has influenced its social organization and culture, and from the social sciences, including the organization of human social and cultural relations, institutions, and social conflicts. Early anthropology originated in Classical Greece and Persia, where scholars studied and tried to understand observable cultural diversity, as exemplified later by Al-Biruni of the Islamic Golden Age. As such, anthropology has been central in the development of several new (late 20th century) interdisciplinary fields such as cognitive science, global studies, and various ethnic studies.
According to Clifford Geertz,
Sociocultural anthropology has been heavily influenced by structuralist and postmodern theories, as well as a shift toward the analysis of modern societies. During the 1970s and 1990s, there was an epistemological shift away from the positivist traditions that had largely informed the discipline. During this shift, enduring questions about the nature and production of knowledge came to occupy a central place in cultural and social anthropology. In contrast, archaeology and biological anthropology remained largely positivist. Due to this difference in epistemology, the four sub-fields of anthropology have lacked cohesion over the last several decades.
Sociocultural
Sociocultural anthropology draws together the principal axes of cultural anthropology and social anthropology. Cultural anthropology is the comparative study of the manifold ways in which people make sense of the world around them, while social anthropology is the study of the relationships among individuals and groups. Cultural anthropology is more related to philosophy, literature and the arts (how one's culture affects the experience for self and group, contributing to a more complete understanding of the people's knowledge, customs, and institutions), while social anthropology is more related to sociology and history. In that, it helps develop an understanding of social structures, typically of others and other populations (such as minorities, subgroups, dissidents, etc.). There is no hard-and-fast distinction between them, and these categories overlap to a considerable degree.
Inquiry in sociocultural anthropology is guided in part by cultural relativism, the attempt to understand other societies in terms of their own cultural symbols and values. Accepting other cultures in their own terms moderates reductionism in cross-cultural comparison. This project is often accommodated in the field of ethnography. Ethnography can refer to both a methodology and the product of ethnographic research, i.e. an ethnographic monograph. As a methodology, ethnography is based upon long-term fieldwork within a community or other research site. Participant observation is one of the foundational methods of social and cultural anthropology. Ethnology involves the systematic comparison of different cultures. The process of participant-observation can be especially helpful to understanding a culture from an emic (conceptual) as opposed to an etic (technical) point of view.
The study of kinship and social organization is a central focus of sociocultural anthropology, as kinship is a human universal. Sociocultural anthropology also covers economic and political organization, law and conflict resolution, patterns of consumption and exchange, material culture, technology, infrastructure, gender relations, ethnicity, childrearing and socialization, religion, myth, symbols, values, etiquette, worldview, sports, music, nutrition, recreation, games, food, festivals, and language (which is also the object of study in linguistic anthropology).
Comparison across cultures is a key element of method in sociocultural anthropology, including the industrialized (and de-industrialized) West. The Standard Cross-Cultural Sample (SCCS) includes 186 such cultures.
Biological
Biological anthropology and physical anthropology are synonymous terms to describe anthropological research focused on the study of humans and non-human primates in their biological, evolutionary, and demographic dimensions. It examines the biological and social factors that have affected the evolution of humans and other primates, and that generate, maintain or change contemporary genetic and physiological variation.
Archaeological
Archaeology is the study of the human past through its material remains. Artifacts, faunal remains, and human altered landscapes are evidence of the cultural and material lives of past societies. Archaeologists examine material remains in order to deduce patterns of past human behavior and cultural practices. Ethnoarchaeology is a type of archaeology that studies the practices and material remains of living human groups in order to gain a better understanding of the evidence left behind by past human groups, who are presumed to have lived in similar ways.
Linguistic
Linguistic anthropology (not to be confused with anthropological linguistics) seeks to understand the processes of human communications, verbal and non-verbal, variation in language across time and space, the social uses of language, and the relationship between language and culture. It is the branch of anthropology that brings linguistic methods to bear on anthropological problems, linking the analysis of linguistic forms and processes to the interpretation of sociocultural processes. Linguistic anthropologists often draw on related fields including sociolinguistics, pragmatics, cognitive linguistics, semiotics, discourse analysis, and narrative analysis.
Ethnography
Ethnography is a method of analysing social or cultural interaction. It often involves participant observation, though an ethnographer may also draw from texts written by participants in social interactions. Ethnography views first-hand experience and social context as important.
Tim Ingold distinguishes ethnography from anthropology, arguing that anthropology tries to construct general theories of human experience, applicable in general and novel settings, while ethnography concerns itself with fidelity. He argues that anthropologists must make their writing consistent with their understanding of literature and other theory, but notes that ethnography may be of use to anthropologists and that the fields inform one another.
Key topics by field: sociocultural
Art, media, music, dance and film
Art
One of the central problems in the anthropology of art concerns the universality of 'art' as a cultural phenomenon. Several anthropologists have noted that the Western categories of 'painting', 'sculpture', or 'literature', conceived as independent artistic activities, do not exist, or exist in a significantly different form, in most non-Western contexts. To surmount this difficulty, anthropologists of art have focused on formal features in objects which, without exclusively being 'artistic', have certain evident 'aesthetic' qualities. Boas' Primitive Art, Claude Lévi-Strauss' The Way of the Masks (1982) or Geertz's 'Art as Cultural System' (1983) are some examples of this trend toward transforming the anthropology of 'art' into an anthropology of culturally specific 'aesthetics'.
Media
Media anthropology (also known as the anthropology of media or mass media) emphasizes ethnographic studies as a means of understanding producers, audiences, and other cultural and social aspects of mass media. The types of ethnographic contexts explored range from contexts of media production (e.g., ethnographies of newsrooms in newspapers, journalists in the field, film production) to contexts of media reception, following audiences in their everyday responses to media. Other types include cyber anthropology, a relatively new area of internet research, as well as ethnographies of other areas of research which happen to involve media, such as development work, social movements, or health education. This is in addition to many classic ethnographic contexts, where media such as radio, the press, new media, and television have started to make their presences felt since the early 1990s.
Music
Ethnomusicology is an academic field encompassing various approaches to the study of music (broadly defined), that emphasize its cultural, social, material, cognitive, biological, and other dimensions or contexts instead of or in addition to its isolated sound component or any particular repertoire.
Ethnomusicology can be used in a wide variety of fields, such as teaching, politics, and cultural anthropology. While the origins of ethnomusicology date back to the 18th and 19th centuries, it was formally termed "ethnomusicology" by the Dutch scholar Jaap Kunst. Later, the influence of study in this area spawned the creation of the periodical Ethnomusicology and the Society for Ethnomusicology.
Visual
Visual anthropology is concerned, in part, with the study and production of ethnographic photography, film and, since the mid-1990s, new media. While the term is sometimes used interchangeably with ethnographic film, visual anthropology also encompasses the anthropological study of visual representation, including areas such as performance, museums, art, and the production and reception of mass media. Visual representations from all cultures, such as sandpaintings, tattoos, sculptures and reliefs, cave paintings, scrimshaw, jewelry, hieroglyphs, paintings, and photographs are included in the focus of visual anthropology.
Economic, political economic, applied and development
Economic
Economic anthropology attempts to explain human economic behavior in its widest historic, geographic, and cultural scope. It has a complex relationship with the discipline of economics, of which it is highly critical. Its origins as a sub-field of anthropology begin with work by the Polish-British founder of anthropology, Bronisław Malinowski, and his French counterpart, Marcel Mauss, on the nature of gift-giving exchange (or reciprocity) as an alternative to market exchange. Economic anthropology remains, for the most part, focused upon exchange; the school of thought derived from Marx and known as Political Economy focuses, in contrast, on production. Economic anthropologists have abandoned the primitivist niche they were relegated to by economists and have now turned to examining corporations, banks, and the global financial system from an anthropological perspective.
Political economy
Political economy in anthropology is the application of the theories and methods of historical materialism to the traditional concerns of anthropology, including, but not limited to, non-capitalist societies. Political economy introduced questions of history and colonialism to ahistorical anthropological theories of social structure and culture. Three main areas of interest rapidly developed. The first of these areas was concerned with the "pre-capitalist" societies that were subject to evolutionary "tribal" stereotypes. Sahlins' work on hunter-gatherers as the "original affluent society" did much to dissipate that image. The second area was concerned with the vast majority of the world's population at the time, the peasantry, many of whom were involved in complex revolutionary wars such as in Vietnam. The third area was on colonialism, imperialism, and the creation of the capitalist world-system. More recently, these political economists have more directly addressed issues of industrial (and post-industrial) capitalism around the world.
Applied
Applied anthropology refers to the application of the method and theory of anthropology to the analysis and solution of practical problems. It is a "complex of related, research-based, instrumental methods which produce change or stability in specific cultural systems through the provision of data, initiation of direct action, and/or the formulation of policy". Applied anthropology is the practical side of anthropological research; it includes researcher involvement and activism within the participating community. It is closely related to development anthropology (distinct from the more critical anthropology of development).
Development
Anthropology of development tends to view development from a critical perspective. The kinds of issues addressed, and the implications of the approach, involve asking why, if a key development goal is to alleviate poverty, poverty is increasing; why there is such a gap between plans and outcomes; why those working in development are so willing to disregard history and the lessons it might offer; and why development is so externally driven rather than having an internal basis. In short, why does so much planned development fail?
Kinship, feminism, gender and sexuality
Kinship
Kinship can refer both to the study of the patterns of social relationships in one or more human cultures, or it can refer to the patterns of social relationships themselves. Over its history, anthropology has developed a number of related concepts and terms, such as "descent", "descent groups", "lineages", "affines", "cognates", and even "fictive kinship". Broadly, kinship patterns may be considered to include people related both by descent (one's social relations during development) and by marriage. Kinship encompasses two kinds of family: a person's biological family, the people with whom they share DNA, a relation called consanguinity or "blood ties"; and a chosen family, whose members a person selects for themselves. In some cases, people are closer to their chosen family than to their biological family.
Feminist
Feminist anthropology is a four-field approach to anthropology (archeological, biological, cultural, linguistic) that seeks to reduce male bias in research findings, anthropological hiring practices, and the scholarly production of knowledge. Anthropology often engages with feminists from non-Western traditions, whose perspectives and experiences can differ from those of white feminists of Europe, America, and elsewhere. From the perspective of the Western world, historically such 'peripheral' perspectives have been ignored, observed only from an outsider perspective, and regarded as less valid or less important than knowledge from the Western world. Exploring and addressing that double bias against women from marginalized racial or ethnic groups is of particular interest in intersectional feminist anthropology.
Feminist anthropologists have stated that their publications have contributed to anthropology, along the way correcting the systemic biases beginning with the "patriarchal origins of anthropology (and academia)". They note that from 1891 to 1930 more than 85% of doctorates in anthropology went to males, more than 81% of recipients were under 35, and only 7.2% went to anyone over 40 years old, reflecting an age gap in the pursuit of anthropology by first-wave feminists until later in life. This correction of systemic bias may include mainstream feminist theory, history, linguistics, archaeology, and anthropology. Feminist anthropologists are often concerned with the construction of gender across societies. Gender constructs are of particular interest when studying sexism.
According to St. Clair Drake, Vera Mae Green was, until "[w]ell into the 1960s", the only African American female anthropologist who was also a Caribbeanist. She studied ethnic and family relations in the Caribbean as well as the United States, and thereby tried to improve the way black life, experiences, and culture were studied. However, Zora Neale Hurston, although often primarily considered to be a literary author, was trained in anthropology by Franz Boas and published Tell My Horse (1938) about her "anthropological observations" of voodoo in the Caribbean.
Feminist anthropology is inclusive of the anthropology of birth as a specialization, which is the anthropological study of pregnancy and childbirth within cultures and societies.
Medical, nutritional, psychological, cognitive and transpersonal
Medical
Medical anthropology is an interdisciplinary field which studies "human health and disease, health care systems, and biocultural adaptation". William Caudill is credited as the first to delineate the field of medical anthropology. Currently, research in medical anthropology is one of the main growth areas in the field of anthropology as a whole. It focuses on the following six basic fields:
The development of systems of medical knowledge and medical care
The patient-physician relationship
The integration of alternative medical systems in culturally diverse environments
The interaction of social, environmental and biological factors which influence health and illness both in the individual and the community as a whole
The critical analysis of interaction between psychiatric services and migrant populations ("critical ethnopsychiatry": Beneduce 2004, 2007)
The impact of biomedicine and biomedical technologies in non-Western settings
Other subjects that have become central to medical anthropology worldwide are violence and social suffering (Farmer, 1999, 2003; Beneduce, 2010) as well as other issues that involve physical and psychological harm and suffering that are not a result of illness. On the other hand, there are fields that intersect with medical anthropology in terms of research methodology and theoretical production, such as cultural psychiatry and transcultural psychiatry or ethnopsychiatry.
Nutritional
Nutritional anthropology is a synthetic concept that deals with the interplay between economic systems, nutritional status, and food security, and how changes in the former affect the latter. When economic and environmental changes in a community affect access to food, food security, and dietary health, this interplay between culture and biology is in turn connected to broader historical and economic trends associated with globalization. Nutritional status affects overall health status, work performance potential, and the overall potential for economic development (either in terms of human development or traditional western models) for any given group of people.
Psychological
Psychological anthropology is an interdisciplinary subfield of anthropology that studies the interaction of cultural and mental processes. This subfield tends to focus on ways in which humans' development and enculturation within a particular cultural group – with its own history, language, practices, and conceptual categories – shape processes of human cognition, emotion, perception, motivation, and mental health. It also examines how the understanding of cognition, emotion, motivation, and similar psychological processes inform or constrain our models of cultural and social processes.
Cognitive
Cognitive anthropology seeks to explain patterns of shared knowledge, cultural innovation, and transmission over time and space using the methods and theories of the cognitive sciences (especially experimental psychology and evolutionary biology) often through close collaboration with historians, ethnographers, archaeologists, linguists, musicologists and other specialists engaged in the description and interpretation of cultural forms. Cognitive anthropology is concerned with what people from different groups know and how that implicit knowledge changes the way people perceive and relate to the world around them.
Transpersonal
Transpersonal anthropology studies the relationship between altered states of consciousness and culture. As with transpersonal psychology, the field is much concerned with altered states of consciousness (ASC) and transpersonal experience. However, the field differs from mainstream transpersonal psychology in taking more cognizance of cross-cultural issues – for instance, the roles of myth, ritual, diet, and text in evoking and interpreting extraordinary experiences.
Political and legal
Political
Political anthropology concerns the structure of political systems, looked at from the basis of the structure of societies. Political anthropology developed as a discipline concerned primarily with politics in stateless societies; a new development, begun in the 1960s and still unfolding, saw anthropologists increasingly study more "complex" social settings in which the presence of states, bureaucracies, and markets entered both ethnographic accounts and the analysis of local phenomena. The turn towards complex societies meant that political themes were taken up at two main levels. Firstly, anthropologists continued to study political organization and political phenomena that lay outside the state-regulated sphere (as in patron-client relations or tribal political organization). Secondly, anthropologists slowly started to develop a disciplinary concern with states and their institutions (and with the relationship between formal and informal political institutions). An anthropology of the state developed, and it is a thriving field today. Geertz's comparative work on "Negara", the Balinese state, is an early, famous example.
Legal
Legal anthropology or anthropology of law specializes in "the cross-cultural study of social ordering". Earlier legal anthropological research often focused more narrowly on conflict management, crime, sanctions, or formal regulation. More recent applications include issues such as human rights, legal pluralism, and political uprisings.
Public
Public anthropology was created by Robert Borofsky, a professor at Hawaii Pacific University, to "demonstrate the ability of anthropology and anthropologists to effectively address problems beyond the discipline – illuminating larger social issues of our times as well as encouraging broad, public conversations about them with the explicit goal of fostering social change".
Nature, science, and technology
Cyborg
Cyborg anthropology originated as a sub-focus group within the American Anthropological Association's annual meeting in 1993. The sub-group was very closely related to science and technology studies (STS) and the Society for Social Studies of Science. Donna Haraway's 1985 Cyborg Manifesto could be considered the founding document of cyborg anthropology by first exploring the philosophical and sociological ramifications of the term. Cyborg anthropology studies humankind and its relations with the technological systems it has built, specifically modern technological systems that have reflexively shaped notions of what it means to be human beings.
Digital
Digital anthropology is the study of the relationship between humans and digital-era technology and extends to various areas where anthropology and technology intersect. It is sometimes grouped with sociocultural anthropology, and sometimes considered part of material culture. The field is new, and thus has a variety of names with a variety of emphases. These include techno-anthropology, digital ethnography, cyberanthropology, and virtual anthropology.
Ecological
Ecological anthropology is defined as the "study of cultural adaptations to environments". The sub-field is also defined as "the study of relationships between a population of humans and their biophysical environment". The focus of its research concerns "how cultural beliefs and practices helped human populations adapt to their environments, and how their environments change across space and time". The contemporary perspective of environmental anthropology, and arguably at least the backdrop, if not the focus, of most of the ethnographies and cultural fieldwork of today, is political ecology. Many characterize this new perspective as more informed with culture, politics and power, globalization, localized issues, 21st-century anthropology, and more. The focus and data interpretation are often used in arguments for or against, or in the creation of, policy, and to prevent corporate exploitation and damage of land. Often, the observer has become an active part of the struggle, either directly (organizing, participation) or indirectly (articles, documentaries, books, ethnographies). Such is the case with environmental justice advocate Melissa Checker and her relationship with the people of Hyde Park.
Environment
Social sciences, like anthropology, can provide interdisciplinary approaches to the environment. Professor Kay Milton, Director of the Anthropology research network in the School of History and Anthropology, describes anthropology as distinctive, its most distinguishing feature being its interest in non-industrial indigenous and traditional societies. Anthropological theory is distinct because of the consistent presence of the concept of culture; not an exclusive topic but a central position in the study and a deep concern with the human condition. Milton describes three trends that are causing a fundamental shift in what characterizes anthropology: dissatisfaction with the cultural relativist perspective, reaction against Cartesian dualisms which obstruct progress in theory (the nature–culture divide), and finally an increased attention to globalization (transcending the barriers of time and space).
Environmental discourse appears to be characterized by a high degree of globalization. (The troubling problem is borrowing non-indigenous practices and creating standards, concepts, philosophies and practices in western countries.) Environmental discourse has now become a distinct position within anthropology as a discipline. Knowledge about diversity in human culture can be important in addressing environmental problems; anthropology is now a study of human ecology. Human activity is the most important agent in creating environmental change, a subject commonly studied in human ecology, which can claim a central place in how environmental problems are examined and addressed. Anthropology also contributes to environmental discourse through theory and analysis, and by refining definitions to make them more neutral or universal. The term environmentalism typically refers to a concern that the environment should be protected, particularly from the harmful effects of human activities, and environmentalism itself can be expressed in many ways. Anthropologists can open the doors of environmentalism by looking beyond industrial society: understanding the opposition between industrial and non-industrial relationships; knowing what ecosystem people and biosphere people are and are affected by; attending to dependent and independent variables, "primitive" ecological wisdom, diverse environments, resource management, and diverse cultural traditions; and recognizing that environmentalism is itself a part of culture.
Historical
Ethnohistory is the study of ethnographic cultures and indigenous customs by examining historical records. It is also the study of the history of various ethnic groups that may or may not exist today. Ethnohistory uses both historical and ethnographic data as its foundation. Its historical methods and materials go beyond the standard use of documents and manuscripts. Practitioners recognize the utility of such source material as maps, music, paintings, photography, folklore, oral tradition, site exploration, archaeological materials, museum collections, enduring customs, language, and place names.
Religion
The anthropology of religion involves the study of religious institutions in relation to other social institutions, and the comparison of religious beliefs and practices across cultures. Modern anthropology assumes that there is complete continuity between magical thinking and religion, and that every religion is a cultural product, created by the human community that worships it.
Urban
Urban anthropology is concerned with issues of urbanization, poverty, and neoliberalism. Ulf Hannerz quotes a 1960s remark that traditional anthropologists were "a notoriously agoraphobic lot, anti-urban by definition". Various social processes in the Western world, as well as in the "Third World" (the latter being the habitual focus of attention of anthropologists), brought the attention of "specialists in 'other cultures'" closer to their homes. There are two main approaches to urban anthropology: examining the types of cities, or examining the social issues within cities. These two methods are overlapping and dependent on each other. By defining different types of cities, one would use social factors as well as economic and political factors to categorize the cities. By looking directly at the different social issues, one would also be studying how they affect the dynamic of the city.
Key topics by field: archaeological and biological
Anthrozoology
Anthrozoology (also known as "human–animal studies") is the study of interactions between humans and other animals. It is an interdisciplinary field that overlaps with a number of other disciplines, including anthropology, ethology, medicine, psychology, veterinary medicine and zoology. A major focus of anthrozoologic research is quantifying the positive effects of human–animal relationships on either party and studying their interactions. It includes scholars from a diverse range of fields, including anthropology, sociology, biology, and philosophy.
Biocultural
Biocultural anthropology is the scientific exploration of the relationships between human biology and culture. Physical anthropologists throughout the first half of the 20th century viewed this relationship from a racial perspective; that is, from the assumption that typological human biological differences lead to cultural differences. After World War II the emphasis began to shift toward an effort to explore the role culture plays in shaping human biology.
Evolutionary
Evolutionary anthropology is the interdisciplinary study of the evolution of human physiology and human behaviour, and of the relation between hominins and non-hominin primates. Based in both natural science and social science, it combines the study of human development with socioeconomic factors, and is concerned with both the biological and the cultural evolution of humans, past and present. It takes a scientific approach, bringing together fields such as archaeology, behavioral ecology, psychology, primatology, and genetics, and drawing on many lines of evidence to understand the human experience.
Forensic
Forensic anthropology is the application of the science of physical anthropology and human osteology in a legal setting, most often in criminal cases where the victim's remains are in the advanced stages of decomposition. A forensic anthropologist can assist in the identification of deceased individuals whose remains are decomposed, burned, mutilated or otherwise unrecognizable. The adjective "forensic" refers to the application of this subfield of science to a court of law.
Palaeoanthropology
Paleoanthropology combines the disciplines of paleontology and physical anthropology. It is the study of ancient humans, as found in fossil hominid evidence such as petrified bones and footprints. The genetics and morphology of specimens are crucially important to this field. Markers on specimens, such as enamel fractures and dental decay on teeth, can also give insight into the behaviour and diet of past populations.
Organizations
Contemporary anthropology is an established science with academic departments at most universities and colleges. The single largest organization of anthropologists is the American Anthropological Association (AAA), which was founded in 1903. Its members are anthropologists from around the globe.
In 1989, a group of European and American scholars in the field of anthropology established the European Association of Social Anthropologists (EASA) which serves as a major professional organization for anthropologists working in Europe. The EASA seeks to advance the status of anthropology in Europe and to increase visibility of marginalized anthropological traditions and thereby contribute to the project of a global anthropology or world anthropology.
Hundreds of other organizations exist in the various sub-fields of anthropology, sometimes divided up by nation or region, and many anthropologists work with collaborators in other disciplines, such as geology, physics, zoology, paleontology, anatomy, music theory, art history, sociology and so on, belonging to professional societies in those disciplines as well.
List of major organizations
American Anthropological Association
American Ethnological Society
Asociación de Antropólogos Iberoamericanos en Red, AIBR
Anthropological Society of London
Center for World Indigenous Studies
Ethnological Society of London
European Association of Social Anthropologists
Max Planck Institute for Evolutionary Anthropology
Network of Concerned Anthropologists
N.N. Miklukho-Maklai Institute of Ethnology and Anthropology
Royal Anthropological Institute of Great Britain and Ireland
Society for Anthropological Sciences
Society for Applied Anthropology
USC Center for Visual Anthropology
Ethics
As the field has matured, it has debated and arrived at ethical principles aimed at protecting both the subjects of anthropological research and the researchers themselves, and professional societies have generated codes of ethics.
Anthropologists, like other researchers (especially historians and scientists engaged in field research), have over time assisted state policies and projects, especially colonialism.
Some commentators have contended:
That the discipline grew out of colonialism, perhaps was in league with it, and derives some of its key notions from it, consciously or not. (See, for example, Gough, Pels and Salemink, but cf. Lewis 2004).
That ethnographic work is often ahistorical, writing about people as if they were "out of time" in an "ethnographic present" (Johannes Fabian, Time and Its Other).
In his article "The Misrepresentation of Anthropology and Its Consequence", Herbert S. Lewis critiqued older anthropological works that presented other cultures as if they were strange and unusual. While the findings of those researchers should not be discarded, the field should learn from its mistakes.
Cultural relativism
As part of their quest for scientific objectivity, present-day anthropologists typically urge cultural relativism, which has an influence on all the sub-fields of anthropology. This is the notion that cultures should not be judged by another's values or viewpoints, but be examined dispassionately on their own terms. There should be no notions, in good anthropology, of one culture being better or worse than another culture.
Ethical commitments in anthropology include noticing and documenting genocide, infanticide, racism, sexism, mutilation (including circumcision and subincision), and torture. Topics like racism, slavery, and human sacrifice attract anthropological attention, and theories ranging from nutritional deficiencies to genes to acculturation and colonialism have been proposed to explain their origins and continued recurrence.
To illustrate the depth of an anthropological approach, one can take just one of these topics, such as racism, and find thousands of anthropological references stretching across all the major and minor sub-fields.
Military involvement
Anthropologists' involvement with the U.S. government, in particular, has caused bitter controversy within the discipline. Franz Boas publicly objected to US participation in World War I, and after the war he published a brief exposé and condemnation of the participation of several American archaeologists in espionage in Mexico under their cover as scientists.
But by the 1940s, many of Boas' anthropologist contemporaries were active in the Allied war effort against the Axis Powers (Nazi Germany, Fascist Italy, and Imperial Japan). Many served in the armed forces, while others worked in intelligence (for example, the Office of Strategic Services and the Office of War Information). At the same time, David H. Price's work on American anthropology during the Cold War provides detailed accounts of the pursuit and dismissal of several anthropologists from their jobs for communist sympathies.
Attempts to accuse anthropologists of complicity with the CIA and government intelligence activities during the Vietnam War years have turned up little. Many anthropologists (students and teachers) were active in the antiwar movement. Numerous resolutions condemning the war in all its aspects were passed overwhelmingly at the annual meetings of the American Anthropological Association (AAA).
Professional anthropological bodies often object to the use of anthropology for the benefit of the state. Their codes of ethics or statements may bar anthropologists from giving secret briefings. The Association of Social Anthropologists of the UK and Commonwealth (ASA) has called certain scholarship ethically dangerous. The "Principles of Professional Responsibility" issued by the American Anthropological Association and amended through November 1986 stated that "in relation with their own government and with host governments ... no secret research, no secret reports or debriefings of any kind should be agreed to or given." The current "Principles of Professional Responsibility" makes no explicit mention of ethics surrounding state interactions.
Anthropologists, along with other social scientists, are working with the US military as part of the US Army's strategy in Afghanistan. The Christian Science Monitor reports that "Counterinsurgency efforts focus on better grasping and meeting local needs" in Afghanistan, under the Human Terrain System (HTS) program; in addition, HTS teams are working with the US military in Iraq. In 2009, the American Anthropological Association's Commission on the Engagement of Anthropology with the US Security and Intelligence Communities (CEAUSSIC) released its final report on the program.
Post-World War II developments
Before World War II, British 'social anthropology' and American 'cultural anthropology' were still distinct traditions. After the war, enough British and American anthropologists borrowed ideas and methodological approaches from one another that some began to speak of them collectively as 'sociocultural' anthropology.
Basic trends
There are several characteristics that tend to unite anthropological work. One of the central characteristics is that anthropology tends to provide a comparatively more holistic account of phenomena and tends to be highly empirical. The quest for holism leads most anthropologists to study a particular place, problem or phenomenon in detail, using a variety of methods, over a more extensive period than normal in many parts of academia.
In the 1990s and 2000s, calls were heard for clarification of what constitutes a culture, of how an observer knows where his or her own culture ends and another begins, and of other crucial topics in writing anthropology. These dynamic relationships between what can be observed on the ground and what can be observed by compiling many local observations remain fundamental in any kind of anthropology, whether cultural, biological, linguistic or archaeological.
Biological anthropologists are interested both in human variation and in the possibility of human universals (behaviors, ideas or concepts shared by virtually all human cultures). They use many different methods of study, but modern population genetics, participant observation and other techniques often take anthropologists "into the field," which means traveling to a community in its own setting, to do something called "fieldwork." On the biological or physical side, human measurements, genetic samples, and nutritional data may be gathered and published as articles or monographs.
Along with dividing up their project by theoretical emphasis, anthropologists typically divide the world up into relevant time periods and geographic regions. Human time on Earth is divided up into relevant cultural traditions based on material, such as the Paleolithic and the Neolithic, of particular use in archaeology. Further cultural subdivisions according to tool types, such as Oldowan, Mousterian or Levalloisian, help archaeologists and other anthropologists in understanding major trends in the human past. Anthropologists and geographers share approaches to culture regions as well, since mapping cultures is central to both sciences. By making comparisons across cultural traditions (time-based) and cultural regions (space-based), anthropologists have developed various kinds of comparative method, a central part of their science.
Commonalities between fields
Because anthropology developed from so many different enterprises (see History of anthropology), including but not limited to fossil-hunting, exploring, documentary film-making, paleontology, primatology, antiquity dealings and curatorship, philology, etymology, genetics, regional analysis, ethnology, history, philosophy, and religious studies, it is difficult to characterize the entire field in a brief article, although attempts to write histories of the entire field have been made.
Some authors argue that anthropology originated and developed as the study of "other cultures", both in terms of time (past societies) and space (non-European/non-Western societies). For example, Ulf Hannerz, in the introduction to his seminal Exploring the City: Inquiries Toward an Urban Anthropology, a classic of urban anthropology, mentions that the "Third World" had habitually received most of the attention; anthropologists who traditionally specialized in "other cultures" looked for them far away and started to look "across the tracks" only in the late 1960s.
Now there exist many works focusing on peoples and topics very close to the author's "home". It is also argued that other fields of study, like history and sociology, by contrast focus disproportionately on the West.
In France, the study of Western societies has been traditionally left to sociologists, but this is increasingly changing, starting in the 1970s from scholars like Isac Chiva and journals like Terrain ("fieldwork") and developing with the center founded by Marc Augé (Le Centre d'anthropologie des mondes contemporains, the Anthropological Research Center of Contemporary Societies).
Since the 1980s it has become common for social and cultural anthropologists to set ethnographic research in the North Atlantic region, frequently examining the connections between locations rather than limiting research to a single locale. There has also been a related shift toward broadening the focus beyond the daily life of ordinary people; increasingly, research is set in settings such as scientific laboratories, social movements, governmental and nongovernmental organizations and businesses.
See also
Christian anthropology, a sub-field of theology
Philosophical anthropology, a sub-field of philosophy
Lists
Notes
References
Works cited
Further reading
Dictionaries and encyclopedias
Fieldnotes and memoirs
Histories
Textbooks and key theoretical works
External links
Open Encyclopedia of Anthropology.
Anthropological Index Online (AIO)
Behavioural sciences
Humans
Anthropocene
The Anthropocene was a rejected proposal for a geological epoch following the Holocene, dating from the commencement of significant human impact on Earth up to the present day. This impact affects Earth's oceans, geology, geomorphology, landscape, limnology, hydrology, ecosystems and climate. The effects of human activities on Earth can be seen, for example, in biodiversity loss and climate change. Various start dates for the Anthropocene have been proposed, ranging from the beginning of the Neolithic Revolution (12,000–15,000 years ago) to as recently as the 1960s. The biologist Eugene F. Stoermer is credited with first coining and using the term anthropocene informally in the 1980s; Paul J. Crutzen re-invented and popularized the term. However, in 2024 the International Commission on Stratigraphy (ICS) and the International Union of Geological Sciences (IUGS) rejected the Anthropocene Epoch proposal for inclusion in the Geologic Time Scale, sparking significant disagreement from scientists working in the field.
The Anthropocene Working Group (AWG) of the Subcommission on Quaternary Stratigraphy (SQS) of the ICS voted in April 2016 to proceed towards a formal golden spike (GSSP) proposal to define the Anthropocene epoch in the geologic time scale. The group presented the proposal to the International Geological Congress in August 2016.
In May 2019, the AWG voted in favour of submitting a formal proposal to the ICS by 2021. The proposal located potential stratigraphic markers to the mid-20th century. This time period coincides with the start of the Great Acceleration, a post-World War II time period during which global population growth, pollution and exploitation of natural resources have all increased at a dramatic rate. The Atomic Age also started around the mid-20th century, when the risks of nuclear wars, nuclear terrorism and nuclear accidents increased.
Twelve candidate sites were selected for the GSSP; the sediments of Crawford Lake, Canada, were finally proposed, in July 2023, to mark the lower boundary of the Anthropocene, starting with the Crawfordian stage/age in 1950.
In March 2024, after 15 years of deliberation, the Anthropocene Epoch proposal of the AWG was voted down by a wide margin by the SQS, owing largely to its shallow sedimentary record and extremely recent proposed start date. The ICS and the IUGS later formally confirmed, by a near unanimous vote, the rejection of the AWG's Anthropocene Epoch proposal for inclusion in the Geologic Time Scale. The IUGS statement on the rejection concluded: "Despite its rejection as a formal unit of the Geologic Time Scale, Anthropocene will nevertheless continue to be used not only by Earth and environmental scientists, but also by social scientists, politicians and economists, as well as by the public at large. It will remain an invaluable descriptor of human impact on the Earth system."
Development of the concept
An early concept for the Anthropocene was the Noosphere by Vladimir Vernadsky, who in 1938 wrote of "scientific thought as a geological force". Scientists in the Soviet Union appear to have used the term Anthropocene as early as the 1960s to refer to the Quaternary, the most recent geological period.
Ecologist Eugene F. Stoermer subsequently used Anthropocene with a different sense in the 1980s and the term was widely popularised in 2000 by atmospheric chemist Paul J. Crutzen, who regards the influence of human behavior on Earth's atmosphere in recent centuries as so significant as to constitute a new geological epoch.
The term Anthropocene is informally used in scientific contexts. The Geological Society of America entitled its 2011 annual meeting: Archean to Anthropocene: The past is the key to the future. The new epoch has no agreed start-date, but one proposal, based on atmospheric evidence, is to fix the start with the Industrial Revolution (around 1780), with the invention of the steam engine. Other scientists link the new term to earlier events, such as the rise of agriculture and the Neolithic Revolution (around 12,000 years BP).
Evidence of relative human impact – such as the growing human influence on land use, ecosystems, biodiversity, and species extinction – is substantial; scientists think that human impact has significantly changed (or halted) the growth of biodiversity. Those arguing for earlier dates posit that the proposed Anthropocene may have begun as early as 14,000–15,000 years BP, based on geologic evidence; this has led other scientists to suggest that "the onset of the Anthropocene should be extended back many thousand years"; this would make the Anthropocene essentially synonymous with the current term, Holocene.
Anthropocene Working Group
In 2008, the Stratigraphy Commission of the Geological Society of London considered a proposal to make the Anthropocene a formal unit of geological epoch divisions. A majority of the commission decided the proposal had merit and should be examined further. Independent working groups of scientists from various geological societies began to determine whether the Anthropocene would be formally accepted into the Geological Time Scale.
In January 2015, 26 of the 38 members of the International Anthropocene Working Group published a paper suggesting the Trinity test on 16 July 1945 as the starting point of the proposed new epoch. However, a significant minority supported one of several alternative dates. A March 2015 report suggested either 1610 or 1964 as the beginning of the Anthropocene. Other scholars pointed to the diachronous character of the physical strata of the Anthropocene, arguing that onset and impact are spread out over time, not reducible to a single instant or date of start.
A January 2016 report on the climatic, biological, and geochemical signatures of human activity in sediments and ice cores suggested the era since the mid-20th century should be recognised as a geological epoch distinct from the Holocene.
The Anthropocene Working Group met in Oslo in April 2016 to consolidate evidence supporting the argument for the Anthropocene as a true geologic epoch. Evidence was evaluated and the group voted to recommend Anthropocene as the new geological epoch in August 2016.
In April 2019, the Anthropocene Working Group (AWG) announced that they would vote on a formal proposal to the International Commission on Stratigraphy, to continue the process started at the 2016 meeting. In May 2019, 29 members of the 34-person AWG panel voted in favour of an official proposal to be made by 2021. The AWG also voted with 29 votes in favour of a starting date in the mid-20th century. Ten candidate sites for a Global Boundary Stratotype Section and Point were identified, one of which would be chosen for inclusion in the final proposal. Possible markers included microplastics, heavy metals, and radionuclides left by tests of thermonuclear weapons.
In November 2021, an alternative proposal that the Anthropocene is a geological event, not an epoch, was published and later expanded in 2022. This challenged the assumption underlying the case for the Anthropocene epoch: the idea that it is possible to accurately assign a precise start date to highly diachronous processes of human-influenced Earth system change. The argument indicated that finding a single GSSP would be impractical, given that human-induced changes in the Earth system occurred at different periods, in different places, and at different rates. Under this model, the Anthropocene would have many events marking human-induced impacts on the planet, including the mass extinction of large vertebrates, the development of early farming, land clearance in the Americas, global-scale industrial transformation during the Industrial Revolution, and the start of the Atomic Age. The authors are members of the AWG who had voted against the official proposal of a starting date in the mid-20th century, and sought to reconcile some of the previous models (including the Ruddiman and Maslin proposals). They cited Crutzen's original concept, arguing that the Anthropocene is much better and more usefully conceived of as an unfolding geological event, like other major transformations in Earth's history such as the Great Oxidation Event.
In July 2023, the AWG chose Crawford Lake in Ontario, Canada as a site representing the beginning of the proposed new epoch. The sediment in that lake shows a spike in levels of plutonium from hydrogen bomb tests, a key marker the group chose to place the start of the Anthropocene in the 1950s, along with other elevated markers including carbon particles and nitrates from the burning of fossil fuels and widespread application of chemical fertilizers respectively. Had it been approved, the official declaration of the new Anthropocene epoch would have taken place in August 2024, and its first age may have been named Crawfordian after the lake.
Rejection in 2024 vote by IUGS
In March 2024, the New York Times reported on the results of an internal vote held by the IUGS: after nearly 15 years of debate, the proposal to ratify the Anthropocene had been defeated by a 12-to-4 margin, with 2 abstentions. The result reflected not a dismissal of human impact on the planet, but an inability to constrain the Anthropocene in a geological context: the widely adopted 1950 start date was found to be prone to recency bias, and it overshadowed earlier examples of human impacts, many of which happened in different parts of the world at different times. Although the proposal could be raised again, this would require the entire process of debate to start from the beginning. The results of the vote were officially confirmed by the IUGS and upheld as definitive later that month.
Proposed starting point
Industrial Revolution
Crutzen proposed the Industrial Revolution as the start of the Anthropocene. Lovelock proposed that the Anthropocene began with the first application of the Newcomen atmospheric engine in 1712. The Intergovernmental Panel on Climate Change takes the pre-industrial era (chosen as the year 1750) as the baseline related to changes in long-lived, well-mixed greenhouse gases. Although it is apparent that the Industrial Revolution ushered in an unprecedented global human impact on the planet, much of Earth's landscape had already been profoundly modified by human activities. The human impact on Earth has grown progressively, with few substantial slowdowns. A 2024 scientific perspective paper authored by a group of scientists led by William J. Ripple proposed the start of the Anthropocene around 1850, stating it is a "compelling choice . . . from a population, fossil fuel, greenhouse gasses, temperature, and land use perspective."
Mid 20th century (Great Acceleration)
In May 2019, twenty-nine members of the Anthropocene Working Group (AWG) proposed a start date for the epoch in the mid-20th century, as that period saw "a rapidly rising human population accelerated the pace of industrial production, the use of agricultural chemicals and other human activities. At the same time, the first atomic-bomb blasts littered the globe with radioactive debris that became embedded in sediments and glacial ice, becoming part of the geologic record." The official start-date, according to the panel, would coincide either with the radionuclides released into the atmosphere from bomb detonations in 1945, or with the Limited Nuclear Test Ban Treaty of 1963.
First atomic bomb (1945)
The peak in radionuclide fallout resulting from atomic bomb testing during the 1950s is another possible date for the beginning of the Anthropocene (the detonation of the first atomic bomb in 1945 or the Partial Nuclear Test Ban Treaty in 1963).
Etymology
The name Anthropocene is a combination of anthropo-, from the Ancient Greek anthropos meaning 'human', and -cene, from kainos meaning 'new' or 'recent'.
As early as 1873, the Italian geologist Antonio Stoppani acknowledged the increasing power and effect of humanity on the Earth's systems and referred to an 'anthropozoic era'.
Nature of human effects
Biodiversity loss
The human impact on biodiversity forms one of the primary attributes of the Anthropocene. Humankind has entered what is sometimes called the Earth's sixth major extinction. Most experts agree that human activities have accelerated the rate of species extinction. The exact rate remains controversial: perhaps 100 to 1,000 times the normal background rate of extinction.
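To make the scale of these multipliers concrete, here is a minimal arithmetic sketch. The background rate of about one extinction per million species-years and the figure of roughly two million described species are illustrative assumptions of the sketch, not values taken from this article.

```python
# Minimal sketch: expected extinctions per year under a 100x-1000x
# elevation of the background extinction rate. The background rate
# (~1 extinction per million species-years) and the species count
# (~2 million described species) are assumed illustrative values,
# not figures from this article.

background_rate = 1.0          # extinctions per million species-years
described_species = 2_000_000  # approximate number of described species

for multiplier in (100, 1_000):
    elevated_rate = background_rate * multiplier
    # extinctions per year = rate x (species / 1,000,000 species)
    per_year = elevated_rate * described_species / 1_000_000
    print(f"{multiplier:>5}x background -> ~{per_year:,.0f} extinctions per year")
```

Under these assumptions, a 100-fold elevation implies on the order of 200 extinctions per year, and a 1,000-fold elevation roughly 2,000, against an expected background of about two.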
Anthropogenic extinctions started as humans migrated out of Africa over 60,000 years ago. Global rates of extinction have been elevated above background rates since at least 1500, and appear to have accelerated in the 19th century and further since. Rapid economic growth is considered a primary driver of the contemporary displacement and eradication of other species.
According to the 2021 Economics of Biodiversity review, written by Partha Dasgupta and published by the UK government, "biodiversity is declining faster than at any time in human history." A 2022 scientific review published in Biological Reviews confirms that an anthropogenic sixth mass extinction event is currently underway. A 2022 study published in Frontiers in Ecology and the Environment, which surveyed more than 3,000 experts, states that the extinction crisis could be worse than previously thought, and estimates that roughly 30% of species "have been globally threatened or driven extinct since the year 1500." According to a 2023 study published in Biological Reviews some 48% of 70,000 monitored species are experiencing population declines from human activity, whereas only 3% have increasing populations.
Biogeography and nocturnality
Studies of urban evolution give an indication of how species may respond to stressors such as temperature change and toxicity. Species display varying abilities to respond to altered environments through both phenotypic plasticity and genetic evolution. Researchers have documented the movement of many species into regions formerly too cold for them, often at rates faster than initially expected.
Permanent changes in the distribution of organisms from human influence will become identifiable in the geologic record. This has occurred in part as a result of changing climate, but also in response to farming and fishing, and to the accidental introduction of non-native species to new areas through global travel. The ecosystem of the entire Black Sea may have changed during the last 2000 years as a result of nutrient and silica input from eroding deforested lands along the Danube River.
Researchers have found that the growth of the human population and expansion of human activity has resulted in many species of animals that are normally active during the day, such as elephants, tigers and boars, becoming nocturnal to avoid contact with humans, who are largely diurnal.
Climate change
One geological symptom resulting from human activity is increasing atmospheric carbon dioxide content. This signal in the Earth's climate system is especially significant because it is occurring much faster, and to a greater extent, than previously. Most of this increase is due to the combustion of fossil fuels such as coal, oil, and gas.
Geomorphology
Changes in drainage patterns traceable to human activity will persist over geologic time in large parts of the continents where the geologic regime is erosional. This involves, for example, the paths of roads and highways defined by their grading and drainage control. Direct changes to the form of the Earth's surface by human activities (quarrying and landscaping, for example) also record human impacts.
It has been suggested that the deposition of calthemite formations exemplifies a natural process which did not occur prior to the human modification of the Earth's surface, and which therefore represents a process unique to the Anthropocene. Calthemite is a secondary deposit, derived from concrete, lime, mortar or other calcareous material outside the cave environment. Calthemites grow on or under man-made structures (including mines and tunnels) and mimic the shapes and forms of cave speleothems such as stalactites, stalagmites, flowstone, etc.
Stratigraphy
Sedimentological record
Human activities like deforestation and road construction are believed to have elevated average total sediment fluxes across the Earth's surface. However, construction of dams on many rivers around the world means the rates of sediment deposition in any given place do not always appear to increase in the Anthropocene. For instance, many river deltas around the world are actually currently starved of sediment by such dams, and are subsiding and failing to keep up with sea level rise, rather than growing.
Fossil record
Increases in erosion due to farming and other operations will be reflected by changes in sediment composition and increases in deposition rates elsewhere. In land areas with a depositional regime, engineered structures will tend to be buried and preserved, along with litter and debris. Litter and debris thrown from boats or carried by rivers and creeks will accumulate in the marine environment, particularly in coastal areas, but also in mid-ocean garbage patches. Such human-created artifacts preserved in stratigraphy are known as "technofossils".
Changes in biodiversity will also be reflected in the fossil record, as will species introductions. An example cited is the domestic chicken, originally the red junglefowl Gallus gallus, which is native to south-east Asia but has since become the world's most common bird through human breeding and consumption, with over 60 billion consumed annually; its bones would become fossilised in landfill sites. Hence, landfills are important resources for finding "technofossils".
Trace elements
In terms of trace elements, there are distinct signatures left by modern societies. For example, in the Upper Fremont Glacier in Wyoming, there is a layer of chlorine present in ice cores from 1960s atomic weapon testing programs, as well as a layer of mercury associated with coal plants in the 1980s.
From the late 1940s, nuclear tests have led to local nuclear fallout and severe contamination of test sites, both on land and in the surrounding marine environment. Some of the radionuclides released during the tests are ¹³⁷Cs, ⁹⁰Sr, ²³⁹Pu, ²⁴⁰Pu, ²⁴¹Am, and ¹³¹I. These have been found to have had significant impact on the environment and on human beings. In particular, ¹³⁷Cs and ⁹⁰Sr have been found to have been released into the marine environment and to have led to bioaccumulation over time through food chain cycles. The carbon isotope ¹⁴C, commonly released during nuclear tests, has also been found to be integrated into atmospheric CO₂, infiltrating the biosphere through ocean–atmosphere gas exchange. The increase in thyroid cancer rates around the world is also surmised to be correlated with increasing proportions of the ¹³¹I radionuclide.
The highest global concentration of radionuclides was estimated to have been in 1965, one of the dates which has been proposed as a possible benchmark for the start of the formally defined Anthropocene.
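The suitability of different fallout nuclides as long-term stratigraphic markers, including the plutonium spike chosen at Crawford Lake, follows from simple exponential decay, N(t)/N₀ = (1/2)^(t/T½). The sketch below uses standard textbook half-lives, which are assumptions of this illustration rather than figures given in this article.

```python
# Minimal decay sketch: fraction of a fallout radionuclide remaining
# t years after deposition, N(t)/N0 = 0.5 ** (t / half_life).
# Half-life values are standard textbook figures, assumed here
# rather than taken from this article.

half_lives_years = {
    "I-131": 8.0 / 365.25,  # about 8 days
    "Sr-90": 28.8,
    "Cs-137": 30.17,
    "C-14": 5_730.0,
    "Pu-239": 24_110.0,
}

elapsed_years = 2024 - 1952  # years since early thermonuclear testing

for nuclide, t_half in half_lives_years.items():
    remaining = 0.5 ** (elapsed_years / t_half)
    print(f"{nuclide:>6}: {remaining:.3e} of the original activity remains")
```

On this arithmetic, the short-lived ¹³¹I signal disappears within weeks, and the ¹³⁷Cs and ⁹⁰Sr spikes fade over a few centuries, while roughly 99.8% of the ²³⁹Pu deposited in the early 1950s is still present; this is why a plutonium spike makes a durable stratigraphic marker.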
Human burning of fossil fuels has also left distinctly elevated concentrations of black carbon, inorganic ash, and spherical carbonaceous particles in recent sediments across the world. Concentrations of these components increase markedly and almost simultaneously around the world beginning around 1950.
Anthropocene markers
A marker that accounts for a substantial global impact of humans on the total environment, comparable in scale to those associated with significant perturbations of the geological past, is needed in place of minor changes in atmospheric composition.
A useful candidate for holding markers in the geologic time record is the pedosphere. Soils retain information about their climatic and geochemical history with features lasting for centuries or millennia. Human activity is now firmly established as the sixth factor of soil formation. Humanity affects pedogenesis directly by, for example, land levelling, trenching and embankment building, landscape-scale control of fire by early humans, organic matter enrichment from additions of manure or other waste, organic matter impoverishment due to continued cultivation and compaction from overgrazing. Human activity also affects pedogenesis indirectly by drift of eroded materials or pollutants. Anthropogenic soils are those markedly affected by human activities, such as repeated ploughing, the addition of fertilisers, contamination, sealing, or enrichment with artefacts (in the World Reference Base for Soil Resources they are classified as Anthrosols and Technosols). An example from archaeology would be dark earth phenomena when long-term human habitation enriches the soil with black carbon.
Anthropogenic soils are recalcitrant repositories of artefacts and properties that testify to the dominance of the human impact, and hence appear to be reliable markers for the Anthropocene. Some anthropogenic soils may be viewed as the 'golden spikes' of geologists (Global Boundary Stratotype Section and Point), which are locations where there are strata successions with clear evidence of a worldwide event, including the appearance of distinctive fossils. Drilling for fossil fuels has also created holes and tubes which are expected to be detectable for millions of years. The astrobiologist David Grinspoon has proposed that the site of the Apollo 11 lunar landing, with the disturbances and artifacts that are so uniquely characteristic of our species' technological activity and which will survive over geological time spans, could be considered the 'golden spike' of the Anthropocene.
An October 2020 study coordinated by the University of Colorado at Boulder found that distinct physical, chemical and biological changes to Earth's rock layers began around the year 1950. The research revealed that since about 1950, humans have doubled the amount of fixed nitrogen on the planet through industrial production for agriculture, created a hole in the ozone layer through the industrial-scale release of chlorofluorocarbons (CFCs), released enough greenhouse gases from fossil fuels to cause planetary-level climate change, created tens of thousands of synthetic mineral-like compounds that do not naturally occur on Earth, and caused almost one-fifth of river sediment worldwide to no longer reach the ocean due to dams, reservoirs and diversions. Humans have produced so many millions of tons of plastic each year since the early 1950s that microplastics are "forming a near-ubiquitous and unambiguous marker of Anthropocene". The study highlights a strong correlation between global human population size and growth, global productivity and global energy use, and notes that the "extraordinary outburst of consumption and productivity demonstrates how the Earth System has departed from its Holocene state since c. 1950 CE, forcing abrupt physical, chemical and biological changes to the Earth's stratigraphic record that can be used to justify the proposal for naming a new epoch—the Anthropocene."
A December 2020 study published in Nature found that the total anthropogenic mass, or human-made materials, outweighs all the biomass on earth, and highlighted that "this quantification of the human enterprise gives a mass-based quantitative and symbolic characterization of the human-induced epoch of the Anthropocene."
Debates
Although the validity of Anthropocene as a scientific term remains disputed, its underlying premise, i.e., that humans have become a geological force, or rather, the dominant force shaping the Earth's climate, has found traction among academics and the public. In an opinion piece for Philosophical Transactions of the Royal Society B, Rodolfo Dirzo, Gerardo Ceballos, and Paul R. Ehrlich write that the term is "increasingly penetrating the lexicon of not only the academic socio-sphere, but also society more generally", and is now included as an entry in the Oxford English Dictionary. The University of Cambridge, as another example, offers a degree in Anthropocene Studies. In the public sphere, the term Anthropocene has become increasingly ubiquitous in activist, pundit, and political discourses. Some who are critical of the term Anthropocene nevertheless concede that "For all its problems, [it] carries power." The popularity and currency of the word has led scholars to label the term a "charismatic meta-category" or "charismatic mega-concept." The term, regardless, has been subject to a variety of criticisms from social scientists, philosophers, Indigenous scholars, and others.
The anthropologist John Hartigan has argued that due to its status as a charismatic meta-category, the term Anthropocene marginalizes competing, but less visible, concepts such as that of "multispecies." The more salient charge is that the ready acceptance of Anthropocene is due to its conceptual proximity to the status quo – that is, to notions of human individuality and centrality.
Other scholars appreciate the way in which the term Anthropocene recognizes humanity as a geological force, but take issue with the indiscriminate way in which it does, since not all humans are equally responsible for the climate crisis. To that end, scholars such as the feminist theorist Donna Haraway and the sociologist Jason Moore have suggested naming the epoch instead the Capitalocene, implying that capitalism, rather than humans in general, is the fundamental cause of the ecological crisis. However, according to philosopher Steven Best, humans created "hierarchical and growth-addicted societies" and demonstrated "ecocidal proclivities" long before the emergence of capitalism. Hartigan, the critic Mark Bould, and Haraway all critique what Anthropocene does as a term; however, Hartigan and Bould differ from Haraway in that they criticize the utility or validity of a geological framing of the climate crisis, whereas Haraway embraces it.
In addition to "Capitalocene," other terms have also been proposed by scholars to trace the roots of the Epoch to causes other than the human species broadly. Janae Davis, for example, has suggested the "Plantationocene" as a more appropriate term to call attention to the role that plantation agriculture has played in the formation of the Epoch, alongside Kathryn Yusoff's argument that racism as a whole is foundational to the Epoch. The Plantationocene concept traces "the ways that plantation logics organize modern economies, environments, bodies, and social relations." In a similar vein, Indigenous studies scholars such as Métis geographer Zoe Todd have argued that the Epoch must be dated back to the colonization of the Americas, as this "names the problem of colonialism as responsible for contemporary environmental crisis." Potawatomi philosopher Kyle Powys Whyte has further argued that the Anthropocene has been apparent to Indigenous peoples in the Americas since the inception of colonialism because of "colonialism's role in environmental change."
Other critiques of Anthropocene have focused on the genealogy of the concept. Todd also provides a phenomenological account, which draws on the work of the philosopher Sara Ahmed, writing: "When discourses and responses to the Anthropocene are being generated within institutions and disciplines which are embedded in broader systems that act as de facto 'white public space,' the academy and its power dynamics must be challenged." Other aspects which constitute current understandings of the concept of the Anthropocene such as the ontological split between nature and society, the assumption of the centrality and individuality of the human, and the framing of environmental discourse in largely scientific terms have been criticized by scholars as concepts rooted in colonialism and which reinforce systems of postcolonial domination. To that end, Todd makes the case that the concept of Anthropocene must be indigenized and decolonized if it is to become a vehicle of justice as opposed to white thought and domination.
The scholar Daniel Wildcat, a Yuchi member of the Muscogee Nation of Oklahoma, for example, has emphasized spiritual connection to the land as a crucial tenet for any ecological movement. Similarly, in her study of the Ladakhi people in northern India, the anthropologist Karine Gagné, detailed their understanding of the relation between nonhuman and human agency as one that is deeply intimate and mutual. For the Ladakhi, the nonhuman alters the epistemic, ethical, and affective development of humans – it provides a way of "being in the world." The Ladakhi, who live in the Himalayas, for example, have seen the retreat of the glaciers not just as a physical loss, but also as the loss of entities which generate knowledge, compel ethical reflections, and foster intimacy. Other scholars have similarly emphasized the need to return to notions of relatedness and interdependence with nature. The writer Jenny Odell has written about what Robin Wall Kimmerer calls "species loneliness," the loneliness which occurs from the separation of the human and the nonhuman, and the anthropologist Radhika Govindrajan has theorized on the ethics of care, or relatedness, which govern relations between humans and animals. Scholars are divided on whether to do away with the term Anthropocene or co-opt it.
More recently, the eco-philosopher David Abram, in a book chapter titled "Interbreathing in the Humilocene", has proposed adoption of the term "Humilocene" (the Epoch of Humility), which emphasizes an ethical imperative and ecocultural direction that human societies should take. The term plays with the etymological roots of "human", connecting it back to terms such as humility, humus (the soil), and even a corrective sense of humiliation that some human societies should feel, given their collective destructive impact on the earth.
"Early anthropocene" model
William Ruddiman has argued that the Anthropocene began approximately 8,000 years ago with the development of farming and sedentary cultures. At that point, humans were dispersed across all continents except Antarctica, and the Neolithic Revolution was ongoing. During this period, humans developed agriculture and animal husbandry to supplement or replace hunter-gatherer subsistence. Such innovations were followed by a wave of extinctions, beginning with large mammals and terrestrial birds. This wave was driven by both the direct activity of humans (e.g. hunting) and the indirect consequences of land-use change for agriculture. Landscape-scale burning by prehistoric hunter-gatherers may have been an additional early source of anthropogenic atmospheric carbon. Ruddiman also claims that the greenhouse gas emissions in part responsible for the Anthropocene began 8,000 years ago when ancient farmers cleared forests to grow crops.
Ruddiman's work has been challenged with data from an earlier interglaciation ("Stage 11", approximately 400,000 years ago) which suggests that 16,000 more years must elapse before the current Holocene interglaciation comes to an end, and thus the early anthropogenic hypothesis is invalid. Also, the argument that "something" is needed to explain the differences in the Holocene is challenged by more recent research showing that all interglacials are different.
Homogenocene
Homogenocene (from Ancient Greek: homo-, same; geno-, kind; kainos, new) is a more specific term used to define our current epoch, in which biodiversity is diminishing and biogeography and ecosystems around the globe seem more and more similar to one another, mainly due to invasive species that have been introduced around the globe either on purpose (crops, livestock) or inadvertently. This reflects the globalism in which humans now participate: at no point in history has the movement of species across the world to new regions been as easy as it is today.
The term Homogenocene was first used by Michael Samways in his editorial article in the Journal of Insect Conservation from 1999 titled "Translocating fauna to foreign lands: Here comes the Homogenocene."
The term was used again by John L. Curnutt in the year 2000 in Ecology, in a short list titled "A Guide to the Homogenocene", which reviewed Alien species in North America and Hawaii: impacts on natural ecosystems by George Cox. Charles C. Mann, in his acclaimed book 1493: Uncovering the New World Columbus Created, gives a bird's-eye view of the mechanisms and ongoing implications of the homogenocene.
Society and culture
Humanities
The concept of the Anthropocene has also been approached via humanities such as philosophy, literature and art. In the scholarly world, it has been the subject of increasing attention through special journals, conferences, and disciplinary reports. The Anthropocene, its attendant timescale, and ecological implications prompt questions about death and the end of civilisation, memory and archives, the scope and methods of humanistic inquiry, and emotional responses to the "end of nature". Some scholars have posited that the realities of the Anthropocene, including "human-induced biodiversity loss, exponential increases in per-capita resource consumption, and global climate change," have made the goal of environmental sustainability largely unattainable and obsolete.
Historians have actively engaged the Anthropocene. In 2000, the same year that Paul Crutzen coined the term, world historian John McNeill published Something New Under the Sun, tracing the rise of human societies' unprecedented impact on the planet in the twentieth century. In 2001, historian of science Naomi Oreskes revealed the systematic efforts to undermine trust in climate change science and went on to detail the corporate interests delaying action on the environmental challenge. Both McNeill and Oreskes became members of the Anthropocene Working Group because of their work correlating human activities and planetary transformation.
Popular culture
In 2019, the English musician Nick Mulvey released a music video on YouTube named "In the Anthropocene". In cooperation with Sharp's Brewery, the song was recorded on 105 vinyl records made of washed-up plastic from the Cornish coast.
The Anthropocene Reviewed is a podcast and book by author John Green, where he "reviews different facets of the human-centered planet on a five-star scale".
Photographer Edward Burtynsky created "The Anthropocene Project" with Jennifer Baichwal and Nicholas de Pencier, which is a collection of photographs, exhibitions, a film, and a book. His photographs focus on landscape photography that captures the effects human beings have had on the earth.
In 2015, the American death metal band Cattle Decapitation released its seventh studio album titled The Anthropocene Extinction.
In 2020, Canadian musician Grimes released her fifth studio album titled Miss Anthropocene. The name is also a pun on the feminine title "Miss" and the words "misanthrope" and "Anthropocene."
See also
References
External links
The Anthropocene epoch: have we entered a new phase of planetary history?, The Guardian, 2019
Drawing A Line In The Mud: Scientists Debate When 'Age Of Humans' Began. NPR. 17 March 2021.
(lecture given by Professor Will Steffen in Melbourne, Australia)
8 billion humans: How population growth and climate change are connected as the 'Anthropocene engine' transforms the planet. The Conversation. 3 November 2022.
Holocene
Human impact on the environment
Human ecology
1960s neologisms
Events in the geological history of Earth
The Civilizing Process
The Civilizing Process is a book by the German sociologist Norbert Elias. It is an influential work in sociology and Elias' most important work. It was first published in two volumes in Basel, Switzerland, in 1939, in German, as Über den Prozeß der Zivilisation.
Because of World War II, it was virtually ignored, but it gained popularity when it was republished in 1969 and translated into English. Covering European history from roughly 800 AD to 1900 AD, it is the first formal analysis and theory of civilization. Elias proposes a double sociogenesis of the state: the social development of the state has two sides, mental and political. The civilising process that Elias describes results in a profound change in human behaviour, leading to the construction of the modern state and the transition of man from the warrior of the Middle Ages to the civil man of the end of the 19th century.
The Civilizing Process is today regarded as the founding work of figurational sociology. In 1998 the International Sociological Association listed the work as the seventh most important sociological book of the 20th century.
Themes
The first volume, The History of Manners, traces the historical developments of the European habitus, or "second nature", the particular individual psychic structures molded by social attitudes. Elias traced how post-medieval European standards regarding violence, sexual behaviour, bodily functions, table manners and forms of speech were gradually transformed by increasing thresholds of shame and repugnance, working outward from a nucleus in court etiquette. The internalized "self-restraint" imposed by increasingly complex networks of social connections developed the "psychological" self-perceptions that Freud recognized as the "super-ego".
The second volume, State Formation and Civilization, looks into the formation of the state and the theory of civilisation. First, Elias explains that, over time, a single social unit increased its control over military and fiscal power until it possessed a monopoly over them. The progressive monopolisation of the military and of taxation fed one another: political power used tax money to pay its army, and used the army to collect the taxes. Elias describes several steps in the creation of the state:
From the 11th to the 13th century: there was open competition between different houses, each fighting to maintain and extend its power. For instance, after the death of Charles IV of France (1328), France formed a powerful agglomeration of territories. However, one cannot yet speak of a coherent kingdom, because regional consciousness was still predominant and the interests of each territory and seigneury prevailed.
From the 14th to the 16th century: courts were progressively established, and vassals gathered around important lords. Feudalism became princely (it had been seigneurial before), because only the most powerful houses had maintained their power and extended it by taking over the territory of smaller houses.
After the 16th century: the royal house was victorious and held a monopoly of power. It created a central administration and institutions. Competition was now regulated: it took place peacefully within the state, for access to high positions in the administration.
At the end of the process, the state had been created and possessed the monopoly of legitimate physical violence. Elias also describes the "absolutist mechanism": the state became the supreme body coordinating the different interdependent groups of society.
In parallel with the sociogenesis of the state, Elias notes a change in the management of bodily functions. Individuals tried to repress in themselves whatever was perceived to be part of their animal nature, confining it to a sphere that emerged over time: intimacy. Hence new feelings appeared: embarrassment and prudishness.
Reception
A particular criticism of The Civilizing Process was formulated by the German ethnologist and cultural anthropologist Hans Peter Duerr in his five-volume Der Mythos vom Zivilisationsprozeß (1988–2002), who pointed out that plenty of social restrictions and regulations had existed in Western culture and elsewhere since long before the medieval period. Elias and his supporters responded that he had never intended to claim that social regulations or self-restraining psychological agents were institutions unique to Western modernity; rather, Western culture developed particularly sophisticated, concise, comprehensive, and rigid institutions, apparent for instance in its decisive technological advances when compared to other cultures.
English editions
The Civilizing Process, Vol. I: The History of Manners. Oxford: Blackwell, 1969.
The Civilizing Process, Vol. II: State Formation and Civilization. Oxford: Blackwell, 1982.
The Civilizing Process. Oxford: Blackwell, 1994.
The Civilizing Process: Sociogenetic and Psychogenetic Investigations. Revised edition. Oxford: Blackwell, 2000.
References
Sociology books
1939 non-fiction books
German non-fiction books | 0.784368 | 0.980704 | 0.769233 |
Social anthropology
Social anthropology is the study of patterns of behaviour in human societies and cultures. It is the dominant constituent of anthropology throughout the United Kingdom and much of Europe, where it is distinguished from cultural anthropology. In the United States, social anthropology is commonly subsumed within cultural anthropology or sociocultural anthropology.
Comparison with cultural anthropology
The term cultural anthropology is generally applied to ethnographic works that are holistic in spirit, are oriented to the ways in which culture affects individual experience, or aim to provide a rounded view of the knowledge, customs, and institutions of people. Social anthropology is a term applied to ethnographic works that attempt to isolate a particular system of social relations such as those that comprise domestic life, economy, law, politics, or religion, give analytical priority to the organizational bases of social life, and attend to cultural phenomena as somewhat secondary to the main issues of social scientific inquiry.
Topics of interest for social anthropologists have included customs, economic and political organization, law and conflict resolution, patterns of consumption and exchange, kinship and family structure, gender relations, childbearing and socialization, and religion. Present-day social anthropologists are also concerned with issues of globalism, ethnic violence, gender studies, transnationalism and local experience, and the emerging cultures of cyberspace; they can also help to bring opponents together when environmental concerns come into conflict with economic developments.
British and American anthropologists, including Gillian Tett and Karen Ho, who studied Wall Street, provided an alternative explanation for the financial crisis of 2007–2010 to the technical explanations rooted in economic and political theory.
Differences among British, French, and American sociocultural anthropologies have diminished with increasing dialogue and borrowing of both theory and methods. Social and cultural anthropologists, and some who integrate the two, are found in most institutes of anthropology. Thus the formal names of institutional units no longer necessarily reflect fully the content of the disciplines they cover. Some, such as the Institute of Social and Cultural Anthropology (Oxford), changed their name to reflect the change in composition; others, such as Social Anthropology at the University of Kent, became simply Anthropology. Most retain the name under which they were founded.
Long-term qualitative research, including intensive field studies (emphasizing participant observation methods), has been traditionally encouraged in social anthropology rather than quantitative analysis of surveys, questionnaires and brief field visits typically used by economists, political scientists, and (most) sociologists.
Comparison and intersection with cognitive anthropology
Cognitive anthropology studies how people represent and think about events and objects in the world. It links human thought processes with the physical and ideational aspects of culture. The scopes of the two disciplines intersect in the field of cognitive development. The following part of this section shows the significance of their joint research for understanding the processes that constitute society.
According to Sir Edward Tylor: "Culture, or civilization, taken in its broad, ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society." The cultural consensus principle is incorporated in the reasoning behind the cultural consonance model and other similar models (see cognitive anthropology) that seek to evaluate the effects of shared cognitive structures on social life and the human condition, beginning from the onset of cognitive development. Most concepts shared by social and cognitive anthropology (e.g., cultural consonance, cultural models, knowledge structures, shared knowledge) seem to rely upon broad, pervasive, unaware interactions between members of society. Research shows that unconscious remembering increases recall efficiency over time and yields greater confidence in the remembered thought. According to the received view in the cognitive sciences, cognition begins at birth (and even prenatally) owing to the motive forces of shared intentionality: unaware knowledge assimilation. Mechanisms of unaware interaction at the onset of life, one focus of research in the cognitive sciences, have therefore become a central research issue in social and cognitive anthropology.
Another intersection of the two disciplines appears in neuroscience research. Behavioral propensities (an exteriorization of cultural models, schemata, etc.; see key concepts of cognitive anthropology) are the product of biological and cultural factors that manifest in individual brain development, neural wiring, and neurochemical homeostasis. According to the received view in neuroscience, an observed human behavior, in any context, is the last event in a long chain of biological and cultural interactions. The brain's anatomy is subject to neuroplasticity and depends on both contextual (cultural) and historically dependent (previous experience) mechanisms to shape the neural system. By bridging sociology with anthropology and cognitive science perspectives, researchers can assess shared cultural knowledge: understanding the processes underlying unspoken social norms and beliefs, and studying the processes that shape the individual values which together constitute societies.
Focus and practice
Social anthropology is distinguished from subjects such as economics or political science by its holistic range and the attention it gives to the comparative diversity of societies and cultures across the world, and the capacity this gives the discipline to re-examine Euro-American assumptions. It is differentiated from sociology, both in its main methods (based on long-term participant observation and linguistic competence), and in its commitment to the relevance and illumination provided by micro studies. It extends beyond strictly social phenomena to culture, art, individuality, and cognition. Many social anthropologists use quantitative methods, too, particularly those whose research touches on topics such as local economies, demography, human ecology, cognition, or health and illness.
Specializations
Specializations within social anthropology shift as its objects of study are transformed and as new intellectual paradigms appear; musicology and medical anthropology are examples of current, well-defined specialities. More recent specializations include:
cognitive development – neuroscience research on neuroplasticity and the shared intentionality approach to the extended mind thesis: the anthropological analysis of ecological learning in cognitive development;
social and ethical understandings of novel technologies – the way anthropologists analyze everyday life, cultural reproduction, and human evolution;
kinship – emergent forms of "the family" and other new socialities modelled on kinship;
post-socialist crisis – the ongoing social fallout of the demise of state socialism;
the politics of resurgent religiosity; and
audit cultures – analysis of audit cultures and accountability.
The subject has been enlivened by, and has contributed to, approaches from other disciplines, such as philosophy (ethics, phenomenology, logic), the history of science, psychoanalysis, and linguistics.
Ethical considerations
The subject has both ethical and reflexive dimensions. Practitioners have developed an awareness of the sense in which scholars create their objects of study and the ways in which anthropologists themselves may contribute to processes of change in the societies they study. An example of this is the "Hawthorne effect", whereby those being studied may alter their behaviour in response to the knowledge that they are being watched and studied.
History
Social anthropology has historical roots in a number of 19th-century disciplines, including the study of Classics, ethnography, ethnology, folklore, linguistics, and sociology, among others. Its immediate precursor took shape in the work of Edward Burnett Tylor and James George Frazer in the late 19th century and underwent major changes in both method and theory during the period 1890–1920 with a new emphasis on original fieldwork, long-term holistic study of social behavior in natural settings, and the introduction of French and German social theory.
Polish anthropologist and ethnographer Bronisław Malinowski, one of the most important influences on British social anthropology, emphasized long-term fieldwork in which anthropologists work in the vernacular and immerse themselves in the daily practices of local people. This development was bolstered by Franz Boas' introduction of the concept of cultural relativism, arguing that cultures are based on different ideas about the world and can therefore only be properly understood in terms of their own standards and values.
Museums such as the British Museum were not the only site of anthropological studies; with the New Imperialism period starting in the 1870s, zoos became unattended "laboratories", especially the so-called "ethnological exhibitions" or "Negro villages". Thus, "savages" from the Americas, Africa, and Asia were displayed, often nude, in cages, in what has been termed "human zoos". In 1906, the Congolese pygmy Ota Benga was put by American anthropologist Madison Grant in a cage in the Bronx Zoo, labelled "the missing link" between an orangutan and the "White race"; Grant, a renowned eugenicist, was also the author of The Passing of the Great Race (1916). Such exhibitions were attempts to illustrate and prove in the same movement the validity of scientific racism, whose first formulation may be found in Arthur de Gobineau's An Essay on the Inequality of Human Races (1853–1855). In 1931, the Colonial Exhibition in Paris still displayed Kanaks from New Caledonia in the "indigenous village"; it received 24 million visitors in six months, demonstrating the popularity of such "human zoos".
Anthropology grew increasingly distinct from natural history, and by the end of the 19th century the discipline began to crystallize into its modern form; by 1935, for example, it was possible for T. K. Penniman to write a history of the discipline entitled A Hundred Years of Anthropology. At the time, the field was dominated by "the comparative method". It was assumed that all societies passed through a single evolutionary process from the most primitive to the most advanced. Non-European societies were thus seen as evolutionary "living fossils" that could be studied in order to understand the European past. Scholars wrote histories of prehistoric migrations, which were sometimes valuable but often also fanciful. It was during this time, for instance, that Europeans first accurately traced Polynesian migrations across the Pacific Ocean, although some believed the Polynesians had originated in Egypt. Finally, the concept of race was actively discussed as a way to classify, and rank, human beings based on difference.
Tylor and Frazer
Edward Burnett Tylor (1832–1917) and James George Frazer (1854–1941) are generally considered the antecedents to modern social anthropologists in Great Britain. Although the British anthropologist Tylor undertook a field trip to Mexico, both he and Frazer derived most of the material for their comparative studies through extensive reading, not fieldwork, mainly the Classics (literature and history of Ancient Greece and Rome), the work of the early European folklorists, and reports from missionaries, travelers, and contemporaneous ethnologists.
Tylor advocated strongly for unilinealism and a form of "uniformity of mankind". Tylor in particular laid the groundwork for theories of cultural diffusionism, stating that there are three ways that different groups can have similar cultural forms or technologies: "independent invention, inheritance from ancestors in a distant region, transmission from one race to another."
Tylor formulated one of the early and influential anthropological conceptions of culture as "that complex whole, which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by [humans] as [members] of society." However, as Stocking notes, Tylor mainly concerned himself with describing and mapping the distribution of particular elements of culture, rather than with the larger function, and he generally seemed to assume a Victorian idea of progress rather than the idea of non-directional, multilineal cultural change proposed by later anthropologists. Tylor also theorized about the origins of religious beliefs in human beings, proposing a theory of animism as the earliest stage, and noting that "religion" has many components, of which he believed the most important to be belief in supernatural beings (as opposed to moral systems, cosmology, etc.).
Frazer, a Scottish scholar with a broad knowledge of Classics, also concerned himself with the study of religion, mythology, and magic. His comparative studies, most influentially in the numerous editions of The Golden Bough, analyzed similarities in religious belief and symbolism globally. Neither Tylor nor Frazer, however, was particularly interested in fieldwork, nor in examining how the cultural elements and institutions fit together. The Golden Bough was abridged drastically in the editions subsequent to the first.
Malinowski and the British School
Toward the turn of the 20th century, a number of anthropologists became dissatisfied with this categorization of cultural elements; historical reconstructions also came to seem increasingly speculative to them. Under the influence of several younger scholars, a new approach came to predominate among British anthropologists, concerned with analyzing how societies held together in the present (synchronic analysis, rather than diachronic or historical analysis), and emphasizing long-term (one to several years) immersion fieldwork. Cambridge University financed a multidisciplinary expedition to the Torres Strait Islands in 1898, organized by Alfred Cort Haddon and including a physician-anthropologist, William Rivers, as well as a linguist, a botanist, and other specialists. The findings of the expedition set new standards for ethnographic description.
A decade and a half later, the Polish anthropology student Bronisław Malinowski (1884–1942) was beginning what he expected to be a brief period of fieldwork in the old model, collecting lists of cultural items, when the outbreak of the First World War stranded him in New Guinea. As a subject of the Austro-Hungarian Empire resident on a British colonial possession, he was effectively confined to New Guinea for several years.
He made use of the time by undertaking far more intensive fieldwork than anthropologists had done before him, and his classic ethnographical work, Argonauts of the Western Pacific (1922), advocated an approach to fieldwork that became standard in the field: getting "the native's point of view" through participant observation. Theoretically, he advocated a functionalist interpretation, which examined how social institutions functioned to satisfy individual needs.
1920s–1940
Modern social anthropology was founded in Britain at the London School of Economics and Political Science following World War I. Influences include both the methodological revolution pioneered by Bronisław Malinowski's process-oriented fieldwork in the Trobriand Islands of Melanesia between 1915 and 1918 and Alfred Radcliffe-Brown's theoretical program for systematic comparison, based on rigorous fieldwork and the structure-functionalist conception of Durkheim's sociology. Other intellectual founders include W. H. R. Rivers and A. C. Haddon, whose orientation reflected the contemporary Völkerpsychologie of Wilhelm Wundt and Adolf Bastian, and Sir E. B. Tylor, who defined anthropology as a positivist science following Auguste Comte. Edmund Leach (1962) defined social anthropology as a kind of comparative micro-sociology based on intensive fieldwork studies. Scholars have not settled on a theoretical orthodoxy regarding the nature of science and society, and tensions within the field reflect seriously opposed views.
A. R. Radcliffe-Brown also published a seminal work in 1922. He had carried out his initial fieldwork in the Andaman Islands in the old style of historical reconstruction. However, after reading the work of French sociologists Émile Durkheim and Marcel Mauss, Radcliffe-Brown published an account of his research (entitled simply The Andaman Islanders) that paid close attention to the meaning and purpose of rituals and myths. Over time, he developed an approach known as structural functionalism, which focused on how institutions in societies worked to balance out or create an equilibrium in the social system to keep it functioning harmoniously. His structuralist approach contrasted with Malinowski's functionalism, and was quite different from the later French structuralism, which examined the conceptual structures in language and symbolism.
Malinowski and Radcliffe-Brown's influence stemmed from the fact that they, like Boas, actively trained students and aggressively built up institutions that furthered their programmatic ambitions. This was particularly the case with Radcliffe-Brown, who spread his agenda for "Social Anthropology" by teaching at universities across the British Empire and Commonwealth. From the late 1930s until the postwar period appeared a string of monographs and edited volumes that cemented the paradigm of British Social Anthropology (BSA). Famous ethnographies include The Nuer, by Edward Evan Evans-Pritchard, and The Dynamics of Clanship Among the Tallensi, by Meyer Fortes; well-known edited volumes include African Systems of Kinship and Marriage and African Political Systems.
Post-World War II trends
Following World War II, sociocultural anthropology as comprised by the fields of ethnography and ethnology diverged into an American school of cultural anthropology, while social anthropology diversified in Europe by challenging the principles of structure-functionalism, absorbing ideas from Claude Lévi-Strauss's structuralism and from the followers of Max Gluckman, and embracing the study of conflict, change, urban anthropology, and networks. Gluckman, together with many of his colleagues at the Rhodes-Livingstone Institute and students at Manchester University, collectively known as the Manchester School, took BSA in new directions through their introduction of explicitly Marxist-informed theory, their emphasis on conflicts and conflict resolution, and their attention to the ways in which individuals negotiate and make use of the social structural possibilities. During this period Gluckman was also involved in a dispute with the American anthropologist Paul Bohannan over ethnographic methodology within the anthropological study of law. Gluckman believed that indigenous terms used in ethnographic data should be translated into Anglo-American legal terms for the benefit of the reader. The Association of Social Anthropologists of the UK and Commonwealth was founded in 1946.
In Britain, anthropology had a great intellectual impact: it "contributed to the erosion of Christianity, the growth of cultural relativism, an awareness of the survival of the primitive in modern life, and the replacement of diachronic modes of analysis with synchronic, all of which are central to modern culture." Later, in the 1960s and 1970s, Edmund Leach and his students Mary Douglas and Nur Yalman, among others, introduced French structuralism in the style of Claude Lévi-Strauss.
In countries of the British Commonwealth, social anthropology has often been institutionally separate from physical anthropology and primatology, which may be connected with departments of biology and zoology; and from archaeology, which may be connected with departments of Classics, Egyptology, Oriental studies, and the like. In other countries (and in some, particularly smaller, British and North American universities), anthropologists have also found themselves institutionally linked with scholars of cultural studies, ethnic studies, folklore, human geography, museum studies, sociology, social relations, and social work. British anthropology has continued to emphasize social organization and economics over purely symbolic or literary topics.
1980s to present
The European Association of Social Anthropologists (EASA) was founded in 1989 as a society of scholarship at a meeting of founder members from fourteen European countries, supported by the Wenner-Gren Foundation for Anthropological Research. The Association seeks to advance anthropology in Europe by organizing biennial conferences and by editing its academic journal, Social Anthropology/Anthropologie Sociale. Departments of Social Anthropology at different universities have tended to focus on disparate aspects of the field, and can be found in several universities around the world. The field of social anthropology has expanded in ways not anticipated by its founders, as for example in the subfield of structure and dynamics.
Anthropologists associated with social anthropology
Andre Beteille
Aleksandar Boskovic
Edmund Snow Carpenter
Philippe Descola
Mary Douglas
Thomas Hylland Eriksen
E. E. Evans-Pritchard
Raymond Firth
Rosemary Firth
Meyer Fortes
Ernest Gellner
Stephen D. Glazier
Jack Goody
David Graeber
Don Kalb
Adam Kuper
Edmund Leach
Murray Leaf
Claude Lévi-Strauss
David MacDougall
Judith MacDougall
Alan Macfarlane
Bronisław Malinowski
Siegfried Frederick Nadel
A.H.J. Prins
Alfred Radcliffe-Brown
Juan Mauricio Renold
Audrey Richards
Victor Turner
Marshall Sahlins
Marilyn Strathern
Hebe Vessuri
Susan Visvanathan
Douglas R. White
Eric Wolf
Robert Layton
See also
Cultural anthropology
Ethnology
Ethnosemiotics
List of important publications in anthropology
Rajamandala
Notes
References
Benchmark Statement Anthropology (UK)
Further reading
Malinowski, Bronislaw (1915): The Trobriand Islands
Malinowski, Bronislaw (1922): Argonauts of the Western Pacific
Malinowski, Bronislaw (1929): The Sexual Life of Savages in North-Western Melanesia
Malinowski, Bronislaw (1935): Coral Gardens and Their Magic: A Study of the Methods of Tilling the Soil and of Agricultural Rites in the Trobriand Islands
Leach, Edmund (1954): Political systems of Highland Burma. London: G. Bell.
Leach, Edmund (1982): Social Anthropology
Eriksen, Thomas H. (1985): pp. 926–929 in The Social Science Encyclopedia
Kuper, Adam (1996):
External links
The Moving Anthropology Student Network (MASN) - website offering tutorials, information on the subject, discussion forums, and a large link collection for all interested scholars of social anthropology
Western imperialism in Asia
The influence and imperialism of Western Europe and associated states (such as Russia, Japan, and the United States) peaked in Asian territories during the colonial period that began in the 16th century and substantially receded with 20th-century decolonization. It originated in the 15th-century search for alternative trade routes to the Indian subcontinent and Southeast Asia in response to Ottoman control of the Silk Road, a search that led directly to the Age of Discovery and to the introduction of early modern warfare into what Europeans first called the East Indies and later the Far East. By the early 16th century, the Age of Sail greatly expanded Western European influence and the development of the spice trade under colonialism. European-style colonial empires and imperialism operated in Asia throughout six centuries of colonialism, formally ending with the handover of the Portuguese Empire's last colony, Macau, in 1999. The empires introduced Western concepts of nation and the multinational state. This article attempts to outline the consequent development of the Western concept of the nation state.
European political power, commerce, and culture in Asia gave rise to growing trade in commodities—a key development in the rise of today's modern world free market economy. In the 16th century, the Portuguese broke the (overland) monopoly of the Arabs and Italians in trade between Asia and Europe by the discovery of the sea route to India around the Cape of Good Hope. The ensuing rise of the rival Dutch East India Company gradually eclipsed Portuguese influence in Asia. Dutch forces first established independent bases in the East (most significantly Batavia, the heavily fortified headquarters of the Dutch East India Company) and then between 1640 and 1660 wrested Malacca, Ceylon, some southern Indian ports, and the lucrative Japan trade from the Portuguese. Later, the English and the French established settlements in India and trade with China and their acquisitions would gradually surpass those of the Dutch. Following the end of the Seven Years' War in 1763, the British eliminated French influence in India and established the British East India Company (founded in 1600) as the most important political force on the Indian subcontinent.
Before the Industrial Revolution in the mid-to-late 19th century, demand for oriental goods such as porcelain, silk, spices, and tea remained the driving force behind European imperialism. The Western European stake in Asia remained confined largely to trading stations and strategic outposts necessary to protect trade. Industrialization, however, dramatically increased European demand for Asian raw materials; with the severe Long Depression of the 1870s provoking a scramble for new markets for European industrial products and financial services in Africa, the Americas, Eastern Europe, and especially in Asia. This scramble coincided with a new era in global colonial expansion known as "the New Imperialism", which saw a shift in focus from trade and indirect rule to formal colonial control of vast overseas territories ruled as political extensions of their mother countries. Between the 1870s and the beginning of World War I in 1914, the United Kingdom, France, and the Netherlands—the established colonial powers in Asia—added to their empires vast expanses of territory in the Middle East, the Indian Subcontinent, and Southeast Asia. In the same period, the Empire of Japan, following the Meiji Restoration; the German Empire, following the end of the Franco-Prussian War in 1871; Tsarist Russia; and the United States, following the Spanish–American War in 1898, quickly emerged as new imperial powers in East Asia and in the Pacific Ocean area.
In Asia, World War I and World War II were played out as struggles among several key imperial powers, with conflicts involving the European powers along with Russia and the rising American and Japanese powers. None of the colonial powers, however, possessed the resources to withstand the strains of both World Wars and maintain their direct rule in Asia. Although nationalist movements throughout the colonial world led to the political independence of nearly all of Asia's remaining colonies, decolonization was intercepted by the Cold War. Southeast Asia, South Asia, the Middle East, and East Asia remained embedded in a world economic, financial, and military system in which the great powers competed to extend their influence. However, the rapid post-war economic development and rise of the industrialized developed countries of the Republic of China on Taiwan, Singapore, South Korea, and Japan, and of the developing countries of India and the People's Republic of China with its autonomous territory of Hong Kong, along with the collapse of the Soviet Union, greatly diminished Western European influence in Asia. The United States remains influential, with trade and military bases in Asia.
Early European exploration of Asia
European exploration of Asia started in ancient Roman times along the Silk Road. The Romans had knowledge of lands as distant as China. Trade with India through the Roman Egyptian Red Sea ports was significant in the first centuries of the Common Era.
Medieval European exploration of Asia
In the 13th and 14th centuries, a number of Europeans, many of them Christian missionaries, sought to penetrate into China. The most famous of these travelers was Marco Polo. But these journeys had little permanent effect on east–west trade because of a series of political developments in Asia in the last decades of the 14th century, which put an end to further European exploration of Asia. The Yuan dynasty in China, which had been receptive to European missionaries and merchants, was overthrown, and the new Ming rulers proved unreceptive to religious proselytism. Meanwhile, the Ottoman Turks consolidated control over the eastern Mediterranean, closing off key overland trade routes. Thus, until the 15th century, only minor trade and cultural exchanges between Europe and Asia continued at certain terminals controlled by Muslim traders.
Oceanic voyages to Asia
Western European rulers determined to find new trade routes of their own. The Portuguese spearheaded the drive to find oceanic routes that would provide cheaper and easier access to South and East Asian goods. This charting of oceanic routes between East and West began with the unprecedented voyages of Portuguese and Spanish sea captains. Their voyages were influenced by medieval European adventurers who had journeyed overland to the Far East and contributed to geographical knowledge of parts of Asia upon their return.
In 1488, Bartolomeu Dias rounded the southern tip of Africa under the sponsorship of Portugal's John II, from which point he noticed that the coast swung northeast (the Cape of Good Hope). While Dias' crew forced him to turn back, in 1497 the Portuguese navigator Vasco da Gama embarked on the first open-sea voyage from Europe to India. In 1520, Ferdinand Magellan, a Portuguese navigator in the service of the Crown of Castile ('Spain'), found a sea route into the Pacific Ocean.
Portuguese and Spanish trade and colonization in Asia
Portuguese monopoly over trade in the Indian Ocean and Asia
In 1509, the Portuguese under Francisco de Almeida won the decisive Battle of Diu against a joint Mamluk and Arab fleet sent to expel the Portuguese from the Arabian Sea. The victory enabled Portugal to implement its strategy of controlling the Indian Ocean.
Early in the 16th century, Afonso de Albuquerque emerged as the Portuguese colonial viceroy most instrumental in consolidating Portugal's holdings in Africa and in Asia. He understood that Portugal could wrest commercial supremacy from the Arabs only by force, and therefore devised a plan to establish forts at strategic sites which would dominate the trade routes and also protect Portuguese interests on land. In 1510, he conquered Goa in India, which enabled him to gradually consolidate control of most of the commercial traffic between Europe and Asia, largely through trade; Europeans started to carry on trade from forts, acting as foreign merchants rather than as settlers. In contrast, early European expansion in the "West Indies" (later understood by Europeans as a separate continent from Asia that they would call the "Americas"), following the 1492 voyage of Christopher Columbus, involved heavy settlement in colonies that were treated as political extensions of the mother countries.
Lured by the potential of high profits from another expedition, the Portuguese established a permanent base in Cochin, south of the Indian trade port of Calicut in the early 16th century. In 1510, the Portuguese, led by Afonso de Albuquerque, seized Goa on the coast of India, which Portugal held until 1961, along with Diu and Daman (the remaining territory and enclaves in India from a former network of coastal towns and smaller fortified trading ports added and abandoned or lost centuries before). The Portuguese soon acquired a monopoly over trade in the Indian Ocean.
Portuguese viceroy Albuquerque (1509–1515) resolved to consolidate Portuguese holdings in Africa and Asia, and secure control of trade with the East Indies and China. His first objective was Malacca, which controlled the narrow strait through which most Far Eastern trade moved. Captured in 1511, Malacca became the springboard for further eastward penetration, starting with the voyage of António de Abreu and Francisco Serrão in 1512, ordered by Albuquerque, to the Moluccas. Years later, the first trading posts were established in the Moluccas, or "Spice Islands", the source of some of the world's most hotly demanded spices; from there, other, smaller posts followed in Makassar and the Lesser Sunda Islands. By 1513–1516, the first Portuguese ships had reached Canton on the southern coasts of China.
In 1513, after the failed attempt to conquer Aden, Albuquerque entered the Red Sea with an armada, the first time Europeans had done so by ocean; and in 1515, Albuquerque consolidated Portuguese hegemony over the gates of the Persian Gulf, already begun by him in 1507, with the control of Muscat and Ormuz. Shortly after, other fortified bases and forts were annexed and built along the Gulf, and in 1521, through a military campaign, the Portuguese annexed Bahrain.
The Portuguese conquest of Malacca triggered the Malayan–Portuguese war. In 1521, Ming dynasty China defeated the Portuguese at the Battle of Tunmen and then defeated them again at the Battle of Xicaowan. The Portuguese tried to establish trade with China by smuggling illegally alongside pirates on the offshore islands off the coasts of Zhejiang and Fujian, but they were driven away by the Ming navy in the 1530s–1540s.
In 1557, China decided to lease Macau to the Portuguese as a place where they could dry goods transported on their ships; the Portuguese held it until 1999. The Portuguese, based at Goa and Malacca, had now established a lucrative maritime empire in the Indian Ocean meant to monopolize the spice trade. The Portuguese also opened a channel of trade with the Japanese, becoming the first recorded Westerners to visit Japan. This contact introduced Christianity and firearms into Japan.
In 1505 (or possibly earlier, in 1501), the Portuguese, through Lourenço de Almeida, the son of Francisco de Almeida, reached Ceylon. The Portuguese founded a fort at the city of Colombo in 1517 and gradually extended their control over the coastal areas and inland. In a series of military conflicts and political maneuvers, the Portuguese extended their control over the Sinhalese kingdoms, including Jaffna (1591), Raigama (1593), Sitawaka (1593), and Kotte (1594). However, the aim of unifying the entire island under Portuguese control faced the Kingdom of Kandy's fierce resistance. The Portuguese, led by Pedro Lopes de Sousa, launched a full-scale military invasion of the kingdom of Kandy in the Campaign of Danture of 1594. The invasion was a disaster for the Portuguese, with their entire army wiped out by Kandyan guerrilla warfare. Constantino de Sá, romantically celebrated in a 17th-century Sinhalese epic (also for his greater humanism and tolerance compared to other governors), led the last military operation, which also ended in disaster. He died in the Battle of Randeniwela, refusing to abandon his troops in the face of total annihilation.
The energies of Castile (later, the unified Spain), the other major colonial power of the 16th century, were largely concentrated on the Americas, not South and East Asia, but the Spanish did establish a footing in the Far East in the Philippines. After fighting the Portuguese over the Spice Islands from 1522, and following the agreement between the two powers in 1529 (the Treaty of Zaragoza), the Spanish, led by Miguel López de Legazpi, gradually settled and conquered the Philippines from 1564. After the discovery of the return voyage to the Americas by Andrés de Urdaneta in 1565, cargoes of Chinese goods were transported from the Philippines to Mexico and from there to Spain. By this long route, Spain reaped some of the profits of Far Eastern commerce. Spanish officials converted the islands to Christianity and established some settlements, permanently establishing the Philippines as the area of East Asia most oriented toward the West in terms of culture and commerce. The Moro Muslims fought against the Spanish for over three centuries in the Spanish–Moro conflict.
Decline of Portugal's Asian empire since the 17th century
The lucrative trade was vastly expanded when the Portuguese began to export slaves from Africa in 1541; however, over time, the rise of the slave trade left Portugal over-extended, and vulnerable to competition from other Western European powers. Envious of Portugal's control of trade routes, other Western European nations—mainly the Netherlands, France, and England—began to send in rival expeditions to Asia. In 1642, the Dutch drove the Portuguese out of the Gold Coast in Africa, the source of the bulk of Portuguese slave laborers, leaving this rich slaving area to other Europeans, especially the Dutch and the English.
Rival European powers began to make inroads in Asia as the Portuguese and Spanish trade in the Indian Ocean declined primarily because they had become hugely over-stretched financially due to the limitations on their investment capacity and contemporary naval technology. Both of these factors worked in tandem, making control over Indian Ocean trade extremely expensive.
The existing Portuguese interests in Asia proved sufficient to finance further colonial expansion and entrenchment in areas regarded as of greater strategic importance in Africa and Brazil. Portuguese maritime supremacy was lost to the Dutch in the 17th century, and with this came serious challenges for the Portuguese. However, they still clung to Macau and settled a new colony on the island of Timor. It was not until the 1960s and 1970s that the Portuguese began to relinquish their colonies in Asia. Goa was invaded by India in 1961 and became an Indian state in 1987; Portuguese Timor was abandoned in 1975 and was then invaded by Indonesia, becoming an independent country in 2002; and Macau was handed back to the Chinese under a treaty in 1999.
Holy wars
The arrival of the Portuguese and Spanish and their holy wars against Muslim states in the Malayan–Portuguese war, the Spanish–Moro conflict, and the Castilian War inflamed religious tensions and turned Southeast Asia into an arena of conflict between Muslims and Christians. The Brunei Sultanate's capital at Kota Batu was assaulted by Governor Sande, who led the 1578 Spanish attack.
The Spanish word cafres, meaning "savages", came from the Arabic word for "infidel", kafir, and was used by the Spanish to refer to their own "Christian savages" who were arrested in Brunei. After the Spaniards attacked Islam as an "accursed doctrine", the Brunei Sultan said of the Castilians that they were kafir, men who have no souls, condemned to fire when they die, not least because they eat pork; such exchanges fed the hatred between Muslims and Christians sparked by their 1571 war against Brunei. The Sultan's words were in response to insults coming from the Spanish at Manila in 1578. Other Muslims from Champa, Java, Borneo, Luzon, Pahang, Demak, Aceh, and the Malay lands echoed the rhetoric of holy war against the Spanish and the Iberian Portuguese, calling them kafir enemies, in contrast to their earlier, more nuanced views of the Portuguese in the Hikayat Tanah Hitu and the Sejarah Melayu. The war by Spain against Brunei was defended in an apologia written by Doctor De Sande. The British eventually partitioned and took over Brunei, while Sulu was attacked by the British, Americans, and Spanish, which caused its breakdown and downfall after both sultanates had thrived for four centuries, from 1500 to 1900. The Atjehnese, led by Zayn al-din, and Muslims in the Philippines saw the Spanish invasion as an invasion of Dar al-Islam by "kafirs", since the Spanish brought the idea of a crusader holy war against the Muslim Moros, just as the Portuguese did in Indonesia and India against those they called "Moors", viewing their political and commercial conquests through the lens of religion in the 16th century.
In 1578, the Spanish launched an attack against Jolo; in 1875 it was destroyed at their hands, and in 1974 it was destroyed once again by the Philippines. The Spanish first set foot on Borneo in Brunei.
The Spanish war against Brunei failed to conquer Brunei but totally cut the Philippines off from Brunei's influence; the Spanish then started colonizing Mindanao and building fortresses. In response to the Spanish attacks on Mindanao, the Magindanao launched retaliatory attacks in 1599–1600 on the Bisayas, where Spanish forces were stationed.
The Brunei royal family was related to the Muslim rajahs who ruled the principality of Manila (the Kingdom of Maynila) in 1570, and this was what the Spaniards came across on their initial arrival in Manila. Spain uprooted Islam from areas where it was shallowly rooted after it began to force Christianity on the Philippines in its conquests after 1521, although Islam was already widespread in the 16th-century Philippines. In the Cebu islands of the Philippines, the natives killed the Spanish fleet leader Magellan. Borneo's western coastal areas at Landak, Sukadana, and Sambas saw the growth of Muslim states in the sixteenth century; in the 15th century, the Bruneian king Maharaja Kama died and was buried at Nanking, the capital of China, during his visit to China with Zheng He's fleet.
The Spanish were expelled from Brunei in 1579 after their attack of 1578. Before the Spanish attack of 1597, Brunei had fifty thousand inhabitants.
During their first contact with China, the Portuguese undertook numerous aggressions and provocations. They believed they could mistreat the non-Christians because they themselves were Christians, and they acted in the name of their religion in committing crimes and atrocities. This resulted in the Battle of Xicaowan, where the local Chinese navy defeated the Portuguese and captured a fleet of their caravels.
Dutch trade and colonization in Asia
Rise of Dutch control over Asian trade in the 17th century
The Portuguese decline in Asia was accelerated by attacks on their commercial empire by the Dutch and the English, which began a global struggle over empire in Asia that lasted until the end of the Seven Years' War in 1763. The Dutch revolt against Spanish rule facilitated Dutch encroachment on the Portuguese monopoly over South and East Asian trade. The Dutch looked on Spain's trade and colonies as potential spoils of war. When the two crowns of the Iberian peninsula were joined in 1581, the Dutch felt free to attack Portuguese territories in Asia.
By the 1590s, a number of Dutch companies were formed to finance trading expeditions in Asia. Because competition lowered their profits, and because of the doctrines of mercantilism, in 1602 the companies united into a cartel and formed the Dutch East India Company, and received from the government the right to trade and colonize territory in the area stretching from the Cape of Good Hope eastward to the Strait of Magellan.
In 1605, armed Dutch merchants captured the Portuguese fort at Amboyna in the Moluccas, which was developed into the company's first secure base. Over time, the Dutch gradually consolidated control over the great trading ports of the East Indies. This control allowed the company to monopolise the world spice trade for decades. Their monopoly over the spice trade became complete after they drove the Portuguese from Malacca in 1641 and Ceylon in 1658.
Dutch East India Company colonies or outposts were later established in Atjeh (Aceh), 1667; Macassar, 1669; and Bantam, 1682. The company established its headquarters at Batavia (today Jakarta) on the island of Java. Outside the East Indies, the Dutch East India Company colonies or outposts were also established in Persia (Iran), Bengal (now Bangladesh and part of India), Mauritius (1638-1658/1664-1710), Siam (now Thailand), Guangzhou (Canton, China), Taiwan (1624–1662), and southern India (1616–1795).
Ming dynasty China defeated the Dutch East India Company in the Sino-Dutch conflicts. The Chinese first defeated and drove the Dutch out of the Pescadores in 1624. The Ming navy under Zheng Zhilong defeated the Dutch East India Company's fleet at the 1633 Battle of Liaoluo Bay. In 1662, Zheng Zhilong's son Zheng Chenggong (also known as Koxinga) expelled the Dutch from Taiwan after defeating them in the siege of Fort Zeelandia (see History of Taiwan). Further, the Dutch East India Company trade post on Dejima (1641–1857), an artificial island off the coast of Nagasaki, was for a long time the only place where Europeans could trade with Japan.
The Vietnamese Nguyễn lords defeated the Dutch in a naval battle in 1643.
The Cambodians defeated the Dutch in the Cambodian–Dutch War in 1644.
In 1652, Jan van Riebeeck established an outpost at the Cape of Good Hope (the southwestern tip of Africa, currently in South Africa) to restock company ships on their journey to East Asia. This post later became a fully-fledged colony, the Cape Colony (1652–1806). As Cape Colony attracted increasing Dutch and European settlement, the Dutch founded the city of Kaapstad (Cape Town).
By 1669, the Dutch East India Company was the richest private company in history, with a huge fleet of merchant ships and warships, tens of thousands of employees, a private army consisting of thousands of soldiers, and a reputation on the part of its stockholders for high dividend payments.
Dutch New Imperialism in Asia
The company was in almost constant conflict with the English; relations were particularly tense following the Amboyna Massacre in 1623. During the 18th century, Dutch East India Company possessions were increasingly focused on the East Indies. After the fourth war between the United Provinces and England (1780–1784), the company suffered increasing financial difficulties. In 1799, the company was dissolved, commencing official colonisation of the East Indies. During the era of New Imperialism, the territorial claims of the Dutch East India Company (VOC) expanded into a fully fledged colony named the Dutch East Indies. Partly driven by the renewed colonial aspirations of fellow European nation states, the Dutch strove to establish unchallenged control of the archipelago now known as Indonesia.
Six years into the formal colonisation of the East Indies, the Dutch Republic in Europe was occupied by the French forces of Napoleon. The Dutch government went into exile in England and formally ceded its colonial possessions to Great Britain. The pro-French Governor-General of Java, Jan Willem Janssens, resisted a British invasion force in 1811 until forced to surrender. The British governor Raffles, who later founded the city of Singapore, ruled the colony for the following 10 years of the British interregnum (1806–1816).
After the defeat of Napoleon and the Anglo-Dutch Treaty of 1814, colonial government of the East Indies was ceded back to the Dutch in 1817. The loss of South Africa and the continued scramble for Africa stimulated the Dutch to secure unchallenged dominion over their colony in the East Indies. The Dutch started to consolidate their power base through extensive military campaigns and elaborate diplomatic alliances with indigenous rulers, ensuring the Dutch tricolor was firmly planted in all corners of the archipelago. These military campaigns included the Padri War (1821–1837), the Java War (1825–1830), and the Aceh War (1873–1904). This raised the need for a considerable military buildup of the colonial army (KNIL), for which soldiers were recruited from all over Europe.
The Dutch concentrated their colonial enterprise in the Dutch East Indies (Indonesia) throughout the 19th century. The Dutch lost control over the East Indies to the Japanese during much of World War II. Following the war, the Dutch fought Indonesian independence forces after Japan surrendered to the Allies in 1945. In 1949, most of what was known as the Dutch East Indies was ceded to the independent Republic of Indonesia. In 1962, Dutch New Guinea was also annexed by Indonesia, de facto ending Dutch imperialism in Asia.
British in India
Portuguese, French, and British competition in India (1600–1763)
The English sought to stake out claims in India at the expense of the Portuguese dating back to the Elizabethan era. In 1600, Queen Elizabeth I incorporated the English East India Company (later the British East India Company), granting it a monopoly of trade from the Cape of Good Hope eastward to the Strait of Magellan. In 1639, it acquired Madras on the east coast of India, where it quickly surpassed Portuguese Goa as the principal European trading centre on the Indian Subcontinent.
Through bribes, diplomacy, and manipulation of weak native rulers, the company prospered in India, where it became the most powerful political force, and outrivaled its Portuguese and French competitors. For more than one hundred years, English and French trading companies had fought one another for supremacy, and, by the middle of the 18th century, competition between the British and the French had heated up. French defeat by the British under the command of Robert Clive during the Seven Years' War (1756–1763) marked the end of the French stake in India.
Collapse of Mughal India
Although still in direct competition with French and Dutch interests until 1763, the British East India Company, following its subjugation of Bengal at the 1757 Battle of Plassey, made great advances at the expense of the Mughal Empire.
The reign of Aurangzeb had marked the height of Mughal power. By 1690 Mughal territorial expansion reached its greatest extent encompassing the entire Indian Subcontinent. But this period of power was followed by one of decline. Fifty years after the death of Aurangzeb, the great Mughal empire had crumbled. Meanwhile, marauding warlords, nobles, and others bent on gaining power left the Subcontinent increasingly anarchic. Although the Mughals kept the imperial title until 1858, the central government had collapsed, creating a power vacuum.
From Company to Crown
Aside from defeating the French during the Seven Years' War, Robert Clive, the leader of the East India Company in India, defeated Siraj ud-Daulah, a key Indian ruler of Bengal, at the decisive Battle of Plassey (1757), a victory that ushered in a new period in Indian history: informal British rule, under which the Mughal emperor remained nominally the sovereign. The transition to formal imperialism, characterized by Queen Victoria being crowned "Empress of India" in the 1870s, was a gradual process. The first step toward cementing formal British control extended back to the late 18th century. The British Parliament, disturbed by the idea that a great business concern, interested primarily in profit, was controlling the destinies of millions of people, passed acts in 1773 and 1784 that gave itself the power to control company policies.
The East India Company then fought a series of Anglo-Mysore Wars in southern India with the Sultanate of Mysore under Hyder Ali and then Tipu Sultan. Defeat in the First Anglo-Mysore War and stalemate in the Second were followed by victories in the Third and the Fourth. Following Tipu Sultan's death in the fourth war, in the Siege of Seringapatam (1799), the kingdom became a protectorate of the company.
The East India Company fought three Anglo-Maratha Wars with the Maratha Confederacy. The First Anglo-Maratha War ended in 1782 with a restoration of the pre-war status quo. The Second and Third Anglo-Maratha Wars resulted in British victories. After the surrender of Peshwa Bajirao II in 1818, the East India Company acquired control of a large majority of the Indian Subcontinent.
Until 1858, however, much of India was still officially the dominion of the Mughal emperor. Anger among some social groups was seething under the governor-generalship of James Dalhousie (1847–1856), who annexed the Punjab (1849) after victory in the Second Sikh War, annexed seven princely states using the doctrine of lapse, annexed the key state of Oudh on the basis of misgovernment, and upset cultural sensibilities by banning Hindu practices such as sati.
The 1857 Indian Rebellion, an uprising initiated by Indian troops, called sepoys, who formed the bulk of the company's armed forces, was the key turning point. Rumour had spread among them that their bullet cartridges were lubricated with pig and cow fat. The cartridges had to be bitten open, which upset the Hindu and Muslim soldiers: the Hindu religion held cows sacred, and for Muslims pork was considered haraam. In one camp, 85 out of 90 sepoys would not accept the cartridges from their garrison officer. The British harshly punished those who refused, jailing them. The Indian people were outraged, and on May 10, 1857, sepoys marched to Delhi and, with the help of soldiers stationed there, captured it. Fortunately for the British, many areas remained loyal and quiescent, allowing the revolt to be crushed after fierce fighting. One important consequence of the revolt was the final collapse of the Mughal dynasty. The mutiny also ended the system of dual control under which the British government and the British East India Company shared authority. The government relieved the company of its political responsibilities, and in 1858, after 258 years of existence, the company relinquished its role. Trained civil servants were recruited from graduates of British universities, and these men set out to rule India. Lord Canning (created earl in 1859), appointed Governor-General of India in 1856, became known as "Clemency Canning" as a term of derision for his efforts to restrain revenge against the Indians during the Indian Mutiny. When the Government of India was transferred from the company to the Crown, Canning became the first viceroy of India.
The Company initiated the first of the Anglo-Burmese Wars in 1824, which led to the total annexation of Burma by the Crown in 1885. The British ruled Burma as a province of British India until 1937, then administered it separately under the Burma Office, except during the Japanese occupation of Burma, 1942–1945, until Burma was granted independence on 4 January 1948. (Unlike India, Burma opted not to join the Commonwealth of Nations.)
Rise of Indian nationalism
The denial of equal status to Indians was the immediate stimulus for the formation in 1885 of the Indian National Congress, initially loyal to the Empire but committed from 1905 to increased self-government and by 1930 to outright independence. The "Home charges", payments transferred from India for administrative costs, were a lasting source of nationalist grievance, though the flow declined in relative importance over the decades to independence in 1947.
Although majority Hindu and minority Muslim political leaders were able to collaborate closely in their criticism of British policy into the 1920s, British support from 1906 for a distinct Muslim political organisation, the Muslim League, and insistence from the 1920s on separate electorates for religious minorities, are seen by many in India as having contributed to Hindu-Muslim discord and the country's eventual Partition.
France in Indochina
France, which had lost its empire to the British by the end of the 18th century, had little geographical or commercial basis for expansion in Southeast Asia. After the 1850s, French imperialism was initially impelled by a nationalistic need to rival the United Kingdom and was supported intellectually by the notion that French culture was superior to that of the people of Annam (Vietnam), and by its mission civilisatrice, the "civilizing mission" of assimilating the Annamese to French culture and the Catholic religion. The pretext for French expansionism in Indochina was the protection of French religious missions in the area, coupled with a desire to find a southern route to China through Tonkin, the European name for a region of northern Vietnam.
French religious and commercial interests were established in Indochina as early as the 17th century, but no concerted effort at stabilizing the French position was possible in the face of British strength in the Indian Ocean and French defeat in Europe at the beginning of the 19th century. A mid-19th-century religious revival under the Second Empire provided the atmosphere within which interest in Indochina grew. Anti-Christian persecutions in the Far East provided the pretext for the bombardment of Tourane (Danang) in 1847, and the invasion and occupation of Danang in 1857 and Saigon in 1858. Under Napoleon III, France, concerned that its trade with China would be surpassed by the British, joined the British against China in the Second Opium War from 1857 to 1860, and occupied parts of Vietnam as its gateway to China.
By the Treaty of Saigon, signed on June 5, 1862, the Vietnamese emperor ceded France three provinces of southern Vietnam to form the French colony of Cochinchina; France also secured trade and religious privileges in the rest of Vietnam and a protectorate over Vietnam's foreign relations. Gradually French power spread through exploration, the establishment of protectorates, and outright annexations. The French seizure of Hanoi in 1882 led directly to war with China (1883–1885), and the French victory confirmed French supremacy in the region. France governed Cochinchina as a direct colony, central and northern Vietnam as the protectorates of Annam and Tonkin, and Cambodia as a protectorate in one degree or another. Laos too was soon brought under French "protection".
By the beginning of the 20th century, France had created an empire in Indochina nearly 50 percent larger than the mother country. A Governor-General in Hanoi ruled Cochinchina directly and the other regions through a system of residents. Theoretically, the French maintained the precolonial rulers and administrative structures in Annam, Tonkin, Cochinchina, Cambodia, and Laos, but in fact the governor-generalship was a centralised fiscal and administrative regime ruling the entire region. Although the surviving native institutions were preserved in order to make French rule more acceptable, they were almost completely deprived of any independence of action. The ethnocentric French colonial administrators sought to assimilate the upper classes into France's "superior culture." While the French improved public services and provided commercial stability, the native standard of living declined and precolonial social structures eroded. Indochina, which had a population of over eighteen million in 1914, was important to France for its tin, pepper, coal, cotton, and rice. It is still a matter of debate, however, whether the colony was commercially profitable.
Russia and the "Great Game"
Tsarist Russia is not often regarded as a colonial power like the United Kingdom or France because of the manner of Russian expansion: unlike the United Kingdom, which expanded overseas, the Russian Empire grew from the centre outward by a process of accretion, like the United States. In the 19th century, Russian expansion took the form of a struggle by an effectively landlocked country for access to a warm-water port.
Historian Michael Khodarkovsky describes Tsarist Russia as a "hybrid empire" that combined elements of continental and colonial empires.
While the British were consolidating their hold on India, Russian expansion had moved steadily eastward to the Pacific, then toward the Caucasus and Central Asia. In the early 19th century, Russia succeeded in conquering the South Caucasus and Dagestan from Qajar Iran following the Russo-Persian War (1804–1813), the Russo-Persian War (1826–1828) and the resulting treaties of Gulistan and Turkmenchay, giving Russia direct borders with the heartlands of both Persia and Ottoman Turkey. The Russians eventually reached the frontiers of Afghanistan as well, which had the largest foreign border adjacent to British holdings in India. In response to Russian expansion, the defense of India's land frontiers and the control of all sea approaches to the subcontinent via the Suez Canal, the Red Sea, and the Persian Gulf became preoccupations of British foreign policy in the 19th century. This rivalry was called the Great Game.
According to Kazakh scholar Kereihan Amanzholov, Russian colonialism had "no essential difference with the colonialist policies of Britain, France, and other European powers".
Anglo-Russian rivalry in the Middle East and Central Asia led to a brief confrontation over Afghanistan in the 1870s. In Persia, both nations set up banks to extend their economic influence. The United Kingdom went so far as to invade Tibet, a land subordinate to the Chinese Qing Empire, in 1904, but withdrew when it became clear that Russian influence was insignificant and when Chinese and Tibetan resistance proved tougher than expected.
Qing China defeated Russia in the early Sino-Russian border conflicts, although the Russian Empire later acquired Outer Manchuria in the Amur Annexation during the Second Opium War. During the Boxer Rebellion, the Russian Empire invaded Manchuria in 1900, and the Blagoveshchensk massacre occurred against Chinese residents on the Russian side of the border.
In 1907, the United Kingdom and Russia signed an agreement that, on the surface, ended their rivalry in Central Asia (see Anglo-Russian Convention). As part of the entente, Russia agreed to deal with the sovereign of Afghanistan only through British intermediaries. In turn, the United Kingdom would not annex or occupy Afghanistan. Chinese suzerainty over Tibet was also recognised by both Russia and the United Kingdom, since nominal control by a weak China was preferable to control by either power. Persia was divided into Russian and British spheres of influence and an intervening "neutral" zone. The United Kingdom and Russia chose to reach these uneasy compromises because of growing concern on the part of both powers over German expansion in strategic areas of China and Africa.
Following the entente, Russia increasingly intervened in Persian domestic politics and suppressed nationalist movements that threatened both Saint Petersburg and London. After the Russian Revolution, Russia gave up its claim to a sphere of influence, though Soviet involvement persisted alongside the United Kingdom's until the 1940s.
In the Middle East, a German company built a railroad from Constantinople to Baghdad and the Persian Gulf in the Ottoman Empire, and another from the north of Persia to its south, connecting the Caucasus with the Persian Gulf. Germany wanted to gain economic influence in the region and then, perhaps, move on to India. This was met with bitter resistance by the United Kingdom, Russia, and France, who divided the region among themselves.
Western European and Russian intrusions into China
The 16th century brought many Jesuit missionaries to China, such as Matteo Ricci, who established missions where Western science was introduced, and where Europeans gathered knowledge of Chinese society, history, culture, and science. During the 18th century, merchants from Western Europe came to China in increasing numbers. However, merchants were confined to Guangzhou and the Portuguese colony of Macau, as they had been since the 16th century. European traders were increasingly irritated by what they saw as the relatively high customs duties they had to pay and by the attempts to curb the growing import trade in opium. By 1800, its importation was forbidden by the imperial government. However, the opium trade continued to boom.
Early in the 19th century, serious internal weaknesses developed in the Qing dynasty that left China vulnerable to Western European, Meiji-period Japanese, and Russian imperialism. In 1839, China found itself fighting the First Opium War with Britain. China was defeated, and in 1842 signed the Treaty of Nanking, the first of the unequal treaties signed during the Qing dynasty. Hong Kong Island was ceded to Britain, and certain ports, including Shanghai and Guangzhou, were opened to British trade and residence. In 1856, the Second Opium War broke out. The Chinese were again defeated and forced to accept the terms of the 1858 Treaty of Tientsin. The treaty opened new ports to trade and allowed foreigners to travel in the interior. In addition, Christians gained the right to propagate their religion. The United States (in the Treaty of Wanghia) and Russia later obtained the same prerogatives in separate treaties.
Toward the end of the 19th century, China appeared to be on the way to territorial dismemberment and economic vassalage, the fate that India's rulers had suffered much earlier. Several provisions of these treaties caused long-standing bitterness and humiliation among the Chinese: extraterritoriality (meaning that in a dispute with a Chinese person, a Westerner had the right to be tried in a court under the laws of his own country), customs regulation, and the right to station foreign warships in Chinese waters, including its navigable rivers.
Jane E. Elliott criticized the allegation that China refused to modernize or was unable to defeat Western armies as simplistic, noting that China embarked on a massive military modernization in the late 1800s after several defeats, buying weapons from Western countries and manufacturing its own at arsenals such as the Hanyang Arsenal during the Boxer Rebellion. In addition, Elliott questioned the claim that Chinese society was traumatized by the Western victories, as many Chinese peasants (90% of the population at that time) living outside the concessions went about their daily lives, uninterrupted and without any feeling of "humiliation".
Historians have judged the Qing dynasty's vulnerability and weakness to foreign imperialism in the 19th century to be based mainly on its maritime naval weakness, whereas it achieved military success against Westerners on land. The historian Edward L. Dreyer wrote that "China's nineteenth-century humiliations were strongly related to her weakness and failure at sea. At the start of the Opium War, China had no unified navy and no sense of how vulnerable she was to attack from the sea; British forces sailed and steamed wherever they wanted to go... In the Arrow War (1856–1860), the Chinese had no way to prevent the Anglo-French expedition of 1860 from sailing into the Gulf of Zhili and landing as near as possible to Beijing. Meanwhile, new but not exactly modern Chinese armies suppressed the midcentury rebellions, bluffed Russia into a peaceful settlement of disputed frontiers in Central Asia, and defeated the French forces on land in the Sino-French War (1884–1885). But the defeat of the fleet, and the resulting threat to steamship traffic to Taiwan, forced China to conclude peace on unfavorable terms."
During the Sino-French War, Chinese and Vietnamese forces defeated the French at the Battle of Cầu Giấy (Paper Bridge), the Bắc Lệ ambush, the Battle of Phu Lam Tao, the Battle of Zhenhai, and the Battle of Tamsui in the Keelung Campaign, and in the last battle of the war, the Battle of Bang Bo (Zhennan Pass), which triggered the French retreat from Lạng Sơn and resulted in the collapse of Jules Ferry's government in the Tonkin Affair.
The Qing dynasty forced Russia to hand over disputed territory in Ili in the Treaty of Saint Petersburg (1881), in what was widely seen in the West as a diplomatic victory for the Qing. Russia acknowledged that Qing China potentially posed a serious military threat. Mass media in the West during this era portrayed China as a rising military power due to its modernization programs and as a major threat to the Western world, invoking fears that China would successfully conquer Western colonies like Australia.
The British observer Demetrius Charles de Kavanagh Boulger suggested a British-Chinese alliance to check Russian expansion in Central Asia.
During the Ili crisis, when Qing China threatened to go to war against Russia over the Russian occupation of Ili, the British officer Charles George Gordon was sent to China by Britain to advise it on military options should war break out.
The Russians observed the Chinese building up their arsenal of modern weapons during the Ili crisis; the Chinese bought thousands of rifles from Germany. In 1880, massive amounts of military equipment and rifles were shipped via boats to China from Antwerp, as China purchased torpedoes, artillery, and 260,260 modern rifles from Europe.
The Russian military observer D. V. Putiatia visited China in 1888 and found that in Northeastern China (Manchuria) along the Chinese-Russian border, the Chinese soldiers were potentially able to become adept at "European tactics" under certain circumstances, and the Chinese soldiers were armed with modern weapons like Krupp artillery, Winchester carbines, and Mauser rifles.
The Muslim Kirghiz in Chinese-controlled areas were given more benefits than those in Russian-controlled areas. Russian settlers fought against the Muslim nomadic Kirghiz, which led the Russians to believe that the Kirghiz would be a liability in any conflict against China. The Muslim Kirghiz were sure that in an upcoming war China would defeat Russia.
Russian sinologists, the Russian media, the threat of internal rebellion, the pariah status inflicted by the Congress of Berlin, and the negative state of the Russian economy all led Russia to concede and negotiate with China in St Petersburg, returning most of Ili to China.
The rise of Japan as an imperial power since the Meiji Restoration led to further subjugation of China. In a dispute over China's longstanding claim of suzerainty in Korea, war broke out between China and Japan, resulting in humiliating defeat for the Chinese. By the Treaty of Shimonoseki (1895), China was forced to recognize effective Japanese rule of Korea, and Taiwan was ceded to Japan until its recovery in 1945, at the end of World War II, by the Republic of China.
China's defeat at the hands of Japan was another trigger for future aggressive actions by Western powers. In 1897, Germany demanded and was given a set of exclusive mining and railroad rights in Shandong province. Russia obtained access to Dairen and Port Arthur and the right to build a railroad across Manchuria, thereby achieving complete domination over a large portion of northeastern China. The United Kingdom and France also received a number of concessions. At this time, much of China was divided up into "spheres of influence": Germany had influence in Jiaozhou (Kiaochow) Bay, Shandong, and the Yellow River valley; Russia had influence in the Liaodong Peninsula and Manchuria; the United Kingdom had influence in Weihaiwei and the Yangtze Valley; and France had influence in Guangzhou Bay and the provinces of Yunnan, Guizhou and Guangxi.
China continued to be divided up into these spheres until the United States, which had no sphere of influence, grew alarmed at the possibility of its businessmen being excluded from Chinese markets. In 1899, Secretary of State John Hay asked the major powers to agree to a policy of equal trading privileges. In 1900, several powers agreed to the U.S.-backed scheme, giving rise to the "Open Door" policy, denoting freedom of commercial access and non-annexation of Chinese territory. In any event, it was in the European powers' interest to have a weak but independent Chinese government. The privileges of the Europeans in China were guaranteed in the form of treaties with the Qing government. In the event that the Qing government totally collapsed, each power risked losing the privileges that it already had negotiated.
The erosion of Chinese sovereignty and seizures of land from Chinese by foreigners contributed to a spectacular anti-foreign outbreak in June 1900, when the "Boxers" (properly the society of the "righteous and harmonious fists") attacked foreigners around Beijing. The Imperial Court was divided into anti-foreign and pro-foreign factions, with the pro-foreign faction led by Ronglu and Prince Qing hampering any military effort by the anti-foreign faction led by Prince Duan and Dong Fuxiang. The Qing Empress Dowager ordered all diplomatic ties to be cut off and all foreigners to leave the legations in Beijing for Tianjin. The foreigners refused to leave. Fueled by entirely false reports that the foreigners in the legations had been massacred, the Eight-Nation Alliance decided to launch an expedition to Beijing to reach the legations, but they underestimated the Qing military. The Qing and the Boxers defeated the foreigners in the Seymour Expedition, forcing them to turn back at the Battle of Langfang. After the foreign attack on the Taku Forts, the Qing declared war against the foreigners. The Qing forces and the foreigners fought a fierce battle at the Battle of Tientsin before the foreigners could launch a second expedition. On their second attempt, the Gaselee Expedition, with a much larger force, the foreigners managed to reach Beijing and fight the Battle of Peking. British and French forces looted, plundered and burned the Old Summer Palace to the ground for the second time (the first time being in 1860, following the Second Opium War). German forces were particularly severe in exacting revenge for the killing of their ambassador, on the orders of Kaiser Wilhelm II, who held anti-Asian sentiments, while Russia tightened its hold on Manchuria in the northeast until its crushing defeat by Japan in the war of 1904–1905. The Qing court evacuated to Xi'an and threatened to continue the war against the foreigners, until the foreigners tempered their demands in the Boxer Protocol, promising that China would not have to give up any land and withdrawing their demands for the execution of Dong Fuxiang and Prince Duan.
The correspondent Douglas Story observed Chinese troops in 1907 and praised their abilities and military skill.
Extraterritorial jurisdiction was abandoned by the United Kingdom and the United States in 1943. Chiang Kai-shek forced the French to hand all their concessions back to Chinese control after World War II. Foreign political control over leased parts of China ended with the incorporation of Hong Kong and the small Portuguese territory of Macau into the People's Republic of China in 1997 and 1999 respectively.
U.S. imperialism in Asia
[Image: One of the New York Journal's most infamous cartoons, depicting Philippine-American War General Jacob H. Smith's order "Kill Everyone over Ten", from the front page on May 5, 1902.]
Some Americans in the 19th century advocated for the annexation of Taiwan from China. Taiwanese aborigines often attacked and massacred shipwrecked western sailors. In 1867, during the Rover incident, Taiwanese aborigines attacked shipwrecked American sailors, killing the entire crew. They subsequently defeated a retaliatory expedition by the American military and killed another American during the battle.
As the United States emerged as a new imperial power in the Pacific and Asia, one of the two oldest Western imperialist powers in the regions, Spain, was finding it increasingly difficult to maintain control of territories it had held in the regions since the 16th century. In 1896, a widespread revolt against Spanish rule broke out in the Philippines. Meanwhile, the recent string of U.S. territorial gains in the Pacific posed an even greater threat to Spain's remaining colonial holdings.
As the U.S. continued to expand its economic and military power in the Pacific, it declared war against Spain in 1898. During the Spanish–American War, U.S. Admiral Dewey destroyed the Spanish fleet at Manila and U.S. troops landed in the Philippines. Spain later agreed by treaty to cede the Philippines in Asia and Guam in the Pacific. In the Caribbean, Spain ceded Puerto Rico to the U.S. The war also marked the end of Spanish rule in Cuba, which was to be granted nominal independence but remained heavily influenced by the U.S. government and U.S. business interests. One year following its treaty with Spain, the U.S. occupied the small Pacific outpost of Wake Island.
The Filipinos, who assisted U.S. troops in fighting the Spanish, wished to establish an independent state and, on June 12, 1898, declared independence from Spain. In 1899, fighting between the Filipino nationalists and the U.S. broke out; it took the U.S. almost fifteen years to fully subdue the insurgency. The U.S. sent 70,000 troops and suffered thousands of casualties. The Filipino insurgents, however, suffered considerably higher casualties than the Americans. Most casualties in the war were civilians dying primarily from disease and famine.
U.S. counter-insurgency operations in rural areas often included scorched earth tactics which involved burning down villages and concentrating civilians into camps known as "protected zones". The execution of U.S. soldiers taken prisoner by the Filipinos led to disproportionate reprisals by American forces.
The Moro Muslims fought against the Americans in the Moro Rebellion.
In 1914, Dean C. Worcester, U.S. Secretary of the Interior for the Philippines (1901–1913), described "the regime of civilisation and improvement which started with American occupation and resulted in developing naked savages into cultivated and educated men". Nevertheless, some Americans, such as Mark Twain, deeply opposed American involvement in the Philippines, leading to the abandonment of attempts to construct a permanent U.S. naval base and to use the islands as an entry point to the Chinese market. In 1916, Congress guaranteed the independence of the Philippines by 1945.
World War I: changes in imperialism
World War I brought about the fall of several empires in Europe. This had repercussions around the world. The defeated Central Powers included Germany and the Ottoman Empire. Germany lost all of its colonies in Asia. German New Guinea, now part of Papua New Guinea, came under Australian administration. German possessions and concessions in China, including Qingdao, became the subject of controversy during the Paris Peace Conference when the Beiyang government in China agreed to cede these interests to Japan, to the anger of many Chinese people. Although the Chinese diplomats refused to sign the agreement, these interests were ceded to Japan with the support of the United States and the United Kingdom.
Turkey gave up her provinces: Syria, Palestine, and Mesopotamia (now Iraq) came under French and British control as League of Nations Mandates. The discovery of petroleum, first in Iran and then in the Arab lands in the interwar period, provided a new focus for activity on the part of the United Kingdom, France, and the United States (see History of the Middle East#World War I).
Japan
In 1641, all Westerners were thrown out of Japan. For the next two centuries, Japan was free from Western contact, except for at the port of Nagasaki, which Japan allowed Dutch merchant vessels to enter on a limited basis.
Japan's freedom from Western contact ended on 8 July 1853, when Commodore Matthew Perry of the U.S. Navy sailed a squadron of black-hulled warships into Edo (modern Tokyo) harbor. The Japanese told Perry to sail to Nagasaki, but he refused. Perry sought to present a letter from U.S. President Millard Fillmore to the emperor which demanded concessions from Japan. Japanese authorities responded that they could not present the letter directly to the emperor, but scheduled a meeting on 14 July with a representative of the emperor. On 14 July, the squadron sailed towards the shore, giving a demonstration of its cannons' firepower thirteen times. Perry landed with a large detachment of Marines and presented the emperor's representative with Fillmore's letter. Perry said he would return, and did so, this time with even more warships. The U.S. show of force led to Japan's concession to the Convention of Kanagawa on 31 March 1854. This treaty conferred extraterritoriality on American nationals, as well as opening further treaty ports beyond Nagasaki. It was followed by similar treaties with the United Kingdom, the Netherlands, Russia and France. These events made Japanese authorities aware that the country was lacking technologically and needed the strength of industrialism in order to keep its power. This realisation eventually led to a civil war and political reform known as the Meiji Restoration.
The Meiji Restoration of 1868 led to administrative overhaul, deflation and subsequent rapid economic development. Japan had limited natural resources of her own and sought both overseas markets and sources of raw materials, fuelling a drive for imperial conquest which began with the defeat of China in 1895.
Taiwan, ceded by Qing dynasty China, became the first Japanese colony. In 1899, Japan won agreements from the great powers to abandon extraterritoriality for their citizens, and an alliance with the United Kingdom established it in 1902 as an international power. Its spectacular defeat of Russia's navy in 1905 gave it the southern half of the island of Sakhalin; exclusive Japanese influence over Korea (propinquity); the former Russian lease of the Liaodong Peninsula with Port Arthur (Lüshunkou); and extensive rights in Manchuria (see the Russo-Japanese War).
The Empire of Japan and the Joseon dynasty in Korea formed bilateral diplomatic relations in 1876. China lost its suzerainty of Korea after defeat in the Sino-Japanese War in 1894. Russia also lost influence on the Korean peninsula with the Treaty of Portsmouth as a result of the Russo-Japanese war in 1904. The Joseon dynasty became increasingly dependent on Japan. Korea became a protectorate of Japan with the Japan–Korea Treaty of 1905. Korea was then de jure annexed to Japan with the Japan–Korea Treaty of 1910.
Japan was now one of the most powerful forces in the Far East, and in 1914 it entered World War I on the side of the Allies, seizing German-occupied Kiaochow and subsequently demanding Chinese acceptance of Japanese political influence and territorial acquisitions (the Twenty-One Demands, 1915). Mass protests in Peking in 1919, which sparked widespread Chinese nationalism, coupled with Allied (and particularly U.S.) opinion, led to Japan's abandonment of most of the demands and the return of Kiaochow to China in 1922; Japan had received the German territory under the Treaty of Versailles.
Tensions with China increased over the 1920s, and in 1931 the Japanese Kwantung Army, based in Manchuria, seized control of the region without authorization from Tokyo. Intermittent conflict with China led to full-scale war in mid-1937, drawing Japan toward an overambitious bid for Asian hegemony (the Greater East Asia Co-Prosperity Sphere), which ultimately led to defeat and the loss of all its overseas territories after World War II (see Japanese expansionism and Japanese nationalism).
After World War II
Decolonization and the rise of nationalism in Asia
In the aftermath of World War II, European colonies, controlling more than one billion people throughout the world, still ruled most of the Middle East, South East Asia, and the Indian Subcontinent. However, the image of European pre-eminence was shattered by the wartime Japanese occupations of large portions of British, French, and Dutch territories in the Pacific. The destabilisation of European rule led to the rapid growth of nationalist movements in Asia—especially in Indonesia, Malaya, Burma, and French Indochina (Vietnam, Cambodia, and Laos).
The war, however, only accelerated forces already in existence undermining Western imperialism in Asia. Throughout the colonial world, the processes of urbanisation and capitalist investment created professional merchant classes that emerged as new Westernised elites. While imbued with Western political and economic ideas, these classes increasingly grew to resent their unequal status under European rule.
British in India and the Middle East
In India, the westward movement of Japanese forces towards Bengal during World War II had led to major concessions on the part of British authorities to Indian nationalist leaders. In 1947, the United Kingdom, devastated by war and embroiled in an economic crisis at home, granted British India its independence as two nations: India and Pakistan. Burma (Myanmar) and Ceylon (Sri Lanka) also gained their independence from the United Kingdom the following year, in 1948. In the Middle East, the United Kingdom granted independence to Jordan in 1946, and two years later, in 1948, ended its mandate of Palestine, in which the independent nation of Israel was proclaimed.
Following the end of the war, nationalists in Indonesia demanded complete independence from the Netherlands. A brutal conflict ensued, and finally, in 1949, through United Nations mediation, the Dutch East Indies achieved independence, becoming the new nation of Indonesia. Dutch imperialism moulded this new multi-ethnic state comprising roughly 3,000 islands of the Indonesian archipelago with a population at the time of over 100 million.
The end of Dutch rule opened up latent tensions between the roughly 300 distinct ethnic groups of the islands, with the major ethnic fault line being between the Javanese and the non-Javanese.
Dutch New Guinea was under the Dutch administration until 1962 (see also West New Guinea dispute).
United States in Asia
In the Philippines, the U.S. remained committed to its previous pledges to grant the islands their independence, and the Philippines became the first of the Western-controlled Asian colonies to be granted independence after World War II. However, the Philippines remained under pressure to adopt a political and economic system similar to that of the U.S.
This aim was greatly complicated by the rise of new political forces. During the war, the Hukbalahap (People's Army), which had strong ties to the Communist Party of the Philippines (PKP), fought against the Japanese occupation of the Philippines and won strong popularity among many sectors of the Filipino working class and peasantry. In 1946, the PKP participated in elections as part of the Democratic Alliance. However, with the onset of the Cold War, its growing political strength drew a reaction from the ruling government and the United States, resulting in the repression of the PKP and its associated organizations. In 1948, the PKP began organizing an armed struggle against the government and continued U.S. military presence. In 1950, the PKP created the People's Liberation Army (Hukbong Mapagpalaya ng Bayan), which mobilized thousands of troops throughout the islands. The insurgency lasted until 1956, when the PKP gave up armed struggle.
In 1968, the PKP underwent a split, and in 1969 the Maoist faction of the PKP created the New People's Army. Maoist rebels re-launched an armed struggle against the government and the U.S. military presence in the Philippines, which continues to this day.
France in Indochina
Post-war resistance to French rule
France remained determined to retain its control of Indochina. However, in Hanoi in 1945, a broad front of nationalists and communists led by Ho Chi Minh declared an independent Democratic Republic of Vietnam, commonly referred to by Western outsiders as the Viet Minh regime. France, seeking to regain control of Vietnam, countered with a vague offer of self-government under French rule. France's offers were unacceptable to Vietnamese nationalists, and in December 1946 the Việt Minh launched a rebellion against the French authority governing the colonies of French Indochina. The first few years of the war involved a low-level rural insurgency against French authority. However, after the Chinese communists reached the northern border of Vietnam in 1949, the conflict turned into a conventional war between two armies equipped with modern weapons supplied by the United States and the Soviet Union. Meanwhile, France granted the State of Vietnam, based in Saigon, independence in 1949, while Laos and Cambodia received independence in 1953. The US recognized the regime in Saigon and provided military aid to the French war effort.
Meanwhile, in Vietnam, the French war against the Viet Minh continued for nearly eight years. The French were gradually worn down by guerrilla and jungle fighting. The turning point for France occurred at Dien Bien Phu in 1954, which resulted in the surrender of ten thousand French troops. Paris was forced to accept a political settlement that year at the Geneva Conference, which led to a precarious set of agreements regarding the future political status of Laos, Cambodia, and Vietnam.
List of European colonies in Asia
British colonies in East Asia, South Asia, and Southeast Asia:
British Burma (1824–1948, merged with India by the British from 1886 to 1937)
British Ceylon (1815–1948, now Sri Lanka)
British Hong Kong (1842–1997)
Colonial India (includes the territory of present-day India, Pakistan and Bangladesh)
Danish India (1696–1869)
Swedish Parangipettai (1733)
British India (1613–1947)
British East India Company (1757–1858)
British Raj (1858–1947)
Bhutan (1865–1947) (British protectorate)
Nepal (1816–1923) (British protectorate)
French colonies in South and Southeast Asia:
French India (1769–1954)
French Indochina (1887–1953), including:
French Laos (1893–1953)
French Cambodia (1863–1953)
Annam (French protectorate), Cochinchina, Tonkin (now Vietnam) (1883–1953)
Dutch, British, Spanish, Portuguese colonies and Russian territories in Asia:
Dutch India (1605–1825)
Dutch Bengal
Dutch Ceylon (1656–1796)
Portuguese Ceylon (1505–1658)
Dutch East Indies (now Indonesia) – Dutch colony from 1602 to 1949 (included Dutch New Guinea until 1962)
Portuguese India (1510–1961)
Portuguese Macau – Portuguese colony, the first European colony in China (1557–1999)
Portuguese Timor (1702–1975, now East Timor)
Malaya (now part of Malaysia):
Portuguese Malacca (1511–1641)
Dutch Malacca (1641–1824)
British Malaya, included:
Straits Settlements (1826–1946)
Federated Malay States (1895–1946)
Unfederated Malay States (1885–1946)
Federation of Malaya (under British rule, 1948–1963)
British Borneo (now part of Malaysia), including:
Labuan (1848–1946)
North Borneo (1882–1941)
Crown Colony of North Borneo (1946–1963)
Crown Colony of Sarawak (1946–1963)
Brunei
British Brunei (1888–1984) (British protectorate)
Russian Manchuria – ceded to Russian Empire through Treaty of Aigun (1858) and Treaty of Peking (1860)
Philippines:
Spanish Philippines (1565–1898; at 333 years, the third-longest European occupation in Asia)
Insular Government of the Philippine Islands and Commonwealth of the Philippines, United States colony (1898–1946)
Singapore – British colony (1819–1959)
Taiwan:
Spanish Formosa (1626–1642)
Dutch Formosa (1624–1662)
Bahrain
Portuguese Bahrain (1521–1602)
British Protectorate (1861–1971)
Iraq
Mandatory Iraq (1920–1932) (British protectorate)
Kingdom of Iraq (1932–1958)
Israel and Palestine
Mandatory Palestine (1920–1948) (British Mandate)
Jordan
Emirate of Transjordan (1921–1946) (British protectorate)
Kuwait
Sheikhdom of Kuwait (1899–1961) (British protectorate)
Lebanon and Syria
French Mandate for Syria and the Lebanon (1923–1946)
Oman
Portuguese Oman (1507–1650)
Muscat and Oman (1892–1971) (British protectorate)
Qatar
British protectorate of Qatar (1916–1971)
United Arab Emirates
Trucial States (1820–1971) (British protectorate)
Yemen
Aden Protectorate (1869–1963)
Colony of Aden (1937–1963)
Federation of South Arabia (1962–1967)
Protectorate of South Arabia (1963–1967)
Independent states
Afghanistan – declared independence in 1919 and was recognized as fully independent by the United Kingdom in the Anglo-Afghan Treaty of 1919
Emirate of Afghanistan (1879–1919) (British protected state)
China – independent, but within European spheres of influence, which were largely limited to the colonised ports, except for Manchuria.
Foreign concessions in China
Shanghai International Settlement (1863–1941)
Shanghai French Concession (1849–1943)
Concessions in Tianjin (1860–1947)
Iran – in Russian sphere of influence in the north and British in the south
Empire of Japan – a great power that had its own colonial empire, including Korea and Taiwan
Mongolia – in Russian sphere of influence and later Soviet controlled
Thailand – the only independent state in Southeast Asia, but bordered by a British sphere of influence in the north and south and French influence in the northeast and east
Turkey – successor to the Ottoman Empire in 1923; the Ottoman Empire itself could be considered a colonial empire as it had a protectorate over the Sultanate of Aceh
Further reading
"Asia Reborn: A Continent Rises from the Ravages of Colonialism and War to a New Dynamism" by Prasenjit K. Basu, Publisher: Aleph Book Company
Panikkar, K. M. (1953). Asia and Western Dominance, 1498–1945. London: G. Allen and Unwin.
Senaka Weeraratna, Repression of Buddhism in Sri Lanka by the Portuguese (1505–1658)
World domination
World domination (also called global domination, world conquest, global conquest, or cosmocracy) is a hypothetical power structure, either achieved or aspired to, in which a single political authority holds power over all, or virtually all, the inhabitants of Earth. Various individuals or regimes have tried to achieve this goal throughout history, without ever attaining it. The theme has often been used in works of fiction, particularly in political fiction, as well as in conspiracy theories (which may posit that some person or group has already secretly achieved this goal), particularly those fearing the development of a "New World Order" involving a world government of a totalitarian nature.
History
Historically, world domination has been thought of in terms of a nation expanding its power to the point that all other nations are subservient to it. This may be achieved by direct military force or by establishing a hegemony, an indirect form of rule by the hegemon (leading state) over subordinate states. The hegemon's implied power includes the threat of force, protection, or economic benefits.
While various empires and hegemonies over the course of history have been able to expand and dominate large parts of the world, none have come close to conquering all the territory on Earth. However, these powers have had a global impact in cultural and economic terms that is still felt today. Some of the largest and more prominent empires include:
The Roman Empire was the post-Republican state of ancient Rome and is generally understood to mean the period and territory ruled by the Romans following Octavian's assumption of sole rule under the Principate in 31 BC. It included territory in Europe, North Africa, and Western Asia, and was ruled by emperors. The fall of the Western Roman Empire in 476 conventionally marks the end of classical antiquity and the beginning of the Middle Ages.
The Mongol Empire, which in the 13th century under Genghis Khan came to control the largest continuous land empire in the world, spanning from East Asia to the Middle East and Eastern Europe. It eventually fractured and ended with the fall of the Yuan dynasty, which was established by Kublai Khan. It reached its greatest extent in 1309, when it controlled the region through which the Silk Road trade route ran.
The Spanish Empire under the Habsburg monarchy and the Iberian Union, which controlled vast areas of Europe, America, Africa and some parts of Asia. The empire collapsed in a process that began with the Thirty Years' War and the Napoleonic Wars. It was the first global empire in human history, the first to be called "the empire on which the sun never sets", and it had pretensions (especially under the Spanish Habsburgs) to the secular leadership of worldwide Christendom, acting as the sword of the Pope against its opponents: the Protestant Reformers of Northern Europe, the regalism of the Kingdom of France, the Islamic world of the Greater Middle East (chiefly the Ottoman Empire in Western Asia and Morocco in North Africa), the pagans of the West and East Indies, and all the enemies of the Catholic Church, in a Christian mission to evangelise the whole world.
The Russian Empire, which controlled vast areas of Eurasia stretching from the Baltic region to Russian Manchuria, reaching its largest extent in 1895. The empire collapsed during the February Revolution in 1917, which saw Tsar Nicholas II abdicate. The cultural and economic unity of the Russian Empire allowed the rise of its successor state, the Soviet Union, a superpower whose military strength and ideology were major forces in global politics during the 20th century.
The British Empire, originating under Elizabeth I, was the largest empire in history. By 1921, the British Empire reached its height and dominated a quarter of the globe, controlling territory on each continent. The empire went through a long period of decline and decolonization following the end of the Second World War, which had brought it close to bankruptcy, until it ceased to be a dominant force in world affairs. English is still the official language in many countries, most of which were former British colonies, and is widely spoken as a second language around the world. The Industrial Revolution that took place in the United Kingdom from the 18th century was spread to the rest of the globe through the expansion of the British Empire, enabling the development of an industrialized global economy.
By the early 21st century, wars of territorial conquest were uncommon and the world's nations could attempt to resolve their differences through multilateral diplomacy under the auspices of global organizations like the United Nations and World Trade Organization. The world's superpowers and potential superpowers rarely attempt to exert global influence through the types of territorial empire-building seen in history, but the influence of historical empires is still important and the idea of world domination is still socially and culturally relevant.
Ideologies
The aspiration to rule "the four corners of the universe" has been a hallmark of imperial ideologies worldwide since the beginning of history.
Egypt
The Egyptian King was believed to rule "all under the sun." On the Abydos Stelae, Thutmose I claimed: "I made the boundaries of Egypt as far as the sun encircles." The Story of Sinuhe tells that the King has "subdued all that the sun encircles." The Hymn of Victory of Thutmose III and the Stelae of Amenophis II proclaimed that no one makes a boundary with the King and there is "no boundary for him towards all lands united, towards all lands together." Thutmose III was also acknowledged: "None presents himself before thy majesty. The circuit of the Great Circle [Ocean] is included in thy grasp."
Mesopotamia
The title of King of the Universe appeared in Ancient Mesopotamia as a title of great prestige claiming world domination, used by powerful monarchs starting with the Akkadian emperor Sargon (2334–2284 BC), and it was used in a succession of later empires claiming symbolic descent from Sargon's Akkadian Empire. During the Early Dynastic Period in Mesopotamia (c. 2900–2350 BC), the rulers of the various city-states (the most prominent being Ur, Uruk, Lagash, Umma and Kish) would often launch invasions into regions and cities far from their own, at most times with negligible consequences for themselves, in order to establish temporary and small empires to either gain or keep a superior position relative to the other city-states. Eventually this quest to be more prestigious and powerful than the other city-states resulted in a general ambition for universal rule. Since Mesopotamia was equated with the entire world, and Sumerian cities had been built far and wide (cities like Susa, Mari and Assur were located near the perceived corners of the world), it seemed possible to reach the edges of the world (at this time thought to be the lower sea, the Persian Gulf, and the upper sea, the Mediterranean).
The title šar kiššatim was perhaps most prominently used by the kings of the Neo-Assyrian Empire, more than a thousand years after the fall of the Akkadian Empire.
After taking Babylon and defeating the Neo-Babylonian Empire, Cyrus the Great proclaimed himself "king of Babylon, king of Sumer and Akkad, king of the four corners of the world" in the famous Cyrus Cylinder, an inscription deposited in the foundations of the Esagila temple dedicated to the chief Babylonian god, Marduk. Cyrus the Great's dominions composed the largest empire the world had ever seen to that point, spanning from the Mediterranean Sea and the Hellespont in the west to the Indus River in the east, and Iranian philosophy, literature and religion played dominant roles in world events for the next millennium; the Cyrus Cylinder has even been described as the oldest known declaration of human rights. Before Cyrus and his army crossed the river Araxes to battle the Armenians, he installed his son Cambyses II as king in case he should not return from battle. However, once Cyrus had crossed the Aras River, he had a vision in which Darius had wings atop his shoulders and stood upon the confines of Europe and Asia (the known world). When Cyrus awoke from the dream, he interpreted it as a great danger to the future security of the empire, as it meant that Darius would one day rule the whole world. However, his son Cambyses was the heir to the throne, not Darius, causing Cyrus to wonder if Darius was forming treasonable and ambitious designs. This led Cyrus to order Hystaspes to go back to Persis and watch over his son strictly until Cyrus himself returned. In many cuneiform inscriptions, such as the Behistun Inscription, Darius the Great recounted his achievements, presenting himself as a devout believer of Ahura Mazda, perhaps even convinced that he had a divine right to rule over the world: believing that because he lived righteously by Asha, Ahura Mazda supported him as a virtuous monarch and appointed him to rule the Achaemenid Empire and project its power globally, while he regarded each rebellion in his kingdom as the work of druj, the enemy of Asha, in line with his dualist beliefs.
Alexander the Great
In the 4th century BCE, Alexander the Great notably expressed a desire to conquer the world, and a legend persists that after he completed his military conquest of the known ancient world, he "wept because he had no more worlds to conquer", as he was unaware of China farther to the east and had no way to know about civilizations in the Americas.
After the collapse of the Macedonian Empire, the Seleucid Empire appeared with claims to world rule in its imperial ideology, as Antiochus I Soter claimed the ancient Mesopotamian title King of the Universe. However, the title did not reflect realistic Seleucid imperial ambitions by this point, after the peace treaty of Seleucus I Nicator with the Mauryans had set a limit to eastern expansion and Antiochus had ceded the lands west of Thrace to the Antigonids.
India
In the Indosphere, Bharata Chakravartin was the first chakravartin (universal emperor, ruler of rulers or possessor of the chakra) of Avasarpini (the present half time cycle as per Jain cosmology). In a Jain legend, Yasasvati Devi, the senior-most queen of Rishabhanatha (the first Jain tirthankara), saw four auspicious dreams one night: the sun and the moon, Mount Meru, a lake with swans, and the earth and the ocean. Rishabhanatha explained to her that these dreams meant that a chakravartin ruler would be born to them who would conquer the whole of the world. Then Bharata, a Kshatriya of the Ikshvaku dynasty, was born to them on the ninth day of the dark half of the month of Chaitra. He is said to have conquered all six parts of the world during his digvijaya (winning the six divisions of the earth in all directions), and to have fought his brother Bahubali to conquer the last remaining city. The ancient names of India, "Bhāratavarsha", "Bhārata" and "Bharata-bhumi", derive from him: the Hindu text Skanda Purana (chapter 37) states that "Rishabhanatha was the son of Nabhiraja, and Rishabha had a son named Bharata, and after the name of this Bharata, this country is known as Bharata-varsha." After completing his world-conquest, he is said to have proceeded to his capital Ayodhyapuri with a huge army and the divine chakra-ratna (a spinning, disk-like super weapon with serrated edges). There is also a legend of the empire of Maharaja Vikramaditya, a great Hindu world emperor (chakravarti) whose realm is said to have spread across the Middle East and East Asia (reaching even modern Indonesia), probably inspiring the imperial pretensions of Chandragupta II and Skandagupta, as Vikramaditya was also used as a title by several Hindu monarchs. According to P.N. Oak and Stephen Knapp, king Vikrama's empire extended up to Europe and the whole of Jambudweep (Asia). But according to most historical texts, his kingdom was located in present-day northern India and Pakistan, implying that the historic Vikramaditya only ruled over Bharat as far as the River Indus, as per the Bhavishya Purana. There is no epigraphic evidence to suggest that his rule extended to Europe, Arabia, Central Asia or Southeast Asia (sources of contemporary empires, such as the Parthians, Kushans, Chinese, Romans and Sassanids, do not mention an empire ruling from Arabia to Indonesia), and that part of his rule is considered legend rather than historical fact, rooted in Indic religious conceptions of the Indian subcontinent as being "the world" (the term Jambudvipa is used broadly in the same way) and in how that translates into folk memories. The Mahabharata and Somadeva's Kathasaritsagara also contain pretensions of world rule, in which performing certain mystic rituals and virtues would be a sign of becoming emperor of the whole world, as Dharma has universal jurisdiction over all the cosmos: there was a time when King Yudhisthira ruled over "the world", and from Śuciratha was to come a son named Vṛṣṭimān, whose son Suṣeṇa would be emperor of the entire world. There are likewise signs in Bāṇabhaṭṭa that an emperor named Harsha shall arise who will rule over all the continents like Harishchandra and conquer the world like Mandhatr. But "the world" in the time of the Ramayana, traditionally dated to the 12th millennium BCE, and the Mahabharata, traditionally dated to the 5th millennium BCE, was only India.
Some pan-Indian empires, like the Maurya Empire, sought world domination (first of the world known to ancient Indians, Akhand Bharat, and then coming into conflict with the Seleucid Empire), and Ashoka the Great, a devout Buddhist, wanted to establish Buddhism as a world religion. The first references to a Chakravala Chakravartin (an emperor who rules over all four of the continents) appear in monuments from the time of the early Maurya Empire, in the 4th to 3rd century BCE, in reference to Chandragupta Maurya and his grandson Ashoka.
China
In the Sinosphere, one of the consequences of the Mandate of Heaven in Imperial China was the claim of the Emperor of China, as Son of Heaven, to rule tianxia (meaning "all under heaven", closely associated with civilization and order in classical Chinese philosophy), which in English can be rendered as "ruler of the whole world", equivalent to the concept of a universal monarch. The title was interpreted literally only in China and Japan, whose monarchs were referred to as demigods, deities, or "living gods", chosen by the gods and goddesses of heaven. The theory behind this, from the Confucian bureaucracy, was that the Chinese emperor acted as the autocrat of all under Heaven and held a mandate to rule over everyone else in the world, but only as long as he served the people well. If the quality of rule became questionable because of repeated natural disasters such as flood or famine, or for other reasons, then rebellion was justified. This important concept legitimized the dynastic cycle or the change of dynasties. The center of this world view was not exclusionary in nature, and outer groups, such as ethnic minorities and foreign peoples, who accepted the mandate of the Chinese Emperor (through annexation or as tributary states of China) were themselves received and included in the Chinese tianxia (which equates China with "everything under the sky"), as it presupposed "inclusion of all" and implied acceptance of the world's diversities, emphasizing harmonious reciprocal dependence and rule by virtue as a means to lasting peace. Although in practice there were areas of the known world not under the control of the Chinese monarch ("barbarians"), in Chinese political theory the rulers of those areas derived their power from the Chinese monarch (Sinocentrism). This principle was exemplified by Qin Shi Huang's goal to "unify all under Heaven", which was, in fact, representative of his desire to control and expand Chinese territory as an actual geographic entity: as a consequence of the existence of many feudal states with shared cultural and economic interests, the concept of a great nation centered on the Yellow River Plain (the known world, both Han and non-Han in the Hua–Yi distinction) gradually expanded, and the equivalence of tianxia with the Chinese nation evolved through the feudal practice of conferring land. For the emperors of the central kingdom of China, the world could be roughly divided into two broad and simple categories, civilization and non-civilization: the people who had accepted the emperor's supremacy, the Heavenly virtue and its principle, and the people who had not. The Chinese thus recognized their country as the only true civilization in all respects, starting with its geography, and included all the known world in a Celestial Empire. China's neighbors were obliged to pay their respects to the "excellent" Chinese emperors within these boundaries on a regular basis. This can be said to have been the most important element of the East Asian order, implicit in the name of the Celestial Empire. In the 7th century, during the Tang dynasty, some northern tribes of Turkic origin, after being made vassals (as a consequence of the Tang campaign against the Eastern Turks), referred to the Emperor Taizong as the "Khan of Heaven". The Chinese emperor also exercised power over the surrounding dynasties in the name of the Celestial Empire.
Kings of ancient Korea, in particular, were subjects of the Chinese emperor. The idea of the absolute authority of the Chinese emperor and the extension of tianxia by the assimilation of vassal states began to fade with the Opium Wars, when China was made to refer to Great Britain as a "sovereign nation" equal to itself, to establish a foreign affairs bureau, and to accommodate itself to the Westphalian sovereignty of the Western nations' system of international affairs during the New Imperialism.
Persia
In the Sasanian Empire, the use of the mythological Kayanian title of kay, first adopted by Yazdegerd II and reaching its zenith under Peroz I, reflected a shift in the political perspective of the Sasanian Empire. Originally oriented towards the west against its rivals in the Byzantine Empire, it now turned to the east against the Hephthalites. The war against the Hunnic tribes (the Iranian Huns) may have awakened the mythical rivalry between the Iranian Kayanian rulers (the mythical kings of the legendary Avestan dynasty) and their Turanian enemies, which is demonstrated in the Younger Avesta. According to the legend of the Iranian hero-king Fereydun (Frēdōn in Middle Persian), he divided his kingdom between his three sons: his eldest son Salm received the empire of the west, "Rûm" (more generally meaning the Roman Empire, the Greco-Roman world, or just "the West"); the second eldest, Tur, received the empire of the east, Turān (all the lands north and east of the Amu Darya, as far as China); and the youngest, Iraj, received the heartland of the empire, Iran. The Sasanian shahanshahs may thus have believed themselves to be the heirs of Fereydun and Iraj (a belief reinforced by their worship of Ahura Mazda), and so possibly considered both the Byzantine domains in the west and the eastern domains of the Hephthalites as belonging to Iran, symbolically asserting their rights over these lands of both hemispheres of the Earth by assuming the title of kay.
Genghis Khan
In the Mongol Empire, Genghis Khan genuinely believed that it was his destiny to conquer the world for his god, Tengri, and that he had a mission to bring the rest of the world under one sword. This conviction was rooted in his shamanic belief in the Great Blue Sky that spans the world, from which he derived his mandate for a world empire; he came close to unifying Eurasia into a world empire under this shamanic umbrella. Temujin took the name "Genghis Khan", meaning "universal ruler", and his sons and grandsons took up the challenge of world conquest.
The fourth Mughal emperor styled himself Jahangir, meaning "world conqueror", and his wife Mehr-un-Nissa was awarded the title of Nur Jahan ('Light of the World').
Ottomans
The Ottoman Empire laid claim to world domination through the Ottoman Caliphate. Süleyman the Magnificent's Venetian helmet was an elaborate headpiece designed to project the sultan's power in the context of the Ottoman–Habsburg rivalry. Its four tiers represented Süleyman's goal of world conquest, reigning over the North, the South, the East, and the West, while also alluding to the crowns of his rivals: the imperial crown of Charles V, who had been crowned Holy Roman Emperor two years earlier, and the famous three-tiered tiara worn by Pope Clement VII.
Modern theory
In the early 17th century, Sir Walter Raleigh proposed that world domination could be achieved through control of the oceans, writing that "whosoever commands the sea commands the trade; whosoever commands the trade of the world commands the riches of the world, and consequently the world itself". In 1919, Halford Mackinder offered another influential theory for a route to world domination, writing: "Who rules East Europe commands the Heartland; who rules the Heartland commands the World-Island; who rules the World-Island commands the world."
While Mackinder's "Heartland Theory" initially received little attention outside geography, it later exercised some influence on the foreign policies of world powers seeking to obtain the control suggested by the theory. Impressed with the swift opening of World War II, Derwent Whittlesey wrote in 1942:
Yet before the entrance of the United States into this war, and with isolationism still intact, U.S. strategist Hanson W. Baldwin had projected that "[t]omorrow air bases may be the highroad to power and domination ... Obviously it is only by air bases ... that power exercised in the sovereign skies above a nation can be stretched far beyond its shores ... Perhaps ... future acquisitions of air bases ... can carry the voice of America through the skies to the ends of the earth."
Writing in 1948, Hans Morgenthau stressed that the mechanical development of weapons, transportation, and communication makes "the conquest of the world technically possible, and they make it technically possible to keep the world in that conquered state." He argues that a lack of such infrastructure explains why great ancient empires, though vast, failed to complete the universal conquest of their world and perpetuate the conquest. "Today no technological obstacle stands in the way of a world-wide empire," as "modern technology makes it possible to extend the control of mind and action to every corner of the globe regardless of geography and season." Morgenthau continued on the technological progress:
However, it has often been said that with the full size and scope of the world known, "world domination is an impossible goal", and specifically that "no single nation however big and powerful can dominate a world" of well over a hundred interdependent nations and billions of people.
The above assumption is challenged by scholars of the metric approach to history. Cesare Marchetti and Jesse H. Ausubel argued that the size of empires corresponds to two weeks of travel from the capital to the rim using the fastest transportation system available. The airplane permits a global empire, because any place on Earth can be reached within less than two weeks, though for political reasons we may have to wait a couple more generations (writing in 2013) to see one. Max Ostrovsky stressed that the implication of progress in communication is even more drastic. The speed of communication in the Inca Empire, for example, was about 20 km per hour (a running man); today, information moves at the speed of light. By the most cautious extrapolations, he concluded, modern technology would allow an empire exceeding the size or population of Earth many times over.
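To make the two-week rule concrete, the following sketch computes the reachable radius for a few transport modes; the speeds and daily travel hours are rough assumptions for illustration, not figures taken from Marchetti and Ausubel.

```python
# Illustrative sketch of the Marchetti-Ausubel "two-week" rule: an empire's
# radius is bounded by roughly two weeks of travel from the capital to the
# rim. Speeds and daily travel hours are assumed values for illustration.

TRAVEL_DAYS = 14
ANTIPODAL_KM = 20_000  # roughly half of Earth's circumference

# (mode, sustained speed in km/h, effective travel hours per day) - assumed
modes = [
    ("running courier (Inca relay)", 20, 10),
    ("horse relay", 30, 10),
    ("19th-century railway", 60, 20),
    ("modern airliner", 900, 24),
]

for name, speed_kmh, hours_per_day in modes:
    radius_km = speed_kmh * hours_per_day * TRAVEL_DAYS
    verdict = "global" if radius_km >= ANTIPODAL_KM else "regional"
    print(f"{name:>30}: radius ~ {radius_km:>9,} km ({verdict})")
```

On this metric alone, any transport mode whose two-week radius exceeds roughly 20,000 km (the farthest any point on Earth can lie from a capital) suffices for a global empire, which is why only the airliner row comes out "global".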
Some proponents of ideologies (communism, socialism, and Islamism) actively pursue the goal of establishing a form of government consistent with their political beliefs, or assert that the world is moving "naturally" towards the adoption of a particular form of government or self-government, whether authoritarian or anti-authoritarian. These proposals are not concerned with a particular nation achieving world domination, but with all nations conforming to a particular social or economic model. A goal of world domination can be to establish a world government, a single common political authority for all of humanity. The period of the Cold War, in particular, is considered to be a period of intense ideological polarization, given the existence of two rival blocs, the capitalist West and the communist East, each of which expressed the hope of seeing the triumph of its ideology over that of the enemy. The ultimate end of such a triumph would be that one ideology or the other became the sole governing ideology in the world.
In certain religions, some adherents may also seek the conversion (peaceful or forced) of as many people as possible to their own religion, without restrictions of national or ethnic origin. This type of spiritual domination is usually seen as distinct from the temporal dominion, although there have been instances of efforts begun as holy wars devolving into the pursuit of wealth, resources, and territory. Some Christian groups teach that a false religion, led by false prophets who achieve world domination by inducing nearly universal worship of a false deity, is a prerequisite to end times described in the Book of Revelation. As one author put it, "[i]f world domination is to be obtained, the masses of little people must be brought on board with religion".
In some instances, speakers have accused nations or ideological groups of seeking world domination, even where those entities have denied that this was their goal. For example, J. G. Ballard quoted Aldous Huxley as having said of the United States entering the First World War, "I dread the inevitable acceleration of American world domination which will be the result of it all ... Europe will no longer be Europe". In 2012, politician and critic of Islam Geert Wilders characterized Islam as "an ideology aiming for world domination rather than a religion", and in 2008 characterized the 2008 Israel–Gaza conflict as a proxy action by Islam against the West, contending that "[t]he end of Israel would not mean the end of our problems with Islam, but only ... the start of the final battle for world domination".
See also
World revolution
American imperialism
Russian imperialism
Chinese expansionism
Global governance, the political interaction of transnational actors.
List of largest empires by maximum extent of land area occupied.
Singleton (global governance), a hypothetical world order in which there is a single decision-making agency (potentially an advanced artificial intelligence) at the highest level, capable of exerting effective control over its domain.
Superpower, a state with a leading position in the international system and the ability to influence events in its own interest by global projection of power.
King of the Universe
Universal monarchy
References
External links
Politics
Domination
Plutocracy
A plutocracy or plutarchy is a society that is ruled or controlled by people of great wealth or income. The first known use of the term in English dates from 1631. Unlike most political systems, plutocracy is not rooted in any established political philosophy.
Usage
The term plutocracy is generally used as a pejorative to describe or warn against an undesirable condition. Throughout history, political thinkers and philosophers have condemned plutocrats for ignoring their social responsibilities, using their power to serve their own purposes and thereby increasing poverty and nurturing class conflict and corrupting societies with greed and hedonism.
"", an anglicised adaptation of the word "plutocracy", may refer to "a specifically American version of plutocracy".
Examples
Historic examples of plutocracies include the Roman Empire; some city-states in Ancient Greece; the civilization of Carthage; the Italian merchant city-states of Venice, Florence and Genoa; the Dutch Republic; and the pre-World War II Empire of Japan (the zaibatsu). According to Noam Chomsky and Jimmy Carter, the modern United States resembles a plutocracy though with democratic forms. Paul Volcker, a former chair of the Federal Reserve, also believed the U.S. to be developing into a plutocracy.
One modern, formal example of a plutocracy, according to some critics, is the City of London. The City (also called the Square Mile of ancient London, corresponding to the modern financial district, an area of about 2.5 km2) has a unique electoral system for its local administration, separate from the rest of London. More than two-thirds of voters are not residents, but rather representatives of businesses and other bodies that occupy premises in the City, with votes distributed according to their numbers of employees. The principal justification for this arrangement is that most of the services provided by the City of London Corporation are used by the businesses in the City. Around 450,000 non-residents constitute the city's day-time population, far outnumbering the City's 7,000 residents.
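As a rough illustration of how such a workforce-weighted franchise can be modeled, the sketch below apportions voters to businesses by employee count; the banding thresholds are hypothetical placeholders, not the City of London Corporation's actual scale.

```python
# Hypothetical workforce-weighted franchise: each business appoints voters
# in proportion to its number of employees. The bands below are invented
# for illustration; the City of London's real apportionment scale differs.

def voters_for(workforce: int) -> int:
    if workforce <= 0:
        return 0
    if workforce <= 10:
        return 1                              # small bodies: one voter
    if workforce <= 50:
        return 1 + -(-(workforce - 10) // 5)  # then one more per 5 staff
    return 9 + -(-(workforce - 50) // 50)     # then one more per 50 staff

for staff in (5, 25, 50, 500):
    print(f"{staff:>4} employees -> {voters_for(staff):>2} voters")
```

The design point is simply that voting weight grows sublinearly with workforce, so large employers dominate the electorate without any single firm holding a majority.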
In the political jargon and propaganda of Fascist Italy, Nazi Germany and the Communist International, Western democratic states were referred to as plutocracies, with the implication being that a small number of extremely wealthy individuals were controlling the countries and holding them to ransom. Plutocracy replaced democracy and capitalism as the principal fascist term for the U.S. and Great Britain during World War II. In Nazi Germany, it was often used as a dog whistle term for Jewish people in their antisemitic propaganda. Joseph Goebbels, the Reich Minister of Propaganda, found the term to be particularly favorable, describing it as "the main concept at which the ideological struggle will be aimed".
United States
Some modern historians, politicians, and economists argue that the U.S. was effectively plutocratic for at least part of the Gilded Age and Progressive Era periods between the end of the Civil War until the beginning of the Great Depression. President Theodore Roosevelt became known as the "trust-buster" for his aggressive use of antitrust law, through which he managed to break up such major combinations as the largest railroad and Standard Oil, the largest oil company. According to historian David Burton, "When it came to domestic political concerns, TR's bête noire was the plutocracy." In his autobiographical account of taking on monopolistic corporations as president, Roosevelt recounted:
The Sherman Antitrust Act had been enacted in 1890, when large industries were reaching monopolistic or near-monopolistic levels of market concentration, financial capital was increasingly integrating corporations, and a handful of very wealthy heads of large corporations had begun to exert increasing influence over industry, public opinion, and politics after the Civil War. Money, according to the contemporary progressive and journalist Walter Weyl, was "the mortar of this edifice", with ideological differences among politicians fading and the political realm becoming "a mere branch in a still larger, integrated business. The state, which through the party formally sold favors to the large corporations, became one of their departments."
In "The Politics of Plutocracy" section of his book, The Conscience of a Liberal, economist Paul Krugman says plutocracy took hold because of three factors: at that time, the poorest quarter of American residents (African-Americans and non-naturalized immigrants) were ineligible to vote, the wealthy funded the campaigns of politicians they preferred, and vote buying was "feasible, easy and widespread", as were other forms of electoral fraud such as ballot-box stuffing and intimidation of the other party's voters.
The U.S. instituted progressive taxation in 1913, but according to Shamus Khan, in the 1970s, elites used their increasing political power to lower their taxes, and today successfully employ what political scientist Jeffrey Winters calls "the income defense industry" to greatly reduce their taxes.
In 1998, Bob Herbert of The New York Times referred to modern American plutocrats as "The Donor Class" and defined the class, for the first time, as "a tiny group – just one-quarter of 1 percent of the population – and it is not representative of the rest of the nation. But its money buys plenty of access."
Post-World War II
In modern times, the term is sometimes used pejoratively to refer to societies rooted in state-corporate capitalism or which prioritize the accumulation of wealth over other interests. According to Kevin Phillips, author and political strategist to Richard Nixon, the United States is a plutocracy in which there is a "fusion of money and government."
Chrystia Freeland, author of Plutocrats, says that the present trend towards plutocracy occurs because the rich feel that their interests are shared by society:
When the Nobel Prize–winning economist Joseph Stiglitz wrote the 2011 Vanity Fair magazine article entitled "Of the 1%, by the 1%, for the 1%", the title and content supported Stiglitz's claim that the U.S. is increasingly ruled by the wealthiest 1%. Some researchers have said the U.S. may be drifting towards a form of oligarchy, as individual citizens have less impact than economic elites and organized interest groups upon public policy. In the U.S. Congress itself, more than half of all members are millionaires.
A study conducted by political scientists Martin Gilens of Princeton University and Benjamin Page of Northwestern University, which was released in April 2014, stated that their "analyses suggest that majorities of the American public actually have little influence over the policies our government adopts". Gilens and Page do not characterize the U.S. as an "oligarchy" or "plutocracy" per se; however, they do apply the concept of "civil oligarchy" as used by Jeffrey A. Winters with respect to the U.S.
The investor, billionaire, and philanthropist Warren Buffett, one of the wealthiest people in the world, voiced in 2005, and once more in 2006, his view that his class, the "rich class", is waging class warfare on the rest of society. In 2005 Buffett told CNN: "It's class warfare, my class is winning, but they shouldn't be." In a November 2006 interview in The New York Times, Buffett stated that "[t]here's class warfare all right, but it's my class, the rich class, that's making war, and we're winning."
Causation
Reasons why a plutocracy develops are complex. In a nation that is experiencing rapid economic growth, income inequality will tend to increase as the rate of return on innovation increases. In other scenarios, plutocracy may develop when a country is collapsing due to resource depletion as the elites attempt to hoard the diminishing wealth or expand debts to maintain stability, which will tend to enrich creditors and financiers. Economists have also suggested that free market economies tend to drift into monopolies and oligopolies because of the greater efficiency of larger businesses (see economies of scale).
Other nations may become plutocratic through kleptocracy or rent-seeking.
See also
Aristocracy
Banana republic
Corporatocracy
Elitism
Kleptocracy
Neo-feudalism
Oligarchy
Overclass
Plutonomy
Timocracy
Upper class
Wealth concentration
Property qualification
References
Further reading
Howard, Milford Wriarson (1895). The American plutocracy. New York: Holland Publishing.
Norwood, Thomas Manson (1888). Plutocracy: or, American white slavery; a politico-social novel. New York: The American News Company.
Pettigrew, Richard Franklin (1921). Triumphant Plutocracy: The Story of American Public Life from 1870 to 1920. New York: The Academy Press.
Reed, John Calvin (1903). The New Plutocracy. New York: Abbey Press.
Winters, Jeffrey A. (2011). Oligarchy. Cambridge University Press.
External links
Documentary: Plutocracy Political repression in the U.S.A. part 1, by Metanoia Films
Documentary: Plutocracy II: Solidarity Forever Political repression in the U.S.A. part 2, by Metanoia Films
17th-century neologisms
Oligarchy
Pejorative terms for forms of government
Political philosophy
Social philosophy
Wealth concentration
Human
Humans (Homo sapiens, meaning "thinking man") or modern humans are the most common and widespread species of primate, and the last surviving species of the genus Homo. They are great apes characterized by their hairlessness, bipedalism, and high intelligence. Humans have large brains, enabling more advanced cognitive skills that allow them to thrive and adapt in varied environments, develop highly complex tools, and form complex social structures and civilizations. Humans are highly social, with individual humans tending to belong to a multi-layered network of cooperating, distinct, or even competing social groups – from families and peer groups to corporations and political states. As such, social interactions between humans have established a wide variety of values, social norms, languages, and traditions (collectively termed institutions), each of which bolsters human society. Humans are also highly curious, with the desire to understand and influence phenomena having motivated humanity's development of science, technology, philosophy, mythology, religion, and other frameworks of knowledge; humans also study themselves through such domains as anthropology, social science, history, psychology, and medicine. There are estimated to be more than eight billion humans alive.
Although some scientists equate the term "humans" with all members of the genus Homo, in common usage it generally refers to Homo sapiens, the only extant member. All other members of the genus Homo, which are now extinct, are known as archaic humans, and the term "modern human" is used to distinguish Homo sapiens from archaic humans. Anatomically modern humans emerged around 300,000 years ago in Africa, evolving from Homo heidelbergensis or a similar species. Migrating out of Africa, they gradually replaced and interbred with local populations of archaic humans. Multiple hypotheses for the extinction of archaic human species such as Neanderthals include competition, violence, interbreeding with Homo sapiens, or inability to adapt to climate change. Humans began exhibiting behavioral modernity about 160,000–60,000 years ago. For most of their history, humans were nomadic hunter-gatherers. The Neolithic Revolution, which began in Southwest Asia around 13,000 years ago (and separately in a few other places), saw the emergence of agriculture and permanent human settlement; in turn, this led to the development of civilization and kickstarted a period of continuous (and ongoing) population growth and rapid technological change. Since then, a number of civilizations have risen and fallen, while a number of sociocultural and technological developments have resulted in significant changes to the human lifestyle.
Genes and the environment influence human biological variation in visible characteristics, physiology, disease susceptibility, mental abilities, body size, and life span. Though humans vary in many traits, humans are among the least genetically diverse species. Any two humans are at least 99.5% genetically similar. Humans are sexually dimorphic: generally, males have greater body strength and females have a higher body fat percentage. At puberty, humans develop secondary sex characteristics. Females are capable of pregnancy, usually between puberty, at around 12 years old, and menopause, around the age of 50. As omnivorous creatures, they are capable of consuming a wide variety of plant and animal material, and have used fire and other forms of heat to prepare and cook food since the time of Homo erectus. Humans can survive for up to eight weeks without food and several days without water. Humans are generally diurnal, sleeping on average seven to nine hours per day. Childbirth is dangerous, with a high risk of complications and death. Often, both the mother and the father provide care for their children, who are helpless at birth.
Humans have a large, highly developed, and complex prefrontal cortex, the region of the brain associated with higher cognition. Humans are highly intelligent and capable of episodic memory; they have flexible facial expressions, self-awareness, and a theory of mind. The human mind is capable of introspection, private thought, imagination, volition, and forming views on existence. This has allowed great technological advancements and complex tool development through complex reasoning and the transmission of knowledge to subsequent generations through language.
Humans have had a dramatic effect on the environment. They are apex predators, being rarely preyed upon by other species. Human population growth, industrialization, land development, overconsumption and combustion of fossil fuels have led to environmental destruction and pollution that significantly contributes to the ongoing mass extinction of other forms of life. Within the last century, humans have explored challenging environments such as Antarctica, the deep sea, and outer space. Human habitation within these hostile environments is restrictive and expensive, typically limited in duration, and restricted to scientific, military, or industrial expeditions. Humans have briefly visited the Moon and made their presence felt on other celestial bodies through human-made robotic spacecraft. Since the early 20th century, there has been continuous human presence in Antarctica through research stations and, since 2000, in space through habitation on the International Space Station.
Etymology and definition
All modern humans are classified into the species Homo sapiens, coined by Carl Linnaeus in 1758 in the tenth edition of his work Systema Naturae. The generic name "Homo" is a learned 18th-century derivation from Latin homō, which refers to humans of either sex. The word human can refer to all members of the Homo genus. The name "Homo sapiens" means 'wise man' or 'knowledgeable man'. There is disagreement over whether certain extinct members of the genus, namely Neanderthals, should be included as a separate species of humans or as a subspecies of H. sapiens.
Human is a loanword of Middle English from Old French humain, ultimately from Latin hūmānus, the adjectival form of homō ('man' – in the sense of humanity). The native English term man can refer to the species generally (a synonym for humanity) as well as to human males. It may also refer to individuals of either sex.
Despite the fact that the word animal is colloquially used as an antonym for human, and contrary to a common biological misconception, humans are animals. The word person is often used interchangeably with human, but philosophical debate exists as to whether personhood applies to all humans or all sentient beings, and further if a human can lose personhood (such as by going into a persistent vegetative state).
Evolution
Humans are apes (superfamily Hominoidea). The lineage of apes that eventually gave rise to humans first split from gibbons (family Hylobatidae) and orangutans (genus Pongo), then gorillas (genus Gorilla), and finally, chimpanzees and bonobos (genus Pan). The last split, between the human and chimpanzee–bonobo lineages, took place around 8–4 million years ago, in the late Miocene epoch. During this split, chromosome 2 was formed from the joining of two other chromosomes, leaving humans with only 23 pairs of chromosomes, compared to 24 for the other apes. Following their split with chimpanzees and bonobos, the hominins diversified into many species and at least two distinct genera. All but one of these lineages (representing the genus Homo and its sole extant species Homo sapiens) are now extinct.
The genus Homo evolved from Australopithecus. Though fossils from the transition are scarce, the earliest members of Homo share several key traits with Australopithecus. The earliest record of Homo is the 2.8 million-year-old specimen LD 350-1 from Ethiopia, and the earliest named species are Homo habilis and Homo rudolfensis which evolved by 2.3 million years ago. H. erectus (the African variant is sometimes called H. ergaster) evolved 2 million years ago and was the first archaic human species to leave Africa and disperse across Eurasia. H. erectus also was the first to evolve a characteristically human body plan. Homo sapiens emerged in Africa around 300,000 years ago from a species commonly designated as either H. heidelbergensis or H. rhodesiensis, the descendants of H. erectus that remained in Africa. H. sapiens migrated out of the continent, gradually replacing or interbreeding with local populations of archaic humans. Humans began exhibiting behavioral modernity about 160,000–70,000 years ago, and possibly earlier. This development was likely selected amidst natural climate change in Middle to Late Pleistocene Africa.
The "out of Africa" migration took place in at least two waves, the first around 130,000 to 100,000 years ago, the second (Southern Dispersal) around 70,000 to 50,000 years ago. H. sapiens proceeded to colonize all the continents and larger islands, arriving in Eurasia 125,000 years ago, Australia around 65,000 years ago, the Americas around 15,000 years ago, and remote islands such as Hawaii, Easter Island, Madagascar, and New Zealand in the years 300 to 1280 CE.
Human evolution was not a simple linear or branched progression but involved interbreeding between related species. Genomic research has shown that hybridization between substantially diverged lineages was common in human evolution. DNA evidence suggests that several genes of Neanderthal origin are present among all populations outside sub-Saharan Africa, and Neanderthals and other hominins, such as Denisovans, may have contributed up to 6% of the genome of present-day humans outside sub-Saharan Africa.
Human evolution is characterized by a number of morphological, developmental, physiological, and behavioral changes that have taken place since the split between the last common ancestor of humans and chimpanzees. The most significant of these adaptations are hairlessness, obligate bipedalism, increased brain size and decreased sexual dimorphism (neoteny). The relationship between all these changes is the subject of ongoing debate.
History
Prehistory
Until about 12,000 years ago, all humans lived as hunter-gatherers. The Neolithic Revolution (the invention of agriculture) first took place in Southwest Asia and spread through large parts of the Old World over the following millennia. It also occurred independently in Mesoamerica (about 6,000 years ago), China, Papua New Guinea, and the Sahel and West Savanna regions of Africa.
Access to food surplus led to the formation of permanent human settlements, the domestication of animals and the use of metal tools for the first time in history. Agriculture and sedentary lifestyle led to the emergence of early civilizations.
Ancient
An urban revolution took place in the 4th millennium BCE with the development of city-states, particularly Sumerian cities located in Mesopotamia. It was in these cities that the earliest known form of writing, cuneiform script, appeared around 3000 BCE. Other major civilizations to develop around this time were Ancient Egypt and the Indus Valley Civilisation. They eventually traded with each other and invented technology such as wheels, plows and sails. Emerging by 3000 BCE, the Caral–Supe civilization is the oldest complex civilization in the Americas. Astronomy and mathematics were also developed and the Great Pyramid of Giza was built. There is evidence of a severe drought lasting about a hundred years that may have caused the decline of these civilizations, with new ones appearing in the aftermath. Babylonians came to dominate Mesopotamia while others, such as the Poverty Point culture, Minoans and the Shang dynasty, rose to prominence in new areas. The Late Bronze Age collapse around 1200 BCE resulted in the disappearance of a number of civilizations and the beginning of the Greek Dark Ages. During this period iron started replacing bronze, leading to the Iron Age.
In the 5th century BCE, history started being recorded as a discipline, which provided a much clearer picture of life at the time. Between the 8th and 6th century BCE, Europe entered the classical antiquity age, a period when ancient Greece and ancient Rome flourished. Around this time other civilizations also came to prominence. The Maya civilization started to build cities and create complex calendars. In Africa, the Kingdom of Aksum overtook the declining Kingdom of Kush and facilitated trade between India and the Mediterranean. In West Asia, the Achaemenid Empire's system of centralized governance became the precursor to many later empires, while the Gupta Empire in India and the Han dynasty in China have been described as golden ages in their respective regions.
Medieval
Following the fall of the Western Roman Empire in 476, Europe entered the Middle Ages. During this period, Christianity and the Church would provide centralized authority and education. In the Middle East, Islam became the prominent religion and expanded into North Africa. It led to an Islamic Golden Age, inspiring achievements in architecture, the revival of old advances in science and technology, and the formation of a distinct way of life. The Christian and Islamic worlds would eventually clash, with the Kingdom of England, the Kingdom of France and the Holy Roman Empire declaring a series of holy wars to regain control of the Holy Land from Muslims.
In the Americas, between 200 and 900 CE Mesoamerica was in its Classic Period, while further north, complex Mississippian societies would arise starting around 800 CE. The Mongol Empire would conquer much of Eurasia in the 13th and 14th centuries. Over this same time period, the Mali Empire in Africa grew to be the largest empire on the continent, stretching from Senegambia to Ivory Coast. Oceania would see the rise of the Tuʻi Tonga Empire which expanded across many islands in the South Pacific. By the late 15th century, the Aztecs and Inca had become the dominant power in Mesoamerica and the Andes, respectively.
Modern
The early modern period in Europe and the Near East (c. 1500–1800) began with the final defeat of the Byzantine Empire and the rise of the Ottoman Empire. Meanwhile, Japan entered the Edo period, the Qing dynasty rose in China and the Mughal Empire ruled much of India. Europe underwent the Renaissance, starting in the 15th century, and the Age of Discovery began with the exploring and colonizing of new regions. This included the colonization of the Americas and the Columbian Exchange. This expansion led to the Atlantic slave trade and the genocide of Native American peoples. This period also marked the Scientific Revolution, with great advances in mathematics, mechanics, astronomy and physiology.
The late modern period (1800–present) saw the Technological and Industrial Revolution bring such advances as imaging technology and major innovations in transport and energy development. Influenced by Enlightenment ideals, the Americas and Europe experienced a period of political revolutions known as the Age of Revolution. The Napoleonic Wars raged through Europe in the early 1800s, Spain lost most of its colonies in the New World, while Europeans continued expansion into Africa (where European control went from 10% to almost 90% in less than 50 years) and Oceania. In the 19th century, the British Empire expanded to become the world's largest empire.
A tenuous balance of power among European nations collapsed in 1914 with the outbreak of the First World War, one of the deadliest conflicts in history. In the 1930s, a worldwide economic crisis led to the rise of authoritarian regimes and a Second World War, involving almost all of the world's countries. The war's destruction led to the collapse of most global empires, leading to widespread decolonization.
Contemporary
Following the conclusion of the Second World War in 1945, the United States and the USSR emerged as the remaining global superpowers. This led to a Cold War that saw a struggle for global influence, including a nuclear arms race and a space race, ending in the collapse of the Soviet Union. The current Information Age, spurred by the development of the Internet and artificial intelligence systems, sees the world becoming increasingly globalized and interconnected.
Habitat and population
Early human settlements were dependent on proximity to water and, depending on the lifestyle, other natural resources used for subsistence, such as populations of animal prey for hunting and arable land for growing crops and grazing livestock. Modern humans, however, have a great capacity for altering their habitats by means of technology, irrigation, urban planning, construction, deforestation and desertification. Human settlements continue to be vulnerable to natural disasters, especially those placed in hazardous locations and with low quality of construction. Grouping and deliberate habitat alteration is often done with the goals of providing protection, accumulating comforts or material wealth, expanding the available food, improving aesthetics, increasing knowledge or enhancing the exchange of resources.
Humans are one of the most adaptable species, despite having a low or narrow tolerance for many of the earth's extreme environments. The species is currently present in all eight biogeographical realms, although its presence in the Antarctic realm is largely limited to research stations, whose population declines during the winter months. Humans have established nation-states in the other seven realms, such as South Africa, India, Russia, Australia, Fiji, the United States and Brazil (each located in a different biogeographical realm).
By using advanced tools and clothing, humans have been able to extend their tolerance to a wide variety of temperatures, humidities, and altitudes. As a result, humans are a cosmopolitan species found in almost all regions of the world, including tropical rainforest, arid desert, extremely cold arctic regions, and heavily polluted cities; in comparison, most other species are confined to a few geographical areas by their limited adaptability. The human population is not, however, uniformly distributed on the Earth's surface, because the population density varies from one region to another, and large stretches of surface are almost completely uninhabited, like Antarctica and vast swathes of the ocean. Most humans (61%) live in Asia; the remainder live in the Americas (14%), Africa (14%), Europe (11%), and Oceania (0.5%).
Estimates of the population at the time agriculture emerged in around 10,000 BC have ranged between 1 million and 15 million. Around 50–60 million people lived in the combined eastern and western Roman Empire in the 4th century AD. Bubonic plagues, first recorded in the 6th century AD, reduced the population by 50%, with the Black Death killing 75–200 million people in Eurasia and North Africa alone. Human population is believed to have reached one billion in 1800. It has since then increased exponentially, reaching two billion in 1930 and three billion in 1960, four in 1975, five in 1987 and six billion in 1999. It passed seven billion in 2011 and passed eight billion in November 2022. It took over two million years of human prehistory and history for the human population to reach one billion and only 207 years more to grow to 7 billion. The combined biomass of the carbon of all the humans on Earth in 2018 was estimated at 60 million tons, about 10 times larger than that of all non-domesticated mammals.
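The shrinking intervals between these billion-person milestones imply changing average growth rates, which a quick calculation makes explicit; a minimal sketch using the milestone years given above:

```python
# Average annual growth rate implied by successive population milestones,
# using r = (P2 / P1) ** (1 / years) - 1. Milestone years follow the text.
milestones = [(1800, 1), (1930, 2), (1960, 3), (1975, 4),
              (1987, 5), (1999, 6), (2011, 7), (2022, 8)]  # (year, billions)

for (y1, p1), (y2, p2) in zip(milestones, milestones[1:]):
    years = y2 - y1
    rate = (p2 / p1) ** (1 / years) - 1
    print(f"{y1}-{y2}: {years:3d} years for the next billion, "
          f"average growth ~ {rate:.2%} per year")
```

The output shows the interval falling from 130 years (1800–1930) to little more than a decade per billion since 1960, even as the percentage growth rate peaks around the 1970s and then eases.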
In 2018, 4.2 billion humans (55%) lived in urban areas, up from 751 million in 1950. The most urbanized regions are Northern America (82%), Latin America (81%), Europe (74%) and Oceania (68%), with Africa and Asia having nearly 90% of the world's 3.4 billion rural population. Problems for humans living in cities include various forms of pollution and crime, especially in inner city and suburban slums.
Biology
Anatomy and physiology
Most aspects of human physiology are closely homologous to corresponding aspects of animal physiology. The dental formula of humans is 2.1.2.3/2.1.2.3. Humans have proportionately shorter palates and much smaller teeth than other primates. They are the only primates to have short, relatively flush canine teeth. Humans have characteristically crowded teeth, with gaps from lost teeth usually closing up quickly in young individuals. Humans are gradually losing their third molars, with some individuals having them congenitally absent.
Humans share with chimpanzees a vestigial tail, appendix, flexible shoulder joints, grasping fingers and opposable thumbs. Humans also have a more barrel-shaped chest, in contrast to the funnel-shaped chest of other apes, an adaptation for bipedal respiration. Apart from bipedalism and brain size, humans differ from chimpanzees mostly in smelling, hearing and digesting proteins. While humans have a density of hair follicles comparable to other apes, it is predominantly vellus hair, most of which is so short and wispy as to be practically invisible. Humans have about 2 million sweat glands spread over their entire bodies, many more than chimpanzees, whose sweat glands are scarce and are mainly located on the palm of the hand and on the soles of the feet.
It is estimated that the worldwide average height for an adult human male is about 171 cm (5 ft 7 in), while the worldwide average height for adult human females is about 159 cm (5 ft 3 in). Shrinkage of stature may begin in middle age in some individuals but tends to be typical in the extremely aged. Throughout history, human populations have universally become taller, probably as a consequence of better nutrition, healthcare, and living conditions. The average mass of an adult human is about 59 kg (130 lb) for females and about 77 kg (170 lb) for males. Like many other conditions, body weight and body type are influenced by both genetic susceptibility and environment and vary greatly among individuals.
Humans have a far faster and more accurate throw than other animals. Humans are also among the best long-distance runners in the animal kingdom, but slower over short distances. Humans' thinner body hair and more productive sweat glands help avoid heat exhaustion while running for long distances. Compared to other apes, the human heart produces greater stroke volume and cardiac output and the aorta is proportionately larger.
Genetics
Like most animals, humans are a diploid and eukaryotic species. Each somatic cell has two sets of 23 chromosomes, each set received from one parent; gametes have only one set of chromosomes, which is a mixture of the two parental sets. Among the 23 pairs of chromosomes, there are 22 pairs of autosomes and one pair of sex chromosomes. Like other mammals, humans have an XY sex-determination system, so that females have the sex chromosomes XX and males have XY. Genes and environment influence human biological variation in visible characteristics, physiology, disease susceptibility and mental abilities. The exact influence of genes and environment on certain traits is not well understood.
While no humans, not even monozygotic twins, are genetically identical, two humans on average will have a genetic similarity of 99.5%–99.9%. This makes them more homogeneous than other great apes, including chimpanzees. This small variation in human DNA compared to many other species suggests a population bottleneck during the Late Pleistocene (around 100,000 years ago), in which the human population was reduced to a small number of breeding pairs. The forces of natural selection have continued to operate on human populations, with evidence that certain regions of the genome display directional selection in the past 15,000 years.
The human genome was first sequenced in 2001 and by 2020 hundreds of thousands of genomes had been sequenced. In 2012 the International HapMap Project had compared the genomes of 1,184 individuals from 11 populations and identified 1.6 million single nucleotide polymorphisms. African populations harbor the highest number of private genetic variants. While many of the common variants found in populations outside of Africa are also found on the African continent, there are still large numbers that are private to these regions, especially Oceania and the Americas. By 2010 estimates, humans have approximately 22,000 genes. By comparing mitochondrial DNA, which is inherited only from the mother, geneticists have concluded that the last female common ancestor whose genetic marker is found in all modern humans, the so-called mitochondrial Eve, must have lived around 90,000 to 200,000 years ago.
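The reasoning behind such mitochondrial dating can be sketched as a simple molecular-clock calculation: divergence time is roughly the observed pairwise difference divided by twice the mutation rate, since both lineages accumulate mutations independently after splitting. The figures below are illustrative assumptions, not values from any particular study.

```python
# Toy molecular-clock estimate. After two lineages split, each accumulates
# mutations independently, so the time back to their common ancestor is
# approximately divergence / (2 * mutation_rate). Numbers are assumed.
divergence_per_site = 0.004   # assumed average pairwise mtDNA divergence
mutation_rate = 1.5e-8        # assumed substitutions per site per year

t_years = divergence_per_site / (2 * mutation_rate)
print(f"Estimated time to common ancestor: ~{t_years:,.0f} years")
# prints ~133,333 years, within the 90,000-200,000-year range cited above
```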
Life cycle
Most human reproduction takes place by internal fertilization via sexual intercourse, but can also occur through assisted reproductive technology procedures. The average gestation period is 38 weeks, but a normal pregnancy can vary by up to 37 days. Embryonic development in the human covers the first eight weeks of development; at the beginning of the ninth week the embryo is termed a fetus. Humans are able to induce early labor or perform a caesarean section if the child needs to be born earlier for medical reasons. In developed countries, infants are typically about 3–4 kg (7–9 lb) in weight and about 47–53 cm (19–21 in) in height at birth. However, low birth weight is common in developing countries, and contributes to the high levels of infant mortality in these regions.
Compared with other species, human childbirth is dangerous, with a much higher risk of complications and death. The size of the fetus's head is more closely matched to the pelvis than in other primates. The reason for this is not completely understood, but it contributes to a painful labor that can last 24 hours or more. The chances of a successful labor increased significantly during the 20th century in wealthier countries with the advent of new medical technologies. In contrast, pregnancy and natural childbirth remain hazardous ordeals in developing regions of the world, with maternal death rates approximately 100 times greater than in developed countries.
Both the mother and the father provide care for human offspring, in contrast to other primates, where parental care is mostly done by the mother. Helpless at birth, humans continue to grow for some years, typically reaching sexual maturity at 15 to 17 years of age. The human life span has been split into various stages ranging from three to twelve. Common stages include infancy, childhood, adolescence, adulthood and old age. The lengths of these stages have varied across cultures and time periods but is typified by an unusually rapid growth spurt during adolescence. Human females undergo menopause and become infertile at around the age of 50. It has been proposed that menopause increases a woman's overall reproductive success by allowing her to invest more time and resources in her existing offspring, and in turn their children (the grandmother hypothesis), rather than by continuing to bear children into old age.
The life span of an individual depends on two major factors, genetics and lifestyle choices. For various reasons, including biological/genetic causes, women live on average about four years longer than men. The global average life expectancy at birth is currently estimated to be 74.9 years for a girl, compared to 70.4 for a boy. There are significant geographical variations in human life expectancy, mostly correlated with economic development; for example, life expectancy at birth in Hong Kong is 87.6 years for girls and 81.8 for boys, while in the Central African Republic, it is 55.0 years for girls and 50.6 for boys. The developed world is generally aging, with the median age around 40 years. In the developing world, the median age is between 15 and 20 years. While one in five Europeans is 60 years of age or older, only one in twenty Africans is 60 years of age or older. In 2012, the United Nations estimated that there were 316,600 living centenarians (humans of age 100 or older) worldwide.
Diet
Humans are omnivorous, capable of consuming a wide variety of plant and animal material. Human groups have adopted a range of diets from purely vegan to primarily carnivorous. In some cases, dietary restrictions in humans can lead to deficiency diseases; however, stable human groups have adapted to many dietary patterns through both genetic specialization and cultural conventions to use nutritionally balanced food sources. The human diet is prominently reflected in human culture and has led to the development of food science.
Until the development of agriculture, Homo sapiens employed a hunter-gatherer method as their sole means of food collection. This involved combining stationary food sources (such as fruits, grains, tubers, and mushrooms, insect larvae and aquatic mollusks) with wild game, which must be hunted and captured in order to be consumed. It has been proposed that humans have used fire to prepare and cook food since the time of Homo erectus. Human domestication of wild plants began about 11,700 years ago, leading to the development of agriculture, a gradual process called the Neolithic Revolution. These dietary changes may also have altered human biology; the spread of dairy farming provided a new and rich source of food, leading to the evolution of the ability to digest lactose in some adults. The types of food consumed, and how they are prepared, have varied widely by time, location, and culture.
In general, humans can survive for up to eight weeks without food, depending on stored body fat. Survival without water is usually limited to three or four days, with a maximum of one week. As of 2020, an estimated 9 million humans die every year from causes directly or indirectly related to starvation. Childhood malnutrition is also common and contributes to the global burden of disease. However, global food distribution is not even, and obesity among some human populations has increased rapidly, leading to health complications and increased mortality in some developed and a few developing countries. Worldwide, over one billion people are obese, while in the United States 35% of people are obese, leading to this being described as an "obesity epidemic." Obesity is caused by consuming more calories than are expended, so excessive weight gain is usually caused by an energy-dense diet.
Biological variation
There is biological variation in the human species, with traits such as blood type, genetic diseases, cranial features, facial features, organ systems, eye color, hair color and texture, height and build, and skin color varying across the globe. The typical height of an adult human is between 1.4 and 1.9 m (4 ft 7 in and 6 ft 3 in), although this varies significantly depending on sex, ethnic origin, and family bloodlines. Body size is partly determined by genes and is also significantly influenced by environmental factors such as diet, exercise, and sleep patterns.
There is evidence that populations have adapted genetically to various external factors. The genes that allow adult humans to digest lactose are present in high frequencies in populations that have long histories of cattle domestication and are more dependent on cow milk. Sickle cell anemia, which may provide increased resistance to malaria, is frequent in populations where malaria is endemic. Populations that have for a very long time inhabited specific climates tend to have developed specific phenotypes that are beneficial for those environments: short stature and stocky build in cold regions, tall and lanky in hot regions, and with high lung capacities or other adaptations at high altitudes. Some populations have evolved highly unique adaptations to very specific environmental conditions, such as those advantageous to ocean-dwelling lifestyles and freediving in the Bajau.
Human hair ranges in color from red to blond to brown to black, which is the most frequent. Hair color depends on the amount of melanin, with concentrations fading with increased age, leading to grey or even white hair. Skin color can range from darkest brown to lightest peach, or even nearly white or colorless in cases of albinism. It tends to vary clinally and generally correlates with the level of ultraviolet radiation in a particular geographic area, with darker skin mostly around the equator. Skin darkening may have evolved as protection against ultraviolet solar radiation. Light skin pigmentation protects against depletion of vitamin D, which requires sunlight to make. Human skin also has a capacity to darken (tan) in response to exposure to ultraviolet radiation.
There is relatively little variation between human geographical populations, and most of the variation that occurs is at the individual level. Much of human variation is continuous, often with no clear points of demarcation. Genetic data shows that no matter how population groups are defined, two people from the same population group are almost as different from each other as two people from any two different population groups. Dark-skinned populations that are found in Africa, Australia, and South Asia are not closely related to each other.
Genetic research has demonstrated that human populations native to the African continent are the most genetically diverse and genetic diversity decreases with migratory distance from Africa, possibly the result of bottlenecks during human migration. These non-African populations acquired new genetic inputs from local admixture with archaic populations and have much greater variation from Neanderthals and Denisovans than is found in Africa, though Neanderthal admixture into African populations may be underestimated. Furthermore, recent studies have found that populations in sub-Saharan Africa, and particularly West Africa, have ancestral genetic variation which predates modern humans and has been lost in most non-African populations. Some of this ancestry is thought to originate from admixture with an unknown archaic hominin that diverged before the split of Neanderthals and modern humans.
Humans are a gonochoric species, meaning they are divided into male and female sexes. The greatest degree of genetic variation exists between males and females. While the nucleotide genetic variation of individuals of the same sex across global populations is no greater than 0.1%–0.5%, the genetic difference between males and females is between 1% and 2%. Males on average are 15% heavier and taller than females. On average, men have about 40–50% more upper-body strength and 20–30% more lower-body strength than women at the same weight, due to higher amounts of muscle and larger muscle fibers. Women generally have a higher body fat percentage than men. Women have lighter skin than men of the same population; this has been explained by a higher need for vitamin D in females during pregnancy and lactation. As there are chromosomal differences between females and males, some X and Y chromosome-related conditions and disorders only affect either men or women. After allowing for body weight and volume, the male voice is usually an octave deeper than the female voice. Women have a longer life span in almost every population around the world. There are intersex conditions in the human population; however, these are rare.
Psychology
The human brain, the focal point of the central nervous system in humans, controls the peripheral nervous system. In addition to controlling "lower", involuntary, or primarily autonomic activities such as respiration and digestion, it is also the locus of "higher" order functioning such as thought, reasoning, and abstraction. These cognitive processes constitute the mind, and, along with their behavioral consequences, are studied in the field of psychology.
Humans have a larger and more developed prefrontal cortex than other primates, the region of the brain associated with higher cognition. This has led humans to proclaim themselves to be more intelligent than any other known species. Objectively defining intelligence is difficult, however, as other animals possess adapted senses and excel in areas where humans cannot.
There are some traits that, although not strictly unique, do set humans apart from other animals. Humans may be the only animals who have episodic memory and who can engage in "mental time travel". Even compared with other social animals, humans have an unusually high degree of flexibility in their facial expressions. Humans are the only animals known to cry emotional tears. Humans are one of the few animals able to self-recognize in mirror tests and there is also debate over to what extent humans are the only animals with a theory of mind.
Sleep and dreaming
Humans are generally diurnal. The average sleep requirement is between seven and nine hours per day for an adult and nine to ten hours per day for a child; elderly people usually sleep for six to seven hours. Having less sleep than this is common among humans, even though sleep deprivation can have negative health effects. A sustained restriction of adult sleep to four hours per day has been shown to correlate with changes in physiology and mental state, including reduced memory, fatigue, aggression, and bodily discomfort.
During sleep humans dream, where they experience sensory images and sounds. Dreaming is stimulated by the pons and mostly occurs during the REM phase of sleep. The length of a dream can vary, from a few seconds up to 30 minutes. Humans have three to five dreams per night, and some may have up to seven. Dreamers are more likely to remember the dream if awakened during the REM phase. The events in dreams are generally outside the control of the dreamer, with the exception of lucid dreaming, where the dreamer is self-aware. Dreams can at times make a creative thought occur or give a sense of inspiration.
Consciousness and thought
Human consciousness, at its simplest, is sentience or awareness of internal or external existence. Despite centuries of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial, being "at once the most familiar and most mysterious aspect of our lives". The only widely agreed notion about the topic is the intuition that it exists. Opinions differ about what exactly needs to be studied and explained as consciousness. Some philosophers divide consciousness into phenomenal consciousness, which is sensory experience itself, and access consciousness, which can be used for reasoning or directly controlling actions. It is sometimes synonymous with 'the mind', and at other times, an aspect of it. Historically it is associated with introspection, private thought, imagination and volition. It now often includes some kind of experience, cognition, feeling or perception. It may be 'awareness', or 'awareness of awareness', or self-awareness. There might be different levels or orders of consciousness, or different kinds of consciousness, or just one kind with different features.
The process of acquiring knowledge and understanding through thought, experience, and the senses is known as cognition. The human brain perceives the external world through the senses, and each individual human is influenced greatly by his or her experiences, leading to subjective views of existence and the passage of time. The nature of thought is central to psychology and related fields. Cognitive psychology studies cognition, the mental processes underlying behavior. Largely focusing on the development of the human mind through the life span, developmental psychology seeks to understand how people come to perceive, understand, and act within the world and how these processes change as they age. This may focus on intellectual, cognitive, neural, social, or moral development. Psychologists have developed intelligence tests and the concept of intelligence quotient in order to assess the relative intelligence of human beings and study its distribution among population.
Motivation and emotion
Human motivation is not yet wholly understood. From a psychological perspective, Maslow's hierarchy of needs is a well-established theory that can be defined as the process of satisfying certain needs in ascending order of complexity. From a more general, philosophical perspective, human motivation can be defined as a commitment to, or withdrawal from, various goals requiring the application of human ability. Furthermore, incentive and preference are both factors, as are any perceived links between incentives and preferences. Volition may also be involved, in which case willpower is also a factor. Ideally, both motivation and volition ensure the selection, striving for, and realization of goals in an optimal manner, a function beginning in childhood and continuing throughout a lifetime in a process known as socialization.
Emotions are biological states associated with the nervous system brought on by neurophysiological changes variously associated with thoughts, feelings, behavioral responses, and a degree of pleasure or displeasure. They are often intertwined with mood, temperament, personality, disposition, creativity, and motivation. Emotion has a significant influence on human behavior and on the ability to learn. Acting on extreme or uncontrolled emotions can lead to social disorder and crime, with studies showing that criminals may have lower emotional intelligence than average.
Emotional experiences perceived as pleasant, such as joy, interest or contentment, contrast with those perceived as unpleasant, like anxiety, sadness, anger, and despair. Happiness, or the state of being happy, is a human emotional condition. The definition of happiness is a common philosophical topic. Some define it as experiencing the feeling of positive emotional affects, while avoiding the negative ones. Others see it as an appraisal of life satisfaction or quality of life. Recent research suggests that being happy might involve experiencing some negative emotions when humans feel they are warranted.
Sexuality and love
For humans, sexuality involves biological, erotic, physical, emotional, social, or spiritual feelings and behaviors. Because it is a broad term, which has varied with historical contexts over time, it lacks a precise definition. The biological and physical aspects of sexuality largely concern the human reproductive functions, including the human sexual response cycle. Sexuality also affects and is affected by cultural, political, legal, philosophical, moral, ethical, and religious aspects of life. Sexual desire, or libido, is a basic mental state present at the beginning of sexual behavior. Studies show that men on average desire sex more than women do and masturbate more often.
Humans can fall anywhere along a continuous scale of sexual orientation, although most humans are heterosexual. While homosexual behavior occurs in some other animals, only humans and domestic sheep have so far been found to exhibit exclusive preference for same-sex relationships. Most evidence supports nonsocial, biological causes of sexual orientation, as cultures that are very tolerant of homosexuality do not have significantly higher rates of it. Research in neuroscience and genetics suggests that other aspects of human sexuality are biologically influenced as well.
Love most commonly refers to a feeling of strong attraction or emotional attachment. It can be impersonal (the love of an object, ideal, or strong political or spiritual connection) or interpersonal (love between humans). When in love, dopamine, norepinephrine, serotonin and other chemicals stimulate the brain's pleasure center, leading to side effects such as increased heart rate, loss of appetite and sleep, and an intense feeling of excitement.
Culture
Humanity's unprecedented set of intellectual skills was a key factor in the species' eventual technological advancement and concomitant domination of the biosphere. Disregarding extinct hominids, humans are the only animals known to teach generalizable information, innately deploy recursive embedding to generate and communicate complex concepts, engage in the "folk physics" required for competent tool design, or cook food in the wild. Teaching and learning preserve the cultural and ethnographic identity of human societies. Other traits and behaviors that are mostly unique to humans include starting fires, phoneme structuring and vocal learning.
Language
While many species communicate, language is unique to humans, a defining feature of humanity, and a cultural universal. Unlike the limited systems of other animals, human language is open: an infinite number of meanings can be produced by combining a limited number of symbols. Human language also has the capacity of displacement, using words to represent things and happenings that are not presently or locally occurring but reside in the shared imagination of interlocutors.
Language differs from other forms of communication in that it is modality independent; the same meanings can be conveyed through different media, audibly in speech, visually by sign language or writing, and through tactile media such as braille. Language is central to the communication between humans, and to the sense of identity that unites nations, cultures and ethnic groups. There are approximately six thousand different languages currently in use, including sign languages, and many thousands more that are extinct.
The arts
Human arts can take many forms including visual, literary, and performing. Visual art can range from paintings and sculptures to film, fashion design, and architecture. Literary arts can include prose, poetry, and dramas. The performing arts generally involve theatre, music, and dance. Humans often combine the different forms (for example, music videos). Other entities that have been described as having artistic qualities include food preparation, video games, and medicine. As well as providing entertainment and transferring knowledge, the arts are also used for political purposes.
Art is a defining characteristic of humans and there is evidence for a relationship between creativity and language. The earliest evidence of art was shell engravings made by Homo erectus 300,000 years before modern humans evolved. Art attributed to H. sapiens existed at least 75,000 years ago, with jewellery and drawings found in caves in South Africa. There are various hypotheses as to why humans have adapted to the arts. These include allowing them to solve problems more effectively, providing a means to control or influence other humans, encouraging cooperation and contribution within a society, or increasing the chance of attracting a potential mate. The use of imagination developed through art, combined with logic, may have given early humans an evolutionary advantage.
Evidence of humans engaging in musical activities predates cave art, and so far music has been practiced by virtually all known human cultures. There exists a wide variety of music genres and ethnic musics, with humans' musical abilities being related to other abilities, including complex social human behaviours. It has been shown that human brains respond to music by becoming synchronized with the rhythm and beat, a process called entrainment. Dance is also a form of human expression found in all cultures and may have evolved as a way to help early humans communicate. Listening to music and observing dance stimulates the orbitofrontal cortex and other pleasure-sensing areas of the brain.
Unlike speaking, reading and writing do not come naturally to humans and must be taught. Still, literature was present before the invention of writing, with 30,000-year-old paintings on walls inside some caves portraying a series of dramatic scenes. One of the oldest surviving works of literature is the Epic of Gilgamesh, first engraved on ancient Babylonian tablets about 4,000 years ago. Beyond simply passing down knowledge, the use and sharing of imaginative fiction through stories might have helped develop humans' capabilities for communication and increased the likelihood of securing a mate. Storytelling may also be used as a way to provide the audience with moral lessons and encourage cooperation.
Tools and technologies
Stone tools were used by proto-humans at least 2.5 million years ago. The use and manufacture of tools has been put forward as the ability that defines humans more than anything else and has historically been seen as an important evolutionary step. The technology became much more sophisticated about 1.8 million years ago, with the controlled use of fire beginning around 1 million years ago. The wheel and wheeled vehicles appeared simultaneously in several regions some time in the fourth millennium BC. The development of more complex tools and technologies allowed land to be cultivated and animals to be domesticated, thus proving essential in the development of agriculture, in what is known as the Neolithic Revolution.
China developed paper, the printing press, gunpowder, the compass and other important inventions. Continued improvements in smelting allowed the forging of copper, bronze, iron and eventually steel, which is used in railways, skyscrapers and many other products. This coincided with the Industrial Revolution, where the invention of automated machines brought major changes to humans' lifestyles. Modern technology has been observed to progress exponentially, with major innovations in the 20th century including: electricity, penicillin, semiconductors, internal combustion engines, the Internet, nitrogen fixing fertilizers, airplanes, computers, automobiles, contraceptive pills, nuclear fission, the green revolution, radio, scientific plant breeding, rockets, air conditioning, television and the assembly line.
Religion and spirituality
Definitions of religion vary; according to one definition, a religion is a belief system concerning the supernatural, sacred or divine, and practices, values, institutions and rituals associated with such belief. Some religions also have a moral code. The evolution and the history of the first religions have become areas of active scientific investigation. Credible evidence of religious behaviour dates to the Middle Paleolithic era (45–200 thousand years ago). It may have evolved to play a role in helping enforce and encourage cooperation between humans.
Religion manifests in diverse forms. Religion can include a belief in life after death, the origin of life, the nature of the universe (religious cosmology) and its ultimate fate (eschatology), and moral or ethical teachings. Views on transcendence and immanence vary substantially; traditions variously espouse monism, deism, pantheism, and theism (including polytheism and monotheism).
Although measuring religiosity is difficult, a majority of humans profess some variety of religious or spiritual belief. In 2015 the plurality were Christian followed by Muslims, Hindus and Buddhists. As of 2015, about 16%, or slightly under 1.2 billion humans, were irreligious, including those with no religious beliefs or no identity with any religion.
Science and philosophy
An aspect unique to humans is their ability to transmit knowledge from one generation to the next and to continually build on this information to develop tools, scientific laws and other advances to pass on further. This accumulated knowledge can be tested to answer questions or make predictions about how the universe functions and has been very successful in advancing human ascendancy.
Aristotle has been described as the first scientist, and preceded the rise of scientific thought through the Hellenistic period. Other early advances in science came from the Han dynasty in China and during the Islamic Golden Age. The scientific revolution, near the end of the Renaissance, led to the emergence of modern science.
A chain of events and influences led to the development of the scientific method, a process of observation and experimentation that is used to differentiate science from pseudoscience. An understanding of mathematics is unique to humans, although other species of animals have some numerical cognition. All of science can be divided into three major branches, the formal sciences (e.g., logic and mathematics), which are concerned with formal systems, the applied sciences (e.g., engineering, medicine), which are focused on practical applications, and the empirical sciences, which are based on empirical observation and are in turn divided into natural sciences (e.g., physics, chemistry, biology) and social sciences (e.g., psychology, economics, sociology).
Philosophy is a field of study where humans seek to understand fundamental truths about themselves and the world in which they live. Philosophical inquiry has been a major feature in the development of humans' intellectual history. It has been described as the "no man's land" between definitive scientific knowledge and dogmatic religious teachings. Major fields of philosophy include metaphysics, epistemology, logic, and axiology (which includes ethics and aesthetics).
Society
Society is the system of organizations and institutions arising from interaction between humans. Humans are highly social and tend to live in large complex social groups. They can be divided into different groups according to their income, wealth, power, reputation and other factors. The structure of social stratification and the degree of social mobility differ, especially between modern and traditional societies. Human groups range from the size of families to nations. The first form of human social organization is thought to have resembled hunter-gatherer band societies.
Gender
Human societies typically exhibit gender identities and gender roles that distinguish between masculine and feminine characteristics and prescribe the range of acceptable behaviours and attitudes for their members based on their sex. The most common categorisation is a gender binary of men and women. Some societies recognize a third gender, or less commonly a fourth or fifth. In some other societies, non-binary is used as an umbrella term for a range of gender identities that are not solely male or female.
Gender roles are often associated with a division of norms, practices, dress, behavior, rights, duties, privileges, status, and power, with men enjoying more rights and privileges than women in most societies, both today and in the past. As a social construct, gender roles are not fixed and vary historically within a society. Challenges to predominant gender norms have recurred in many societies. Little is known about gender roles in the earliest human societies. Early modern humans probably had a range of gender roles similar to that of modern cultures from at least the Upper Paleolithic, while the Neanderthals were less sexually dimorphic and there is evidence that the behavioural difference between males and females was minimal.
Kinship
All human societies organize, recognize and classify types of social relationships based on relations between parents, children and other descendants (consanguinity), and relations through marriage (affinity). There is also a third type applied to godparents or adoptive children (fictive). These culturally defined relationships are referred to as kinship. In many societies, it is one of the most important social organizing principles and plays a role in transmitting status and inheritance. All societies have rules of incest taboo, according to which marriage between certain kinds of kin relations is prohibited, and some also have rules of preferential marriage with certain kin relations.
Pair bonding is a ubiquitous feature of human sexual relationships, whether it is manifested as serial monogamy, polygyny, or polyandry. Genetic evidence indicates that humans were predominantly polygynous for most of their existence as a species, but that this began to shift during the Neolithic, when monogamy started becoming widespread concomitantly with the transition from nomadic to sedentary societies. Anatomical evidence in the form of second-to-fourth digit ratios, a biomarker for prenatal androgen effects, likewise indicates modern humans were polygynous during the Pleistocene.
Ethnicity
A human ethnic group is a social category of people who identify with each other based on shared attributes that distinguish them from other groups. These can be a common set of traditions, ancestry, language, history, society, culture, nation, religion, or social treatment within their residing area. Ethnicity is separate from the concept of race, which is based on physical characteristics, although both are socially constructed. Assigning ethnicity to a certain population is complicated, as even within common ethnic designations there can be a diverse range of subgroups, and the makeup of these ethnic groups can change over time at both the collective and individual level. Also, there is no generally accepted definition of what constitutes an ethnic group. Ethnic groupings can play a powerful role in the social identity and solidarity of ethnopolitical units. This has been closely tied to the rise of the nation state as the predominant form of political organization in the 19th and 20th centuries.
Government and politics
As farming populations gathered in larger and denser communities, interactions between these different groups increased. This led to the development of governance within and between the communities. Humans have evolved the ability to change affiliation with various social groups relatively easily, including previously strong political alliances, if doing so is seen as providing personal advantages. This cognitive flexibility allows individual humans to change their political ideologies, with those with higher flexibility less likely to support authoritarian and nationalistic stances.
Governments create laws and policies that affect the citizens that they govern. There have been many forms of government throughout human history, each having various means of obtaining power and the ability to exert diverse controls on the population. Approximately 47% of humans live in some form of a democracy, 17% in a hybrid regime, and 37% in an authoritarian regime. Many countries belong to international organizations and alliances; the largest of these is the United Nations, with 193 member states.
Trade and economics
Trade, the voluntary exchange of goods and services, is seen as a characteristic that differentiates humans from other animals and has been cited as a practice that gave Homo sapiens a major advantage over other hominids. Evidence suggests early H. sapiens made use of long-distance trade routes to exchange goods and ideas, leading to cultural explosions and providing additional food sources when hunting was sparse, while such trade networks did not exist for the now extinct Neanderthals. Early trade likely involved materials for creating tools, like obsidian. The first truly international trade routes developed around the spice trade during the Roman and medieval periods.
Early human economies were more likely to be based around gift giving than a bartering system. Early money consisted of commodities, the oldest being in the form of cattle and the most widely used being cowrie shells. Money has since evolved into government-issued coins, paper and electronic money. The human study of economics is a social science that looks at how societies distribute scarce resources among different people. There are massive inequalities in the division of wealth among humans; the eight richest humans hold as much wealth as the poorest half of the entire human population.
Conflict
Humans commit violence on other humans at a rate comparable to other primates, but have an increased preference for killing adults, infanticide being more common among other primates. Phylogenetic analysis predicts that 2% of early H. sapiens would be murdered, rising to 12% during the medieval period, before dropping to below 2% in modern times. There is great variation in violence between human populations, with rates of homicide about 0.01% in societies that have legal systems and strong cultural attitudes against violence.
The willingness of humans to kill other members of their species en masse through organized conflict (i.e., war) has long been the subject of debate. One school of thought holds that war evolved as a means to eliminate competitors and has always been an innate human characteristic. Another suggests that war is a relatively recent phenomenon that appeared due to changing social conditions. While not settled, current evidence indicates warlike predispositions only became common about 10,000 years ago, and in many places much more recently than that. War has had a high cost on human life; it is estimated that during the 20th century, between 167 million and 188 million people died as a result of war. War casualty data is less reliable for pre-medieval times, especially global figures. But compared with any period over the past 600 years, the last ~80 years (post-1946) have seen a very significant drop in global military and civilian death rates due to armed conflict.
See also
List of human evolution fossils
Prehistoric Iberia

Prehistory in the Iberian peninsula begins with the arrival of the first representatives of the genus Homo from Africa, dated anywhere from 1.5 million years (Ma) ago to 1.25 Ma ago depending on the dating technique employed, and so conventionally set at 1.3 Ma ago.
The end of Iberian prehistory coincides with the first entrance of the Roman army into the peninsula, in 218 BC, which led to the progressive dissolution of pre-Roman peoples in Roman culture. This end date is also conventional, since pre-Roman writing systems can be traced to as early as the 5th century BC.
Overview
Prehistory in Iberia spans around 60% of the Quaternary, with written history occupying just 0.08%; for the remaining 40%, the peninsula was uninhabited by humans. The Pleistocene, the first epoch of the Quaternary, was characterized by climate oscillations between ice ages and interglacials that produced significant changes in Iberia's orography. The first and longest period in Iberia's prehistory is the Paleolithic, which starts 1.3 Ma ago and ends almost coinciding with the end of the Pleistocene, 11,500 years (11.5 ka) ago. Significant evidence of an extended occupation of Iberia during this period by Homo neanderthalensis has been discovered. The first remains of Homo sapiens date from towards the end of the Paleolithic. For a short time of around five thousand years both species coexisted, until the Neanderthals were finally driven to extinction.
The Holocene followed the Pleistocene with a more homogeneous and humid climate, and the exclusive presence of Homo sapiens. It includes the Mesolithic (c. 11.5 ka ago – 5.6 ka BC), the Neolithic (c. 5.6 – 3.2 ka BC) and the Metal Ages: the Chalcolithic or Copper Age (c. 3.2 – 1.9 ka BC), the Bronze Age (c. 1.9 ka – 750 BC) and the Iron Age (c. 750 – 218 BC). The Mesolithic and the Chalcolithic are transition periods, where characteristics of both the preceding and following ages can be found. The Holocene hosted several progressive transformations: territorial and cultural differentiation among Homo sapiens groups, the birth of new social organizations and economies, the transition from hunting-gathering to agriculture and animal husbandry, and the arrival of new peoples from the Mediterranean Sea and central Europe, with the foundation of colonies.
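The proportions quoted above can be sanity-checked with a few lines of arithmetic. Below is a minimal sketch, assuming the Quaternary began c. 2.58 Ma ago (the current ICS boundary; older definitions used a shorter Quaternary, which pushes the prehistory share up toward the 60% figure):

```python
# Rough check of the period proportions quoted above.
# Assumption: the Quaternary began c. 2,580,000 years ago.
QUATERNARY_YEARS = 2_580_000
FIRST_HUMANS_AGO = 1_300_000      # first Homo in Iberia, years ago
WRITTEN_HISTORY = 218 + 2024      # 218 BC to the present day, in years

prehistory = FIRST_HUMANS_AGO - WRITTEN_HISTORY
uninhabited = QUATERNARY_YEARS - FIRST_HUMANS_AGO

for label, span in [("uninhabited", uninhabited),
                    ("prehistory", prehistory),
                    ("written history", WRITTEN_HISTORY)]:
    print(f"{label:>15}: {span / QUATERNARY_YEARS:7.2%} of the Quaternary")
```

With this boundary, written history comes out near the quoted 0.08%, while prehistory is closer to 50% than 60%; the exact shares depend entirely on which Quaternary start date is adopted.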
There are prehistoric remains scattered throughout the peninsula. Of notable importance is the archaeological site of Atapuerca, in northern Spain, containing a million years of human evolution and declared a World Heritage Site by UNESCO in 2000.
Paleolithic
Lower Paleolithic
The Lower Paleolithic begins in Iberia with the first human habitation 1.3 Ma ago and ends conventionally 128 ka ago, making it the longest period of Iberia's Paleolithic. It is mainly studied through the human fossils and stone tools found at archaeological sites, of which Atapuerca is of significant importance. It contains many animal and Homo antecessor fossils showing signs of processing with stone tools to reach the spinal cord, which constitute the first evidence of cannibalism among Homo.
At Sima de los Huesos archaeologists have found Homo heidelbergensis fossils, dated to 430 ka ago, corresponding to around 30 individuals, with neither evidence of habitation nor of a catastrophic event; the site has thus been hypothesized as the first evidence of Homo burial. DNA analysis from these fossils also suggests a process of continuous hybridization among Homo species throughout this period, until the final arrival of Homo neanderthalensis.
Middle Paleolithic
The Middle Paleolithic (c. 128 – 40 ka ago) is dominated by an extended occupation of Iberia by Homo neanderthalensis or, more popularly, Neanderthals, who had a heavier body, higher lung volume and a bigger brain than Homo sapiens. Gorham's cave (Gibraltar) contains Neanderthal rock art, suggesting they had higher symbolic thought abilities than was previously supposed. This period, like the previous one, is mainly studied through fossils and stone tools, which evolve into Mode 3 or Mousterian. There is no extended usage of bone or antlers for tool fabrication, and very little evidence of wood usage remains because of decomposition.
In contrast with the Lower Paleolithic, when habitation was usually in the open air and caves were used circumstantially (burial, tool fabrication, butchering), throughout this period caves are increasingly used for habitation, with remains of archaic home conditioning. The Châtelperronian culture, mostly found in southern France, is contemporaneous with the period when Homo neanderthalensis and Homo sapiens coexisted in Europe, and thus was at first attributed to Homo sapiens; the discovery of a full Neanderthal skeleton, however, changed its attribution to Homo neanderthalensis. Some academics prefer to call it late Mousterian, and there is a debate on whether to consider it a proper or a transitional industry, since chronologically it belongs to the Middle Paleolithic but it shows characteristics of Upper Paleolithic industries.
Upper Paleolithic
The Upper Paleolithic (c. 40 – 11.5 ka ago) starts with the Aurignacian culture, the work of Homo sapiens, initially found mostly in northern Iberia (current Asturias, Cantabria, Basque Country and Catalonia). It later expands throughout the Iberian peninsula and is followed by the Gravettian. In Cantabria most Gravettian remains are found mixed with Aurignacian technology and the culture is thus considered "intrusive" there, in contrast with the Mediterranean area, where it probably represents a real colonization. The first indications of modern human colonization of the interior and the west of the peninsula are found only in this cultural phase.
Because of the last glacial maximum, western Europe was isolated and developed the Solutrean culture, whose earliest appearances are at Les Mallaetes (Valencia), with a radiocarbon date of 20,890 BP. In northern Iberia there are two markedly different tendencies, in Asturias and in the Vasco-Cantabrian area. Important sites are Altamira and Santimamiñe. The next phase is the Magdalenian, even if in the Mediterranean area the Gravettian influence is still persistent. In Portugal there have been some findings north of Lisbon (Casa da Moura, Lapa do Suão).
Art
Iberia is host to impressive Paleolithic cave and rock art. Altamira cave is the most well-known example of the former, a World Heritage Site since 1985. Côa Valley, in Portugal, and Siega Verde, in Spain, formed along tributaries of the Douro, contain the best preserved rock art, together forming another World Heritage Site since 1998. Artistic manifestation is found most importantly in the northern Cantabrian area, where the earliest manifestations, for example the Caves of Monte Castillo, are as old as Aurignacian times. The practice of this mural art increases in frequency in the Solutrean period, when the first animals are drawn, but it is not until the Magdalenian cultural phase that it becomes truly widespread, being found in almost every cave.
Most of the representations are of animals (bison, horse, deer, bull, reindeer, goat, bear, mammoth, moose) and are painted in ochre and black, but there are exceptions: human-like forms as well as abstract drawings also appear at some sites. In the Mediterranean and interior areas, mural art is not so abundant but has existed since the Solutrean as well. The monumental Côa Valley has petroglyphs dating up to 22,000 years ago, which document continuous human occupation from the end of the Paleolithic Age. Other examples include Chimachias, Los Casares and La Pasiega, and, in general, the caves of Cantabria (in Spain).
Epipaleolithic and Mesolithic
Around 10,000 BC, an interstadial deglaciation called the Allerød Oscillation occurred, weakening the harsh conditions of the last ice age. This climatic change also represents the end of the Upper Palaeolithic period, beginning the Epipaleolithic. Depending on the terminology preferred by any particular source, the Mesolithic begins after the Epipaleolithic, or includes it. If the Epipaleolithic is not included in it, the Mesolithic is a relatively brief period in Iberia.
As the climate became warmer, the late Magdalenian peoples of Iberia modified their technology and culture. The main techno-cultural change is the process of microlithization: the reduction in size of stone and bone tools, also found in other parts of the world. The cave sanctuaries also seem to have been abandoned, and art becomes rarer and mostly done on portable objects, such as pebbles or tools.
It also implies changes in diet, as the megafauna virtually disappears when the steppe becomes woodlands. In this period, hunted animals are of smaller size, typically deer or wild goats, and seafood becomes an important part of the diet where available.
Azilian and Asturian
The first Epipaleolithic culture is the Azilian, also known as microlaminar microlithism in the Mediterranean. This culture is the local evolution of Magdalenian, parallel to other regional derivatives found in Central and Northern Europe. Originally found in the old Magdalenian territory of Vasco-Cantabria and the wider Franco-Cantabrian region, Azilian-style culture eventually expanded to parts of Mediterranean Iberia as well. It reflected a much warmer climate, leading to thick woodlands, and the replacement of large herd animals with smaller and more elusive forest-dwellers.
An archetypical Azilian site in the Iberian peninsula is Zatoya (Navarre), where it is difficult to discern the early Azilian elements from those of late Magdalenian (this transition dated to 11,760 BP). Full Azilian in the same site is dated to 8,150 BP, followed by appearance of geometric elements at a later date, that continue until the arrival of pottery (subneolithic stage).
In the Mediterranean area, virtually this same material culture is often named microlaminar microlithism because it lacks the bone industry typical of the Franco-Cantabrian Azilian. It is found in parts of Catalonia, the Valencian Community, Murcia and Mediterranean Andalusia. It has been dated at Les Mallaetes to 10,370 BP.
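Dates in these sections alternate between calendar years BC and radiocarbon years BP ("before present", conventionally counted back from AD 1950). A minimal helper sketch of the relationship follows; note that uncalibrated radiocarbon ages only approximate true calendar years, so the converted values are rough equivalents, not exact dates:

```python
def bp_to_bc(bp_years: int) -> int:
    """Convert a BP date (counted back from AD 1950) to an
    approximate calendar BC year."""
    return bp_years - 1950

# Dates quoted in the surrounding text.
for bp in (20_890, 11_760, 10_370, 8_150):
    print(f"{bp:>6} BP ~ {bp_to_bc(bp)} BC")
```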
The Asturian culture, located slightly further west, was a successor to the Azilian; its distinctive tool was a pick-axe for prying limpets off rocks.
Geometrical microlithism
In the late phases of the Epipaleolithic a new trend arrives from the north: the geometrical microlithism, directly related to Sauveterrian and Tardenoisian cultures of the Rhine-Danube region.
While in the Franco-Cantabrian region it has a minor impact, not altering the Azilian culture substantially, in Mediterranean Iberia and Portugal its arrival is more noticeable. The Mediterranean geometrical microlithism has two facies:
The Filador facies is directly related to French Sauveterrian and is found in Catalonia, north of the Ebro river.
The Cocina facies is more widespread and, at many sites (Málaga, Spain), shows a strong dependence on fishing and seafood gathering. The Portuguese sites (south of the Tagus, the Muge group) have given dates of c. 7350.
Art
The rock art found at over 700 sites along the eastern side of Iberia is the most advanced and widespread surviving from this period, certainly in Europe, and arguably in the world. It is strikingly different from the Upper Palaeolithic art found along the northern coast, featuring narrative scenes with large numbers of small, sketchily painted human figures, rather than the superbly observed individual animal figures that characterise the earlier period.
When humans appear in the same scene as animals, the human figures run towards them. The most common scenes by far are of hunting, and there are scenes of battle and dancing, and possibly agricultural tasks and the managing of domesticated animals. In some scenes the gathering of honey is shown, most famously at Cuevas de la Araña in Bicorp. Humans are naked from the waist up, but women have skirts and men sometimes skirts or gaiters or trousers of some sort, and headdresses and masks are sometimes seen, which may indicate rank or status.
Neolithic
In the 6th millennium BC, Andalusia experiences the arrival of the first agriculturalists. Their origin is uncertain (though Mediterranean Africa is a serious candidate), but they arrive with already developed crops (cereals and legumes). The presence of domestic animals is unlikely, as only pig and rabbit remains have been found, and these could belong to wild animals. They also consumed large amounts of olives, but it is also uncertain whether this tree was cultivated or merely harvested in its wild form. Their typical artifact is the La Almagra style pottery, quite variegated.
The Andalusian Neolithic also influenced other areas, notably Southern Portugal, where, soon after the arrival of agriculture, the first dolmen tombs begin to be built c. 4800 BC, being possibly the oldest of their kind anywhere.
C. 5700 BC the Cardium pottery Neolithic culture (also known as the Mediterranean Neolithic) arrives in eastern Iberia. While some remains of this culture have been found as far west as Portugal, its distribution is basically Mediterranean (Catalonia, Valencian region, Ebro valley, Balearic islands).
The interior and the northern coastal areas remain largely marginal in this process of spread of agriculture. In most cases it would only arrive in a very late phase or even already in the Chalcolithic age, together with Megalithism.
The site of Perdigões, in Reguengos de Monsaraz, is thought to have been an important location. Twenty small ivory statues dating to 4,500 years BP have been discovered there since 2011. It has constructions dating back about 5,500 years and a necropolis; outside the site there is a cromlech. The Almendres Cromlech site, in Évora, has megaliths from the late 6th to the early 3rd millennium BC. The Anta Grande do Zambujeiro, also in Évora, is dated between 4000 and 3000 BC. The Antequera Dolmens date from after c. 3700 BC. The Dolmen of Cunha Baixa, in Mangualde Municipality, is dated between 3000 and 2500 BC. The Cave of Salemas was used as a burial ground during the Neolithic.
Chalcolithic
The Chalcolithic or Copper Age is the earliest phase of metallurgy. Copper, silver and gold started to be worked then, though these soft metals could hardly replace stone tools for most purposes. The Chalcolithic is also a period of increased social complexity and stratification and, in the case of Iberia, that of the rise of the first civilizations and of extensive exchange networks that would reach to the Baltic and Africa.
The conventional date for the beginning of the Chalcolithic in Iberia is c. 3200 BC. In the following centuries, especially in the south of the peninsula, metal goods, often decorative or ritual, become increasingly common. Additionally there is increased evidence of exchanges with areas far away: amber from the Baltic and ivory and ostrich-egg products from Northern Africa. A notable example in that regard is the Ivory Lady from Tholos de Montelirio.
The Bell Beaker culture was present in Iberia during the Chalcolithic.
Gordon Childe interpreted the presence of its characteristic artefact as the intrusion of "missionaries" expanding from Iberia along the Atlantic coast, spreading knowledge of Mediterranean copper metallurgy.
Stephen Shennan interpreted their artefacts as belonging to a mobile cultural elite imposing itself over the indigenous substrate populations. Similarly, Sangmeister (1972) interpreted the "Beaker folk" (Glockenbecherleute) as small groups of highly mobile traders and artisans. Christian Strahm (1995) used the term "Bell Beaker phenomenon" (Glockenbecher-Phänomen) as a compromise in order to avoid the term "culture".
The Bell Beaker artefacts, at least in their early phase, are not distributed across a contiguous area, as is usual for archaeological cultures, but are found in insular concentrations scattered across Europe. Their presence is not associated with a characteristic type of architecture or of burial customs. However, the Bell Beaker culture does appear to coalesce into a coherent archaeological culture in its later phase.
More recent analyses of the "Beaker phenomenon", published since the 2000s, have persisted in describing its origin as arising from a synthesis of elements, representing "an idea and style uniting different regions with different cultural traditions and background".
Archaeogenetic studies of the 2010s have been able to resolve the "migrationist vs. diffusionist" question to some extent. The study by Olalde et al. (2017) found only "limited genetic affinity" between individuals associated with the Beaker complex in Iberia and in Central Europe, suggesting that migration played a limited role in its early spread from Iberia. However, the same study found that the further dissemination of the mature Beaker complex was very strongly linked to migration. The spread and fluidity of the Beaker culture back and forth between the Rhine and its origin source in the peninsula may have introduced high levels of steppe-related ancestry, resulting in a near-complete transformation of the local gene pool within a few centuries, to the point of replacement of about 90% of the local Mesolithic-Neolithic patrilineal lineages.
The origin of the "Bell Beaker" artefact itself has been traced to the early 3rd millennium BC. The earliest examples of the "maritime" Bell Beaker design have been found at the Tagus estuary in Portugal, radiocarbon dated to c. the 28th century BC. The inspiration for the Maritime Bell Beaker is argued to have been the small and earlier Copoz beakers that have impressed decoration and which are found widely around the Tagus estuary in Portugal. Turek has recorded late Neolithic precursors in northern Africa, arguing that the Maritime style emerged as a result of seaborne contacts between Iberia and Morocco in the first half of the third millennium BC. In only a few centuries of their maritime spread, by 2600 BC they had reached the rich lower Rhine estuary and further upstream into Bohemia and beyond the Elbe, where they merged with the Corded Ware culture, as also on the French coast of Provence and upstream the Rhone into the Alps and Danube.
A significant Chalcolithic archeological site in Portugal is the Castro of Vila Nova de São Pedro. Other settlements from this period include Pedra do Ouro and the Castro of Zambujal. Megaliths were created during this period, having started earlier, during the late 5th, and lasting until the early 2nd millennium BC. The Castelo Velho de Freixo de Numão, in Vila Nova de Foz Côa Municipality, was populated from about 3000 to 1300 BC. The Cerro do Castelo de Santa Justa, in Alcoutim, is dated to the 3rd millennium BC, between 2400 and 1900 BC.
It is also the period of the great expansion of megalithism, with its associated collective burial practices. In the early Chalcolithic period this cultural phenomenon, perhaps of religious undertones, expands along the Atlantic regions and also through the south of the peninsula (it is additionally found in virtually all European Atlantic regions). In contrast, most of the interior and the Mediterranean regions remain refractory to this phenomenon.
Another phenomenon found in the early Chalcolithic is the development of new types of funerary monuments: tholoi and artificial caves. These are only found in the more developed areas: southern Iberia, from the Tagus estuary to Almería, and southeastern France.
Eventually, c. 2600 BC, urban communities began to appear, again especially in the south. The most important ones are Los Millares in southeastern Spain and Zambujal (belonging to the Vila Nova de São Pedro culture) in Portuguese Estremadura, which can well be called civilizations, even if they lack the literary component.
It is very unclear whether any cultural influence originating in the Eastern Mediterranean (Cyprus?) could have sparked these civilizations. On one side, the tholos does have a precedent in that area (even if not yet used as a tomb), but on the other there is no material evidence of any exchange between the Eastern and Western Mediterranean, in contrast with the abundance of goods imported from Northern Europe and Africa.
Since c. 2150 BC, the Bell Beaker culture intrudes into Chalcolithic Iberia. After the early Corded style beaker, of quite clear Central European origin, the peninsula begins producing its own types of Bell Beaker pottery. Most important is the Maritime or International style that, associated especially with Megalithism, is for some centuries abundant throughout the peninsula and southern France.
Since c. 1900 BC, the Bell Beaker phenomenon in Iberia shows a regionalization, with different styles being produced in the various regions: Palmela type in Portugal, Continental type in the plateau and Almerian type in Los Millares, among others.
Like in other parts of Europe, the Bell Beaker phenomenon (speculated to be of trading or maybe religious nature) does not significantly alter the cultures it inserts itself in. Instead the cultural contexts that existed previously continue basically unchanged by its presence.
Bronze Age
Early Bronze
The center of Bronze Age technology is in the southeast since c. 1800 BC. There the civilization of Los Millares was followed by that of El Argar, initially with no other discontinuity than the displacement of the main urban center some kilometers to the north, the gradual appearance of true bronze and arsenical bronze tools and some greater geographical extension. The Argarian people lived in rather large fortified towns or cities.
From this center, bronze technology spread to other areas. Most notable are:
South-Western Iberian Bronze: in southern Portugal and SW Spain. These poorly defined archaeological horizons show bronze daggers and an expansive trend northward.
Cogotas I culture (Cogotas II is Iron Age Celtic): the pastoral peoples of the plateau become culturally unified for the first time. Their typical artifact is a rough trunco-conical pottery.
Some areas, like the civilization of Vila Nova, seem to have remained apart from the spread of bronze metallurgy, remaining technically in the Chalcolithic period for centuries.
Middle Bronze
Basically a continuation of the previous period. The most noticeable change happens in the El Argar civilization, which adopts the Aegean custom of burial in pithoi. This phase is known as El Argar B, beginning c. 1500 BC.
The Northwest (Galicia and northern Portugal), a region that held some of the largest reserves of tin (needed to make true bronze) in Western Eurasia, became a focus for mining, incorporating bronze technology. Their typical artifacts are bronze axes (Group of Montelavar).
The semi-desert region of La Mancha shows its first signs of colonization with the fortified scheme of the Motillas (hillforts). This group is clearly related to the Bronze of Levante, showing the same material culture.
Late Bronze
C. 1300 BC several major changes happen in Iberia, among them:
The Chalcolithic culture of Vila Nova vanishes, possibly in direct relation to the silting of the canal connecting the main city Zambujal with the sea. It is replaced by a non-urban culture, whose main artifact is an externally burnished pottery.
El Argar also disappears as such: what had been a very homogeneous culture, a centralized state for some, becomes an array of many post-Argaric fortified cities.
The Motillas are abandoned.
Bronze of Levante develops in the Valencian Community.
The proto-Celtic Urnfield culture appears in the North-East, conquering all Catalonia and some neighbouring areas.
The Lower Guadalquivir valley shows its first clearly differentiated culture, defined by internally burnished pottery. This group might have some relation with Tartessos.
Western Iberian Bronze cultures show some degree of interaction, not just among them but also with other Atlantic cultures in Britain, France and elsewhere. This has been called the Atlantic Bronze complex.
Iron Age
Iron Age Iberia has two focuses: the Hallstatt-related Urnfields of the North-East and the Phoenician colonies of the South.
During the Iron Age, considered the protohistory of the territory, Celts arrived in several waves, possibly starting before 600 BC.
Southwest Paleohispanic script, or Tartessian, seen in Algarve and Lower Alentejo from about the late 8th to the 5th century BC, is possibly the oldest script in Western Europe. It could have come from the Eastern Mediterranean, perhaps from Anatolia or Greece.
Early Iron Age cultures
Tempered steel tools were already in use on the Iberian peninsula in the late 8th century BC.
Since the late 8th century BC, the Urnfield culture of North-East Iberia began to develop iron metallurgy, and eventually elements of Hallstatt culture. The earliest elements of this culture were found along the lower Ebro river, then gradually expanded upstream to La Rioja and in a hybrid local form to Alava. There was also expansion southward into Castelló, with less marked influences reaching further south. Some offshoots have been detected along the Iberian Mountains, possibly a prelude to the formation of the Celtiberi.
In this period, the social differentiation became more visible with evidence of local chiefdoms and a horse-riding elite. These transformations may represent the arrival of a new wave of cultures from central Europe.
From these outposts in the Upper Ebro and the Iberian mountains, Celtic culture expanded into the plateau and the Atlantic coast. Several groups can be described:
The Bernorio-Miraveche group (northern Burgos and Palencia provinces), that would influence the peoples of the northern fringe.
The northwest Castro culture, in Galicia and northern Portugal, a Celtic culture with peculiarities, due to persistence of aspects of an earlier Atlantic Bronze Age culture.
The Duero group, possibly the precursor of the Celtic Vaccei.
The Cogotas II culture, likely precursor of the Celtic or Celtiberian Vettones (or a pre-Celtic culture with substantial Celtic influences), a markedly cattle-herder culture that gradually expanded southward into Extremadura.
The Lusitanian culture, the precursor of the Lusitani tribe, in central Portugal and Extremadura in western Spain. Generally not considered Celtic, since Lusitanian does not meet some of the accepted definitions of a Celtic language. Its relationship with the surrounding Celtic culture is unclear. Some believe it was essentially a pre-Celtic Iberian culture with substantial Celtic influences, while others argue that it was an essentially Celtic culture with strong indigenous pre-Celtic influences. There have been arguments for classifying its language as Italic, a form of archaic Celtic, or proto-Celtic.
All these Indo-European groups have some common elements, like combed pottery since the 6th century and uniform weaponry.
After c. 600 BC, the Urnfields of the North-East were replaced by the Iberian culture, a process that was not completed until the 4th century BC. This physical separation from their continental relatives would mean that the Celts of the Iberian peninsula never received the cultural influences of the La Tène culture, including Druidism.
Phoenician colonies and influence
The Phoenicians of the Levant, Greeks of Europe, and Carthaginians of Africa all colonized parts of Iberia to facilitate trade. In the 10th century BC, the first contacts between Phoenicians and Iberia (along the Mediterranean coast) were made. This century also saw the emergence of towns and cities in the southern littoral areas of eastern Iberia.
The Phoenicians founded the colony of Gadir (now Cádiz) near Tartessos. The foundation of Cádiz, the oldest continuously inhabited city in western Europe, is traditionally dated to 1104 BC, though, as of 2004, no archaeological discoveries date back further than the 9th century BC. The Phoenicians continued to use Cádiz as a trading post for several centuries, leaving a variety of artifacts, most notably a pair of sarcophagi from around the 4th or 3rd century BC. Contrary to myth, there is no record of Phoenician colonies west of the Algarve (namely Tavira), though there might have been some voyages of discovery. Phoenician influence in what is now Portugal was essentially through cultural and commercial exchange with Tartessos.
In the 9th century BC, the Phoenicians, from the city-state of Tyre, founded the colony of Malaka (now Málaga) and Carthage (in North Africa). During this century, Phoenicians also had great influence on Iberia with the introduction of the use of iron, of the potter's wheel, and of the production of olive oil and wine. They were also responsible for the first forms of Iberian writing, had great religious influence and accelerated urban development. However, there is no real evidence to support the myth of a Phoenician foundation of the city of Lisbon as far back as 1300 BC, under the name Alis Ubbo ("Safe Harbour"), even if in this period there were organized settlements in Olissipona (modern Lisbon, in Portuguese Estremadura) with Mediterranean influences.
There was strong Phoenician influence and settlement in the city of Balsa (modern Tavira, Algarve) in the 8th century BC. Phoenician-influenced Tavira was destroyed by violence in the 6th century BC. With the decline of Phoenician colonization of the Mediterranean coast of Iberia in the 6th century BC, many of the colonies were deserted. The 6th century BC also saw the rise of the colonial might of Carthage, which slowly replaced the Phoenicians in their former areas of dominion.
Greek colonies
The Greek colony at what is now Marseille began trading with the Iberians on the eastern coast around the 8th century BC. The Greeks finally founded their own colony at Ampurias, on the eastern Mediterranean shore of the peninsula (modern Catalonia), during the 6th century BC, beginning their settlement of the Iberian peninsula. There are no Greek colonies west of the Strait of Gibraltar, only voyages of discovery. There is no evidence to support the myth of an ancient Greek founding of Olissipo (modern Lisbon) by Odysseus.
Tartessian culture
The name Tartessian, when applied in archaeology and linguistics, does not necessarily correlate with the semi-mythical city of Tartessos but only roughly with the area where it is typically assumed it should have been.
The Tartessian culture of southern Iberia is actually the local culture as modified by the increasing influence of eastern Mediterranean elements, especially Phoenician. Its core area is Western Andalusia, but it soon extends to Eastern Andalusia, Extremadura and the lands of Murcia and Valencia, where a Tartessian complex, rooted in the local Bronze cultures, is found in the last stages of the Bronze Age (9th–8th centuries BC), before Phoenician influence can be seen clearly.
The full Tartessian culture, beginning c. 720 BC, also extends to southern Portugal, where it is eventually replaced by the Lusitanian culture. One of the most significant elements of this culture is the introduction of the potter's wheel, which, along with other related technical developments, causes a major improvement in the quality of pottery. There are other major advances in craftsmanship, affecting jewelry, weaving and architecture. This latter aspect is especially important, as the traditional circular huts were then gradually replaced by well-finished rectangular buildings. It also allowed for the construction of the tower-like burial monuments that are so typical of this culture.
Agriculture also seems to have experienced major advances with the introduction of steel tools and, presumably, of the yoke and animal traction for the plow. In this period there is a noticeable increase in cattle, accompanied by some decrease in sheep and goats.
Another noticeable element is the major increase in economic specialization and social stratification. This is very noticeable in burials: some display great wealth (chariots, gold, ivory), while the vast majority are much more modest. There is much diversity in burial rituals in this period, but the elites seem to converge on one single style: a chambered mound. Some of the most affluent burials are generally attributed to local monarchs.
One of the developments of this period is writing, a skill which was probably acquired through contact with the Phoenicians. John T. Koch controversially claimed to have deciphered the extant Tartessian inscriptions and to have tentatively identified the language as an earlier form of the Celtic languages now spoken in the British Isles and Brittany in the book 'Celtic from the West', published in 2010. However, the linguistic mainstream continues to treat Tartessian as an unclassified, possibly pre-Indo-European language, and Koch's decipherment of the Tartessian script and his theory for the evolution of Celtic has been strongly criticized.
Iberian culture
In the Iberian culture people were organized in chiefdoms and states. Three phases can be identified: the Ancient, the Middle and the Late Iberian period.
With the arrival of Greek influence, not limited to their few colonies, the Tartessian culture begins to transform itself, especially in the South East. This late period is known as the Iberian culture, that in Western Andalusia and the non-Celtic areas of Extremadura is called Ibero-Turdetanian because of its stronger links with the Tartessian substrate.
Greek influence is visible in the gradual change in the style of their monuments, which increasingly approach the models arriving from the Greek world. Thus the obelisk-like funerary monuments of the previous period now adopt a column-like form, totally in line with Greek architecture.
By the mid-5th century BC, aristocratic power had increased, resulting in the abandonment and transformation of the orientalizing model. The oppidum appeared and became the socio-economic model of the aristocratic class. Commerce was also one of the principal sources of aristocratic control and power. In the southeast, between the end of the 5th and the end of the 4th century BC, a highly hierarchical aristocratic society appeared. There were different forms of political control; power seems to have been in the hands of kings or reguli.
Iberian funerary customs are dominated by cremation necropolises, partly due to the persistent influence of the Urnfield culture, but they also include burial customs imported from the Greek cultural area (rectangular mudbrick mounds).
Urbanism was important in the Iberian cultural area, especially in the south, where Roman accounts mention hundreds of oppida (fortified towns). In these towns (some quite large, some mere fortified villages) the houses were typically arranged in contiguous blocks, in what seems to be another Urnfield cultural influx.
The Iberian script evolved from the Tartessian one with Greek influences that are noticeable in the transformation of some characters. In a few cases a variant of Greek alphabet (Ibero-Ionian script) was used to write Iberian as well.
The transformation from Tartessian to Iberian culture was not sudden but gradual, and was more marked in the East, where it begins in the 6th century BC, than in the south-west, where it is only noticeable from the 5th century BC and is much more tenuous. A special case is the northeast, where the Urnfield culture was Iberized while keeping some elements from the Indo-European substrate.
Post-Tartessos Iron Age
Also in the 6th century BC, after Tartessos fell, there was a cultural shift in southwest Iberia (southern Portugal and nearby parts of Andalusia), with a strong Mediterranean character that prolonged and modified Tartessian culture. This occurred mainly in the Low Alentejo and Algarve, but had littoral extensions up to the Sado mouth (namely the important city of Bevipo, modern Alcácer do Sal). The first form of writing in western Iberia (south of Portugal), the Southwest Paleohispanic script (still undeciphered), dated to the 6th century BC, denotes strong Tartessian influence in its use of a modified Phoenician alphabet. In these writings the word "Conii" (similar to Cunetes or Cynetes, the people of the Algarve) appears frequently.
In the 4th century BC, the Celtici appear, a late expansion of Celtic culture into the southwest (southern Extremadura, Alentejo and northern Algarve). The Turduli and Turdetani, probably descendants of the Tartessians, though Celticized, became established in the area of the Guadiana river, in southern Portugal. A series of cities in Algarve, such as Balsa (Tavira), Baesuris (Castro Marim), Ossonoba (Faro) and Cilpes (Silves), became inhabited by the Cynetes.
Arrival of Romans and Punic Wars
In the 4th century BC, Rome began to rise as a Mediterranean power rivaling North African-based Carthage. After its defeat by the Romans in the First Punic War (264–241 BC), Carthage began to extend its power into the interior of Iberia from its southeastern coastal settlements, but this empire was to be short-lived. In 218 BC the Second Punic War began, and the Carthaginian general Hannibal marched his armies, which included Iberians, from Iberia across the Pyrenees and the Alps to attack the Romans in Italy. Starting in the northeast, Rome began its conquest of the Iberian Peninsula.
See also
Pre-Roman peoples of the Iberian Peninsula
Timeline of Portuguese history
Timeline of Spanish history
Prehistory of the Valencian Community
References
Further reading
Mattoso, José (dir.), História de Portugal. Primeiro Volume: Antes de Portugal, Lisboa, Círculo de Leitores, 1992. (in Portuguese)
External links
Detailed map of the pre-Roman peoples of Iberia in the Iron Age (National Geographic Institute of Spain, in Spanish)
Atapuerca Foundation
Iberia
Archean
The Archean Eon (also spelled Archaean or Archæan), in older sources sometimes called the Archaeozoic, is the second of the four geologic eons of Earth's history, preceded by the Hadean Eon and followed by the Proterozoic. The Archean represents the time period from 4,031 to 2,500 million years ago (Ma). The Late Heavy Bombardment is hypothesized to overlap with the beginning of the Archean. The Huronian glaciation occurred at the end of the eon.
The Earth during the Archean was mostly a water world: there was continental crust, but much of it was under an ocean deeper than today's oceans. Except for some rare relict crystals, today's oldest continental crust dates back to the Archean. Much of the geological detail of the Archean has been destroyed by subsequent activity. The Earth's atmosphere was also vastly different in composition from today's: the prebiotic atmosphere was a reducing atmosphere rich in methane and lacking free oxygen.
The earliest known life, mostly represented by shallow-water microbial mats called stromatolites, started in the Archean and remained simple prokaryotes (archaea and bacteria) throughout the eon. The earliest photosynthetic processes, especially those by early cyanobacteria, appeared in the mid/late Archean and led to a permanent chemical change in the ocean and the atmosphere after the Archean.
Etymology and changes in classification
The word Archean is derived from the Greek word arkhē (ἀρχή), meaning 'beginning, origin'. The Pre-Cambrian had been believed to be without life (azoic); however, fossils were found in deposits that were judged to belong to the Azoic age. Before the Hadean Eon was recognized, the Archean spanned Earth's early history from its formation about 4,540 million years ago until 2,500 million years ago.
Instead of being based on stratigraphy, the beginning and end of the Archean Eon are defined chronometrically. The eon's lower boundary or starting point of 4,031±3 million years ago is officially recognized by the International Commission on Stratigraphy, which is the age of the oldest known intact rock formations on Earth. Evidence of rocks from the preceding Hadean Eon is therefore restricted by definition to non-rock and non-terrestrial sources such as individual mineral grains and lunar samples.
Geology
When the Archean began, the Earth's heat flow was nearly three times as high as it is today, and it was still twice the current level at the transition from the Archean to the Proterozoic (2,500 Ma). The extra heat was partly remnant heat from planetary accretion and from the formation of the metallic core, and partly arose from the decay of radioactive elements. As a result, the Earth's mantle was significantly hotter than today.
Although a few mineral grains are known to be Hadean, the oldest rock formations exposed on the surface of the Earth are Archean. Archean rocks are found in Greenland, Siberia, the Canadian Shield, Montana, Wyoming (exposed parts of the Wyoming Craton), Minnesota (Minnesota River Valley), the Baltic Shield, the Rhodope Massif, Scotland, India, Brazil, Western Australia, and southern Africa. Granitic rocks predominate throughout the crystalline remnants of the surviving Archean crust. These include great melt sheets and voluminous plutonic masses of granite, diorite, layered intrusions, anorthosites and monzonites known as sanukitoids. Archean rocks are often heavily metamorphosed deep-water sediments, such as graywackes, mudstones, volcanic sediments, and banded iron formations. Volcanic activity was considerably higher than today, with numerous lava eruptions, including unusual types such as komatiite. Carbonate rocks are rare, indicating that the oceans were more acidic, due to dissolved carbon dioxide, than during the Proterozoic. Greenstone belts are typical Archean formations, consisting of alternating units of metamorphosed mafic igneous and sedimentary rocks, including Archean felsic volcanic rocks. The metamorphosed igneous rocks were derived from volcanic island arcs, while the metamorphosed sediments represent deep-sea sediments eroded from the neighboring island arcs and deposited in a forearc basin. Greenstone belts, which include both types of metamorphosed rock, represent sutures between the protocontinents.
Plate tectonics likely started vigorously in the Hadean, but slowed down in the Archean. The slowing of plate tectonics was probably due to an increase in the viscosity of the mantle due to outgassing of its water. Plate tectonics likely produced large amounts of continental crust, but the deep oceans of the Archean probably covered the continents entirely. Only at the end of the Archean did the continents likely emerge from the ocean. The emergence of continents towards the end of the Archaean initiated continental weathering that left its mark on the oxygen isotope record by enriching seawater with isotopically light oxygen.
Due to recycling and metamorphosis of the Archean crust, there is a lack of extensive geological evidence for specific continents. One hypothesis is that rocks that are now in India, western Australia, and southern Africa formed a continent called Ur as of 3,100 Ma. Another hypothesis, which conflicts with the first, is that rocks from western Australia and southern Africa were assembled in a continent called Vaalbara as far back as 3,600 Ma. Archean rock makes up only about 8% of Earth's present-day continental crust; the rest of the Archean continents have been recycled.
By the Neoarchean, plate tectonic activity may have been similar to that of the modern Earth, although there was a significantly greater occurrence of slab detachment resulting from a hotter mantle, rheologically weaker plates, and increased tensile stresses on subducting plates due to their crustal material metamorphosing from basalt into eclogite as they sank. There are well-preserved sedimentary basins, and evidence of volcanic arcs, intracontinental rifts, continent-continent collisions and widespread globe-spanning orogenic events suggesting the assembly and destruction of one and perhaps several supercontinents. Evidence from banded iron formations, chert beds, chemical sediments and pillow basalts demonstrates that liquid water was prevalent and deep oceanic basins already existed.
Asteroid impacts were frequent in the early Archean. Evidence from spherule layers suggests that impacts continued into the later Archean, at an average rate of about one impactor the size of the Chicxulub impactor (roughly 10 km in diameter) every 15 million years. These impacts would have been an important oxygen sink and would have caused drastic fluctuations of atmospheric oxygen levels.
Environment
The Archean atmosphere is thought to have almost completely lacked free oxygen; oxygen levels were less than 0.001% of their present atmospheric level, with some analyses suggesting they were as low as 0.00001% of modern levels. However, transient episodes of heightened oxygen concentrations are known from this eon around 2,980–2,960 Ma, 2,700 Ma, and 2,501 Ma. The pulses of increased oxygenation at 2,700 and 2,501 Ma have both been considered by some as potential start points of the Great Oxygenation Event, which most scholars consider to have begun in the Palaeoproterozoic . Furthermore, oases of relatively high oxygen levels existed in some nearshore shallow marine settings by the Mesoarchean. The ocean was broadly reducing and lacked any persistent redoxcline, a water layer between oxygenated and anoxic layers with a strong redox gradient, which would become a feature in later, more oxic oceans. Despite the lack of free oxygen, the rate of organic carbon burial appears to have been roughly the same as in the present. Due to extremely low oxygen levels, sulphate was rare in the Archean ocean, and sulphides were produced primarily through reduction of organically sourced sulphite or through mineralisation of compounds containing reduced sulphur. The Archean ocean was enriched in heavier oxygen isotopes relative to the modern ocean, though δ18O values decreased to levels comparable to those of modern oceans over the course of the later part of the eon as a result of increased continental weathering.
Astronomers think that the Sun had about 75–80 percent of its present luminosity, yet temperatures on Earth appear to have been near modern levels only 500 million years after Earth's formation (the faint young Sun paradox). The presence of liquid water is evidenced by certain highly deformed gneisses produced by metamorphism of sedimentary protoliths. The moderate temperatures may reflect the presence of greater amounts of greenhouse gases than later in the Earth's history. Extensive abiotic denitrification took place on the Archean Earth, pumping the greenhouse gas nitrous oxide into the atmosphere. Alternatively, Earth's albedo may have been lower at the time, due to less land area and cloud cover.
Early life
The processes that gave rise to life on Earth are not completely understood, but there is substantial evidence that life came into existence either near the end of the Hadean Eon or early in the Archean Eon.
The earliest evidence for life on Earth is graphite of biogenic origin found in 3.7 billion–year-old metasedimentary rocks discovered in Western Greenland.
The earliest identifiable fossils consist of stromatolites, which are microbial mats formed in shallow water by cyanobacteria. The earliest stromatolites are found in 3.48 billion-year-old sandstone discovered in Western Australia. Stromatolites are found throughout the Archean and become common late in the Archean. Cyanobacteria were instrumental in creating free oxygen in the atmosphere.
Further evidence for early life is found in 3.47 billion-year-old baryte, in the Warrawoona Group of Western Australia. This mineral shows sulfur fractionation of as much as 21.1%, which is evidence of sulfate-reducing bacteria that metabolize sulfur-32 more readily than sulfur-34.
Evidence of life in the Late Hadean is more controversial. In 2015, biogenic carbon was detected in zircons dated to 4.1 billion years ago, but this evidence is preliminary and needs validation.
Earth was very hostile to life before 4,300 to 4,200 Ma, leading to the conclusion that life as we know it would have been challenged by the environmental conditions that preceded the Archean Eon. While life could have arisen earlier, the conditions necessary to sustain it could not have occurred until the Archean.
Life in the Archean was limited to simple single-celled organisms (lacking nuclei), called prokaryotes. In addition to the domain Bacteria, microfossils of the domain Archaea have also been identified. There are no known eukaryotic fossils from the earliest Archean, though they might have evolved during the Archean without leaving any. Fossil steranes, indicative of eukaryotes, have been reported from Archean strata but were shown to derive from contamination with younger organic matter. No fossil evidence has been discovered for ultramicroscopic intracellular replicators such as viruses.
Fossilized microbes from terrestrial microbial mats show that life was already established on land 3.22 billion years ago.
See also
References
External links
Precambrian geochronology
History of Western fashion
The following is a chronological list of articles covering the history of Western fashion—the story of the changing fashions in clothing in countries under influence of the Western world—from the 5th century to the present. The series focuses primarily on the history of fashion in Western European countries and countries in the core Anglosphere.
History of fashion by time
400–1100 in fashion
1100–1200 in fashion
1200–1300 in fashion
1300–1400 in fashion
1400–1500 in fashion
1500–1550 in fashion
1550–1600 in fashion
1600–1650 in fashion
1650–1700 in fashion
1700–1750 in fashion
1750–1775 in fashion
1775–1795 in fashion
1795–1820 in fashion
1820s in fashion
1830s in fashion
1840s in fashion
1850s in fashion
1860s in fashion
1870s in fashion
1880s in fashion
1890s in fashion
1900s in fashion
1910s in fashion
1920s in fashion
1930–1945 in fashion
1945–1960 in fashion
1960s in fashion
1970s in fashion
1980s in fashion
1990s in fashion
2000s in fashion
2010s in fashion
2020s in fashion
See also
Medieval dress
Byzantine dress
Early medieval European dress
English medieval clothing
Anglo-Saxon dress
Related topics
Button
History of clothing
History of fashion design
Fashion
Clothing
Clothing terminology
Costume
Haute couture
Hemline
Needlework
Neckline
Sewing
Tailor
Suit
Trim (sewing)
Victorian fashion
Waistline (clothing)
Western dress codes
References
Olduvai theory
The Olduvai theory states that modern industrial civilization will have a maximum duration of about one hundred years, counted from 1930. From 2030 onwards, humankind would gradually return to lower levels of civilization comparable to those previously experienced, culminating in about a thousand years (around 3000 AD) in a hunting-based culture such as existed on Earth three million years ago, when the Oldowan industry developed; hence the name of this theory, put forward by Richard C. Duncan on the basis of his experience in handling energy sources and his love of archaeology.
The theory was originally proposed in 1989 under the name "pulse-transient theory". Its current name, inspired by the famous archaeological site, was adopted in 1996, although the theory does not rely in any way on data collected at that site. Richard C. Duncan has published several versions since his first paper, with different parameters and predictions, which has been a source of criticism and controversy.
In 2007, Duncan defined five postulates based on the observation of data on:
The world energy production per capita.
Earth carrying capacity.
The return to the use of coal as a primary source, and peak oil production.
Migratory movements.
The stages of energy utilization in the United States.
In 2009, he published another update restating the postulate on world per capita energy consumption in terms of the OECD countries, whereas previously he had compared only with the United States, which downplayed the role of emerging economies.
Different people, such as Pedro A. Prieto, based on this and other theories of catastrophic collapse or die-off, have formulated probable scenarios with various dates and social events. On the other hand, there is a group of people, such as Richard Heinberg or Jared Diamond, who also believe in social collapse, but still visualize the possibility of more benevolent scenarios where degrowth can occur with continued welfare.
The theory has been criticized for the way it frames migratory movements and for the ideological orientation of the publishing house that published its articles, the Social Contract Press, an advocate of anti-immigration measures and birth control. There are also substantial criticisms of each of its argumentative bases from contrary ideological positions, such as the Cornucopians, advocates of a natural resource-based economy, environmentalists, and various national governments, which find no consistent basis for its claims.
History
Richard C. Duncan first proposed the Olduvai theory in 1989 under the title "The pulse-transient theory of industrial civilization." The theory was supplemented in 1993 with the article "The life-expectancy of industrial civilization: The decline to global equilibrium."
In June 1996, Duncan presented a paper titled "The Olduvai Theory: Falling Towards a post-industrial stone-age Era", adopting the term "Olduvai theory" in place of "pulse-transient theory" used in earlier work. Duncan published a more updated version of his theory under the title "The Peak of World Oil Production and the Road to the Olduvai Gorge" at the 2000 Symposium Summit of the Geological Society of America on November 13, 2000. In 2005, Duncan extended the data set within his theory to 2003 in the article "The Olduvai Theory: Energy, Population, and Industrial Civilization."
Description
The Olduvai theory is a model based mainly on the peak oil theory and on the per capita energy yield of oil. In the face of foreseeable depletion, it holds that the rate of energy consumption and world population growth cannot continue as they did in the 20th century.
Put differently, the theory is defined by the rise and fall of the material quality of life (MQOL), the ratio of the production, use and consumption of energy sources (E) to the world population (P): MQOL = E/P. From 1954 to 1979 that ratio grew by about 2.8% annually; from then until 2000 it increased erratically by 0.2% per year. From 2000 to 2007 it again grew at an exponential rate due to the development of emerging economies.
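As an illustration, the ratio and its growth rates can be checked with a short Python sketch. The energy and population series below are invented round numbers, chosen only so that the output roughly reproduces the growth rates quoted above; they are not Duncan's source data.

```python
# Illustrative sketch of the material quality of life ratio, MQOL = E / P.
# The series below are invented round numbers for demonstration only.
energy = {1954: 60.0, 1979: 196.0, 2000: 283.0}        # world energy production, arbitrary units
population = {1954: 2.7e9, 1979: 4.4e9, 2000: 6.1e9}   # world population

def mqol(year):
    """Material quality of life: energy production per capita."""
    return energy[year] / population[year]

def annual_growth(y0, y1):
    """Compound annual growth rate of MQOL between two years."""
    return (mqol(y1) / mqol(y0)) ** (1 / (y1 - y0)) - 1

print(f"MQOL growth 1954-1979: {annual_growth(1954, 1979):+.1%} per year")  # ~+2.8%
print(f"MQOL growth 1979-2000: {annual_growth(1979, 2000):+.1%} per year")  # ~+0.2%
```

Note that MQOL can stagnate or fall even while total energy production rises, whenever population grows faster than energy output; this is the arithmetic behind the theory's emphasis on per capita rather than total figures.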
In works before 2000, Duncan considered the 1979 peak of per capita energy consumption to be the peak of civilization. Owing to the growth of emerging economies since 2000, he now considers 2010 the likely date of peak energy per capita. Despite that adjustment, he continues to claim that in 2030 the rate of energy production per capita will be similar to that of 1930, and he considers that date the end of the current civilization.
The theory argues that the first reliable signs of collapse are likely to be a series of widespread blackouts in the developed world. With the loss of electrical power and fossil fuels, there would be a transition from today's civilization to a situation close to that of the pre-industrial era. In the events following that collapse, the technological level is expected to decline from levels resembling the Dark Ages to those of the Stone Age within approximately three thousand years.
Duncan takes as a basis for the formulation of his theory data consisting of the following facts:
Data obtained on world energy production per capita.
The development of population from 1850 to 2005.
The carrying capacity of the Earth in the absence of oil.
Energy utilization stages and their level of growth in the United States anticipate global ones, due to US dominance.
Estimation of the year 2007 as the time of peak oil.
Migratory movements or attractiveness principle.
According to Duncan, the theory has five postulates:
The exponential growth of world energy production ended in 1970.
The intervals of growth, stagnation, and final decline of energy production per capita in the United States anticipate the intervals of energy production per capita in the rest of the world. In such intervals, there is a shift from oil to coal as the primary energy source.
The final decline of industrial civilization will begin around 2008-2012.
Partial and total blackouts will be reliable indicators of terminal or final decline.
The world population will decline in line with world energy production per capita.
Bases for the formulation of the theory
Carrying capacity limit and demographic explosion
He stipulates that the real long-run carrying capacity of the Earth without oil is between 500 million and 2 billion people, a limit that has been exceeded threefold thanks to an artificial welfare bubble built on cheap oil. He argues that since the homeostatic balance of the Earth is at most around 2 billion people, as oil runs out at least 4 billion people will not be able to be supported by the system, resulting in mass mortality.
Prior to 1800 the world population doubled roughly every 500 to 1,000 years, and by that date it was just under 1 billion. With the first industrial revolution and colonialism, the population of the Western world began to double in just over 100 years, with the rest of the world following soon after and reaching 1.55 billion inhabitants by 1900. With the second industrial revolution the world population began to double in less than 100 years, and with maximum oil extraction and the digital revolution it doubled in about 50 years, from 2.4 billion people in 1950 to 6.07 billion in 2000.
The theory not only predicts that the Earth's net carrying capacity cannot support such a rate of growth, but that the population already exceeded that capacity after 1925. It thus projects an apocalyptic scenario in which population growth slows in 2012 due to sudden global economic decline and peaks in 2015 at around 6.9 billion (see critiques section), never again growing to those levels; around 2017 or so there would be as many deaths as births (1:1). Thereafter deaths would exceed births (>1:1) and the world population would contract dramatically: approximately 6.8 billion by the end of 2020, 6.5 billion by 2025, 5.26 billion by 2027, and 4.6 billion by 2030 (a reduction of 1.8 to 2 billion people in five years), until the number of humans stabilizes at between 500 million and 2 billion inhabitants at some point between 2050 and 2100.
Duncan compares his theory's forecast with that of Dennis Meadows in The Limits to Growth (1972). While Duncan expects a peak population of about 6.9 billion in 2015, Meadows expects a peak of about 7.47 billion in 2027. In addition, Duncan forecasts only 2 billion inhabitants by 2050, while Meadows estimates 6.45 billion.
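The annual rates of decline implied by these two forecasts can be back-calculated from the figures quoted above; the sketch below applies the standard compound-rate formula and is only a numeric check, not part of either author's published model.

```python
import math

def implied_annual_rate(p0, p1, years):
    """Compound annual rate that takes a population from p0 to p1 over `years` years."""
    return (p1 / p0) ** (1 / years) - 1

# Duncan: 6.9 billion at the 2015 peak -> 2 billion by 2050
duncan = implied_annual_rate(6.9e9, 2.0e9, 2050 - 2015)
# Meadows: 7.47 billion at the 2027 peak -> 6.45 billion by 2050
meadows = implied_annual_rate(7.47e9, 6.45e9, 2050 - 2027)

print(f"Duncan's path implies  {duncan:+.2%} per year")   # about -3.5% per year
print(f"Meadows's path implies {meadows:+.2%} per year")  # about -0.6% per year
```

The contrast is stark: Duncan's scenario requires a sustained population decline roughly five times steeper than Meadows's.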
Other estimates similar to the Olduvai theory predict that population will reach a zenith of between 7.1 and 8 billion inhabitants around 2025-2030, after which it will decrease at the same rate at which it grew before the zenith, describing a symmetric Gaussian bell curve.
Scholars such as Paul Chefurka point out that the Earth's carrying capacity will be defined by the level of damage caused to ecosystems during the industrial period (pollution, alteration and even depletion of ecosystems, highly polluting and long-lasting waste, and destruction of resources through competition for them), by the development of alternative technologies or oil substitutes, and by the survival of knowledge that would allow the remaining population to live sustainably (such as the recovery of traditional, pre-industrial ways of life).
Principle of attractiveness
The formulation of this basis, supported by Jay Forrester's work on the dynamics of complex social systems, proposes that per capita natural resources and the material standard of living are subordinate to the per capita energy yield of oil. The principle holds that attractiveness is the difference in material standard of living between nations. Thus the US material standard of living in 2005 was 57.7 barrels of oil equivalent (BOE) per capita while that of the rest of the world was 9.8 BOE per capita, a consumption difference of 47.9 BOE per capita. Put another way, the huge difference in lifestyle and consumption becomes attractive to immigrants.
New immigrants, upon arriving in that society, adopt the same consumerist lifestyle, further overloading the system. Duncan argues that the greater the immigration, the larger the population, and the more the attracting country's advantage in material standard of living diminishes, in an equalizing process that continues until that country reaches the world's material standard of living.
This proposition has been criticized in several parts of the world because, although Duncan insinuates that borders should be closed, he does not stop to consider that the main cause of resource depletion is the consumerist and predatory lifestyle of the attractive countries themselves (see critiques section).
Return to the use of coal as a primary source
The theory proposes that, due to the predominance of one nation, the rest of the world will follow the same sequence in adopting a resource as a primary source. It thus comparatively analyzes a chronology of resource utilization as a primary source between the United States and the rest of the world:
Utilization of biomass as a primary source.
In the United States until 1886.
In the rest of the world until 1900.
Use of coal as primary source.
In the United States from 1886 to 1951.
In the rest of the world from 1900 to 1963.
Use of oil as primary source.
In the United States from 1951 to 1986.
In the rest of the world from 1963 to 2005.
Return to the use of coal as primary source.
In the United States since 1986.
In the rest of the world from 2005.
According to Duncan, from 2000 to 2005 while world coal production increased by 4.8% per year, oil increased by just 1.6%.
The return to coal as a primary source, another taboo subject because of coal's high level of pollution, has been muted in the media for obvious political reasons, as has the carrying capacity of the Earth, Duncan says.
Energy consumption of the population
Just as the shift from oil to coal as a primary source in the U.S. signals global changes in advance, the level of per capita energy consumption and production in the U.S. over time also anticipates that of the rest of the world. Duncan thus distinguishes three stages in U.S. consumption that were subsequently reflected in world consumption:
Growth
1945-1970: U.S. growth stage; an average growth of 1.4% per year in per capita energy production is observed during the period.
1954-1979: World growth stage; an average growth of 2.8% per year in per capita energy production is observed during the period.
Stagnation
1970-1998: U.S. stagnation stage; an average decline of 0.6% per year in energy production per capita is observed during the period.
1979-2008: World stagnation stage; an average growth of 0.2% per year in energy production per capita is observed during the period, with an upturn after 2000 due to the growth of emerging economies.
Final decline or decay
1998 onwards: U.S. final decline stage; an average decline of 1.8% per year in energy production per capita is observed during 1998-2005.
2008-2012 onwards: Probable stage of final global decline. The development of emerging economies and the huge use of coal in China may delay this process until 2012.
Theory updates
2009 update
After criticism of the discrepancy between the United States per capita energy consumption curve, which tends to decrease, and the world curve, which rose extraordinarily after 2000, Duncan published a 2009 update of his theory comparing a curve for the 30 OECD members with one for the 165 non-OECD countries, which include Brazil, India, and China.
In this new paper on the various peaks of per capita energy consumption in the world, Duncan concludes the following:
1973: Peak per capita energy in the United States.
2005: Peak energy per capita in OECD countries at around 4.75 tonnes of oil equivalent (toe) per capita.
2008: After having increased from 2000 to 2007 the per capita consumption of non-OECD countries by 28%, the composite leading indicator of China, India and Brazil declined sharply in 2008, leading him to conclude that the average standard of living in non-OECD countries has already begun to fall. However, a February 2010 OECD report appears to contradict this claim (see critiques section).
2010: Most likely date of peak energy per capita globally.
In this new scenario, he forecasts that the average standard of living, measured as energy per capita, would fall by 90% in the United States between 2008 and 2030, by 86% in the OECD, and by 60% in non-OECD countries. The average OECD standard of living would converge with that of the rest of the world by 2030, at 3.53 barrels of oil equivalent per capita.
Societal scenarios according to the theory
Pedro A. Prieto, one of the Spanish-language specialists on the subject, has gone so far as to outline a probable scenario of societal collapse based on aspects of this theory.
Crisis of the Nation-State
Wealthy nations would suffer increased insecurity, and what had been democratic societies would become totalitarian and ultraconservative, with the population itself demanding outside resources and increased security. Before the great final die-off, large developed nations might fight over scarce resources in a sort of World War III, without ruling out scenarios similar to the Final Solution or nuclear war. Others argue that such a war, if it happened, would be an intercapitalist war involving three civilizational blocs: the first constituted by Western civilization, the second by Orthodox and Sinic civilization, and a third formed by Islamic civilization. Japan and India would play a major role in such a war as they defined their positions.
If some nations survived, lack of resources could trigger famines in large urban centers, forcing widespread looting, and governments would issue decrees and martial law restricting social freedoms and eliminating property rights to keep the starving population at bay. In the face of permanent shortages, governments would impose rationing that would fall short of the required minimums, leading the very forces imposing order to plunder for their own profit; this would be the first symptom of the fading of states.
In a major economic crisis, the value of fiat money could plummet, and people could end up in a situation where a necessity like a loaf of bread is worth as much as a luxury. The dominant minorities and military forces would plunder for themselves and form small dictatorships and kingdoms within what were once great nations. On the other hand, the "great masses of the disinherited" would form disorganized, highly unstable groups acting violently and chaotically to seize scarce resources. Between the two, conflict would be inevitable, and in the end both would succumb like the rest of the population.
Survivor's profile
It is estimated that cities with more than twenty thousand inhabitants would be very unstable. The best life expectancy would belong, in the first place, to hunter-gatherer societies in the Amazon, the Central African jungles and Southeast Asia, the Bushmen, and the Aboriginal peoples of Australia. In second place would be fairly homogeneous nuclei of three hundred to two thousand inhabitants with an agricultural lifestyle, close to uncontaminated water sources, inaccessible, and hundreds of kilometers away from the large cities, from the hordes of starving people those cities would pour out, and from the decaying military forces that would engage in looting.
In the end there could also be a huge number of small agricultural villages vying for the few privileged places, with only as many villages surviving as the land's carrying capacity would allow.
Other visions
Pedro A. Prieto himself speculates that war scenarios similar to World War III, or other destructive conflicts, would be less likely if the social collapse were rapid, as predicted by the Olduvai theory. The difference between scenarios is that in a rapid collapse the majority of the population, concentrated in the cities, dies of famine, while in a slow collapse war would spread to the safest areas, from large cities to small, isolated rural communities.
Conjectures about a post-industrial era spread across a spectrum ranging from scenarios of rapid, catastrophic social collapse to scenarios of slow, benevolent collapse, and even scenarios of degrowth with continued welfare.
Catastrophic collapse or die-off
The first group, the pessimists, is framed by Duncan's Olduvai theory itself and by other works such as the die-off or catastrophic collapse proposed by David Price, Reg Morrison and Jay Hanson. They usually invoke several determinisms, such as strong genetic and energetic determinism (Leslie A. White's Basic Law of Evolution), to announce an inevitable collapse leading to the decomposition of civilized life, ruling out the possibility of a peaceful decline.
Smooth decline or "prosperous way down"
Among those who predict slow, benevolent collapse scenarios, with the option of degrowth and a continued welfare state, are the "prosperous way down" of Elisabeth and Howard T. Odum, the end of suburbanization and return to ruralization proposed by James Kunstler, the societies that can still choose to save themselves or fail described by Jared Diamond, and Richard Heinberg's "powerdown" option.
Heinberg, in his book "Powerdown: Options and Actions in a Post-Carbon World", proposes four possible paths that nations could take in the face of oil and coal depletion:
"Last one and we're out" or "last one standing": Scenario where there is fierce global competition for the remaining resources.
"Gradual shutdown": Where there is global cooperation in reducing energy use, conservation, sound water management, and global population reduction.
"Denial": Posture in the hopes that some unforeseen element or serendipity will solve the problem (see also black swan theory).
"Life-saving community": Sustainably preparing local areas if the global economic project collapses.
The Renaissance of utopias
These are visions in which collapse is both an outcome and an objective. Just as romanticism and the utopian movements arose in the 19th century, at the beginning of the industrial era, the prediction of the industrial era's collapse has prompted a new flowering of utopian visions. This renaissance advances in the opposite direction to the decline of sociological theories, which can no longer provide adequate solutions given the situation of overshoot.
For Joseph Tainter, a collapsing complex society suddenly becomes smaller, simpler, less stratified, and less socially differentiated. This situation, according to Theodore Roszak, evokes the utopian dogma of the old environmentalist program: reduce, slow down, democratize, decentralize.
According to Ernest Garcia, many of these proponents are scientists working in areas ranging from ecology to geology, computer science, biochemistry, and evolutionary genetics, far removed from the study of the social sciences. Among the most visible recent utopian movements are anarcho-primitivism, deep ecology, and techno-utopias such as transhumanism.
Critiques and positions on the theory
Criticism of the basis of the argument
Criticism of the limit of carrying capacity and population explosion
This forecast also differs from a 2004 United Nations report estimating world population development from 1800 to 2300, whose worst-case scenario has world population peaking at 7.5 billion between 2035 and 2040, then falling to 7 billion by 2065, 6 billion by 2090 and approximately 5.5 billion by 2100.
A report issued in 2011 by the United Nations Population Division stated that the world's population would officially reach 7 billion on October 31, 2011, and a total population of 7.8 billion was estimated for 2019, both contradicting Duncan's estimate of around 6.9 billion living humans by 2015. Recent times have nevertheless seen a decline in population growth, albeit driven by the increasingly common decision to have fewer children or forgo parenthood for cultural and social reasons rather than by the deaths from famine and disease described in the theory. Because of these factors China abolished its one-child policy, and governments in several places around the world offer incentives to have children.
Criticism of the principle of attractiveness
Among critics who object to parts of the theory, those who point to its xenophobic and racist cultural biases, reflected most strongly in the principle of attractiveness, stand out. Pedro A. Prieto criticizes the proposal to close borders to immigrants but not to the entry of plundered resources that feed high US consumption. Nevertheless, he concludes that the theory's more general tenets, such as peak oil, the Earth's carrying capacity, and a return to coal as a primary source, are feasible to some degree.
Many of Richard C. Duncan's works have been published by the Social Contract Press, an American publishing house founded by John Tanton and directed by Wayne Lutton. The press advocates birth control and reduced immigration, and emphasizes issues such as culture and the environment from the point of view of the political right. Among its most controversial publications is the book The Camp of the Saints by French author Jean Raspail, which led the Southern Poverty Law Center to describe the publisher as a "hate group" that "publishes a series of racist works."
Critics of the peak oil estimate
Positions range from the claim that peak oil theory is a hoax, as argued by Lindsey Williams (2006), to those of governments, social organizations and private companies that predict the peak at dates ranging from two years before to forty years after the date proposed by Duncan, with very different behaviors of the production curve.
The abiogenic petroleum origin theory, proposed since the 19th century, holds that natural petroleum formed in deep carbon deposits, perhaps dating back to the formation of the Earth. Fossil fuel reserves would therefore be more plentiful than assumed, according to geophysicist Alexander Goncharov of the Carnegie Institution in Washington, who in 2009 simulated the conditions of the mantle with a diamond anvil cell and a laser, creating from methane other molecules such as ethane, propane, butane, molecular hydrogen and graphite. Goncharov says all estimates of the peak to date have been wrong, so belief in peak oil is unreliable, and asserts that oil companies could look for new abiotic deposits.
Criticism of the return to coal
Other observed data do not correspond with the prediction that coal replaced oil in 2005. According to reports such as the EDRO website, in 2006 oil still represented 35.27% of consumption while coal represented 28.02%, although the same page acknowledges the increasing use of coal relative to oil. Similarly, BP Global's energy graphing tool shows that in 2007 oil consumption decreased slightly from 3939.4 Mtoe to 3927.9 Mtoe, while coal consumption rose over the same period from 3194.5 Mtoe to 3303.7 Mtoe.
Critiques on per capita energy consumption
Duncan's articles assume that peak per capita energy was 11.15 BOE per capita per year in 1979, but other data from the U.S. Department of Energy's Energy Information Administration (EIA) show that the figure rose to 12.12 BOE per capita per year after 2004. This contradicts the theory's postulate that energy per capita did not grow exponentially between 1979 and 2008.
The website TheOilDrum.com argues that a true peak in per capita energy consumption, of around 12.50 BOE per capita per year, was observed between 2004 and 2005, based on data from the United Nations, British Petroleum and the International Energy Agency. Its authors note that Duncan relied primarily on per capita oil consumption, with notable omissions of the growth in per capita coal consumption since 2000, attributed to the rise of emerging Asia, and of the uninterrupted growth of natural gas since 1965.
They place the civilizational peak not in 1979 but at a date after 2004, with industrial civilization lasting from 1950 to 2044. They add that if other resources do not simply track oil consumption, the duration of civilization will probably be much longer than a hundred years.
After the reliability of the postulate that the rest of the world follows the United States in per capita energy consumption dynamics was challenged, Duncan published a new article in 2009, "Olduvai's Theory: Towards the Re-Equalization of the World Standard of Living", comparing world per capita consumption with that of the most developed (OECD) countries. In that article, based on a March 2009 OECD report of the composite leading indicator for China, India, and Brazil, he claims that world per capita energy consumption would begin to decline; however, a new OECD composite leading indicator report in February 2010 showed a strong recovery, contradicting Duncan's assertion.
Political and ideological criticisms
Ecologist criticism
Social ecologists and international associations such as Greenpeace are more optimistic, pinning their hopes on the alternative energies that neo-Malthusians dismiss, such as geothermal, solar, wind and other low- or non-polluting sources, though they reject fusion energy as potentially polluting. They argue that figures such as population growth are projected without taking into account the scenarios opened up by a large number of social and technological changes, such as alternative energies and radical lifestyle changes, which could reduce the effects the theory predicts. Market ecologists, in contrast, claim that such changes will be forced on consumers through the laws of supply and demand.
Meanwhile, anarcho-primitivists and deep ecologists see this catastrophist scenario as a painful path toward which civilization is heading. They tend to see civilizational collapse as an inevitable outcome as much as a goal to be reached.
Left-wing criticism
Some libertarians, anarchists, and socialists think that theories of this type are lies or exaggerations that benefit economic speculation, serving to sell depleted or scarce resources at higher prices and under easier control, so as to perpetuate the free-market game and the ruling classes.
Jacque Fresco argues not only that current energy sources are inadequate, but that there are other, very abundant energy sources that social elites could not easily control because they cannot be speculated on, since their reserves would be virtually inexhaustible for no less than 4,000 years at the current rate of consumption, counting geothermal energy alone.
He also created The Venus Project in stated opposition to the current capitalist economic model based on monetary gain.
Some time ago a broad movement arose on the web to scrutinize the movement and, above all, the figure of Jacque Fresco himself; from its results, a possible fraud in Fresco's claims can be inferred.
Meanwhile, authors such as Peter Lindemann or Jeane Manning add that there are a number of alternatives for obtaining and distributing energy freely which, if employed, would end the capitalist model of hoarding procurement and distribution. This has led them to formulate a conspiracy theory of the suppression of free energy. Prominent among such forms of free energy distribution is the wireless power transfer devised by Nikola Tesla.
In turn, the authors of such conspiracy arguments see the formulations of peak oil, warmongering, catastrophism, and neo-Malthusianism as an agenda of the elites.
Right-wing criticism
Cornucopians are libertarians who argue that claims about population growth, resource scarcity, and pollution, such as peak oil or the devastating environmental effect of coal, are exaggerations or lies. They argue that the laws of the market would solve such problems if they were real.
The main theses defended by cornucopians are usually optimistic and pragmatic, though others consider them conservative, moralistic, and exclusionary. These theses consist of the following points:
Technological progress equals environmental progress. Environmental deterioration is minimized as technologies appear that use resources cleanly and efficiently.
Anti-environmentalism. They criticize catastrophist positions, such as Olduvai's theory, for being based on inadequate models that produce precarious scenarios that do not portray economic dynamics in their historical perspective. They reject the idea of degrowth because it goes against technological and, in turn, environmental progress.
Technological optimism. Technological progress continually invents energy substitutes before a resource is exhausted. In this way, humans since the Neolithic have continually exceeded the Earth's carrying capacity by moving from one technology or energy source to another. The availability and efficiency of land for food production also increase with new technologies such as better agrochemicals, pesticides and genetic manipulation.
Growth is green. Economic growth solves all problems, i.e., it is poverty and not wealth that degrades and misuses the environment.
Reliance on the free market. The creation of new forms of ownership and new markets exerts pressures to switch from one technology or energy source to another through the use of economic speculation. For this reason, Cornucopians do not approve of State intervention.
Abolition of birth control. They argue that for every new mouth that demands resources, a brain and a pair of hands are also born, contributing to technological progress. In other words, contrary to what neo-Malthusians think, population is seen as a resource that, far from causing problems, solves them.
Defense of the anthropocentric aesthetic value of resources rather than their future value.
Criticism and national positions
Conservatives, traditionalists and nationalists focus their positions only on short-term benefit from an ethnocentric or anthropocentric point of view, without accounting for adverse environmental effects. They do not usually deny peak oil or the Olduvai theory outright, but tend to omit some or all of its points as a form of institutional denial. The theory itself, in fact, predicts that most countries in the world will take this line and move from oil to coal or nuclear power, like the United States or China, without regard for the social or ecological consequences.
An argument in favor of the positions of various countries, especially China and the United States, is that while there is a shift from oil to coal, coal is beginning to be used in a less polluting way through integrated gasification combined cycle, although its energy return rate may be lower than with polluting methods.
Another argument in favor is the cooperation of China, India, Japan, the United States and Europe in the ITER project to demonstrate the scientific and technological feasibility of nuclear fusion, although participation by some countries has been intermittent.
If fusion energy were possible, the energy potential of the deuterium contained in all the planet's seas, rivers and lakes would be equivalent to approximately 1.068 × 10^9 times the world's oil reserves in 2009; that is, each cubic metre of water on Earth would be equivalent in energy content to 150 tonnes of oil.
At the world "consumption rate" of 2007 this would equate to an approximate duration of 17.5 billion years of modern industrial civilization before this resource could be exhausted assuming a constant population of 6.5 billion people not growing and no economic growth. In reality the current system is based on economic, productive, demographic, material, or energy growth, and this growth rate is usually measured on an annualized basis. For example, at a growth rate of 2% per year the energy consumption of oil would be doubling every 34.65 years and, at the end of 1220 years, as much energy would be consumed as is available in all the seas in the form of deuterium to perform nuclear fusion. At a growth rate of 5% per year all the deuterium would be used up in 488 years, and at a growth rate of 11.4% per year in only 214 years.
Some positions, and several developed countries, have opted for the non-anthropogenic or solar-origin version of global warming, seeing environmentalist warnings as an exaggeration. Other countries, those of the Third World, see depletion theories and international environmental agreements as measures imposed by First World countries to curb their development.
See also
Malthusian catastrophe
Climate change
Doomsday argument
Societal collapse
Notes
References
Bibliography
Futurism
Peak oil
World population
Energy consumption
Energy sources
Creolization
Creolization is the process through which creole languages and cultures emerge. The term was first used by linguists to explain how contact languages become creole languages, but scholars in other social sciences now use it to describe new cultural expressions brought about by contact between societies and relocated peoples. Creolization is traditionally associated with the Caribbean, although it is not exclusive to it, and some scholars use the term for other diasporas. Sociologist Robin Cohen writes that creolization occurs when "participants select particular elements from incoming or inherited cultures, endow these with meanings different from those they possessed in the original cultures, and then creatively merge these to create new varieties that supersede the prior forms."
Beginning
According to Charles Stewart, the concept of creolization originated in the 16th century, although there is no record of when the word itself first appeared. Creolization was originally understood as a distinction between individuals born in the "Old World" and those born in the New World. As a consequence of slavery and the power relations between races, creolization became synonymous with Creole, a term often used to distinguish master from slave. The word Creole was also used to distinguish Afro-descendants born in the New World from African-born slaves. The word creolization has evolved to carry different meanings at different times in history.
What has not changed over time is the context in which Creole has been used. It has been associated with cultural mixtures of African, European, and indigenous ancestry (in addition to other lineages in different locations), for example in the Caribbean. Creole has pertained to "African-diasporic geographical and historical specificity". With globalization, creolization has undergone a "remapping of world regions", or as Orlando Patterson would explain, "the creation of wholly new cultural forms in the transnational space, such as 'New Yorican' and Miami Spanish". Today, creolization refers to this mixture of different peoples and cultures that merge to become one.
Diaspora
Creolization as a relational process can enable new forms of identity formation and communal enrichment through peaceful intermixture and aggregation, but its uneven dynamics remain a factor to consider, whether in the context of colonization or of globalization. The meeting points and intersections of multiple diasporas are sites of new creolizations. These new sites continue the ongoing ethic of sharing the world, which has become a global discourse rooted in the English and French Caribbean. The cultural fusion and hybridization of new diasporas surfaces and creates new forms of creolization.
Culture
Different processes of creolization have shaped and reshaped the different forms of a culture. For example, food, music, and religion have all been affected by the creolization of today's world.
Food
Creolization has affected the elements and traditions of food. The blend of cooking that mixes African and French elements in the American South, particularly in Louisiana, and in the French Caribbean was shaped by creolization. This mixture led to the unique combination of cultures known as creole cooking. These creations of different flavors pertain to specific territories influenced by different histories and experiences. The Caribbean was colonized by a multitude of countries, which influenced the creation of new recipes and the adoption of new cooking methods. Creole cooking draws heavily on French and Spanish influences, owing to their colonization from the 1600s through the mid-to-late 1900s, as well as on African roots and the cooking methods of various Native American tribes.
Music
To some degree, most forms of music considered "popular" emerged from the oppression or enslavement of a people. This cross-fertilization triggers a cultural blending that creates a completely new form of its own out of the turmoil and conflict between the dominating and dominated cultures. One such form is jazz music. Art music created by African diaspora composers frequently exhibits this as well.
Jazz took its roots from the dialogue between black folk music in the U.S., derived from plantations and rural areas, and black music based in urban New Orleans. Jazz developed from creole music, which combines blues, parlour music, opera, and spiritual music.
Religion
The popular religions of Haiti, Cuba, Trinidad, and Brazil formed from the mixing of African and European elements. Catholicism came with the European colonization of the Caribbean and heavily influenced the religious practices already in place. Religious traditions such as Vaudou in Haiti, Santeria in Cuba, Shango in Trinidad, and Candomblé in Brazil have their roots in creolization. These new religious expressions have been sustained and have evolved over time into creole religions. A related concept is "cultural additivity".
See also
Creole nationalism
Creole language
Hybridity
Creole peoples
Créolité
Creole languages
Creole cuisine
References
Creole peoples
Cultural assimilation
Eastern world
The Eastern world, also known as the East or historically the Orient, is an umbrella term for various cultures or social structures, nations and philosophical systems, which vary depending on the context. It most often includes Asia, the Mediterranean region and the Arab world, specifically in historical (pre-modern) contexts, and in modern times in the context of Orientalism. The Eastern world is often seen as a counterpart to the Western world.
The various regions included in the term are varied, hard to generalize, and do not have a single shared common heritage. Although the various parts of the Eastern world share many common threads, most notably being in the "Global South", they have never historically defined themselves collectively. The term originally had a literal geographic meaning, referring to the eastern part of the Old World, contrasting the cultures and civilizations of Asia with those of Europe (or the Western world). Traditionally, this includes East Asia, Southeast Asia, South Asia, Central Asia and West Asia.
Conceptually, the boundary between East and West is cultural rather than geographical. As a result, Australia and New Zealand, which were founded as British settler colonies, are typically grouped with the Western world despite being geographically closer to the Eastern world, while the Central Asian nations of the former Soviet Union, despite significant Western influence, are grouped with the East. Outside of much of Asia and Africa, Europe has, through settler colonization, absorbed almost all the societies of North Asia, the Americas, and Oceania into the Western world.
Countries such as the Philippines, though geographically located in the Eastern world, may be considered Western in some aspects of their society, culture, and politics due to immigration and historical cultural influences from the United States and Europe.
Overview
As with other regions of the world, Asia comprises many different, extremely diverse countries, ethnic groups, and cultures. The concept is further complicated because in some English-speaking countries the common vernacular associates "Asian" identity with people of East Asian and Southeast Asian origin, while in others it is associated with people of South Asian origin, and in still other contexts regions such as the Indian subcontinent are grouped with East Asia. West Asia (which includes Israel, part of the Arab world, Iran, and others), whose peoples may or may not see themselves as part of the Eastern world, is sometimes considered "Middle Eastern" and separate from Asia.
The division between 'East' and 'West', formerly referred to as Orient and Occident, is a product of European cultural history and of the distinction between Christian Europe and the cultures beyond it to the east. With the European colonization of the Americas, the East–West dichotomy became global. The concept of an Eastern, "Indian" (Indies) or "Oriental" sphere was reinforced by ideas of racial as well as religious and cultural difference. Such distinctions were articulated by Westerners in the scholarly tradition known as Orientalism, notable as a Western conception of a unified Eastern world not limited to any specific region but encompassing all of Asia together.
Culture
While the Eastern world has no single culture, it contains subgroups, such as the countries of East Asia, Southeast Asia, or South Asia, as well as syncretism within these regions. Shared elements include the spread of Eastern religions such as Buddhism and Hinduism, the use of Chinese characters or Brahmic scripts, language families, and the fusion of cuisines and traditions, among others.
See also
Arab world
Asia-Pacific
Buddhism by country
Christendom
Continental union
East Asian cultural sphere
East–West dichotomy
Far East
Globalization
Global East
Global North and Global South
Greater India
Greater Iran
Greater Middle East
Hinduism by country
Muslim world
Near East
Western world
Westernization
Asia-Pacific
Country classifications
Cultural concepts
Cultural regions
Eurocentrism
Eastern culture
Chronospecies
A chronospecies is a species derived from a sequential pattern of development that involves continual and uniform change from an extinct ancestral form on an evolutionary scale. The sequence of alterations eventually produces a population that is physically, morphologically, and/or genetically distinct from the original ancestors. Throughout the change, there is only one species in the lineage at any point in time, as opposed to cases where divergent evolution produces contemporary species with a common ancestor. The related term paleospecies (or palaeospecies) denotes an extinct species identified only from fossil material. That identification relies on distinct similarities between the earlier fossil specimens and some proposed descendant, although the exact relationship to the later species is not always defined. In particular, the range of variation within the early fossil specimens does not exceed the range observed in the later species.
A paleosubspecies (or palaeosubspecies) identifies an extinct subspecies that evolved into a currently existing form. The connection with relatively recent variants, usually from the Late Pleistocene, often relies on the additional information available in subfossil material. Most current species changed in size, and thereby adapted, to the climatic changes of the last ice age (see Bergmann's rule).
The further identification of fossil specimens as part of a chronospecies relies on additional similarities that more strongly indicate a specific relationship with a known species. For example, relatively recent specimens, hundreds of thousands to a few million years old, that vary consistently from a living species (such as being always smaller but with the same proportions) might represent the final step in a chronospecies. Identifying the immediate ancestor of the living taxon may also rely on stratigraphic information to establish the age of the specimens.
The concept of chronospecies is related to the phyletic gradualism model of evolution. It also relies on an extensive fossil record, since morphological changes accumulate over time and two very different organisms could be connected only by a series of intermediaries.
Examples
Bison (several paleospecies and -subspecies)
Marine sloths (paleospecies)
Coragyps (chronospecies)
Gymnogyps (paleospecies)
Panthera (numerous chrono- and paleospecies and -subspecies)
Valdiviathyris (no visible change since the Priabonian, 35 million years ago)
See also
Orthogenesis
Further reading
Evolutionary species vs. chronospecies from Dr. Steven M. Carr, Memorial University of Newfoundland biology department
Stanley, S. M. (1978). "Chronospecies' longevities, the origin of genera, and the punctuational model of evolution". Paleobiology 4: 26–40.
Evolutionary biology
Biostratigraphy
Phylogenetics